HLRN Brings Advanced Performance to HPC

HLRN chose Intel® Xeon® Platinum 9200 processors to meet their increasingly diverse needs for HPC workloads.

Executive Summary
HLRN supercomputers are used by over 100 universities and over 120 research institutions, enabling exploration of the many frontiers of scientific research to help unlock a better future. The selection of Intel’s latest processor technology to power the newest HLRN supercomputer came after detailed testing to find the best solution. Prof. Dr. Ramin Yahyapour of Göttingen University explains: “The expectation for HLRN’s supercomputer acquisition was to have a significant step up in computer power for new experiments.”

“Science in general is getting more compute and data intensive. This means that having larger systems available translates into an ability for the scientists to do better work. That’s why HLRN is crucial for scientific research,” says Prof. Dr. Ramin Yahyapour.

HLRN lays claim to being a very demanding client, with substantial expertise from its prior deployments of three supercomputer systems. Prof. Alexander Reinefeld from Zuse Institute Berlin emphasizes: “We are expecting the highest performance for all benchmark applications. Our benchmark suite was carefully chosen so that each code challenges specific parts of the system: CPU, communication network, and parallel I/O. We are not looking for peak theoretical performance—we demand real system performance, which makes it more complicated for vendors to optimize their infrastructure for our applications. That meant that our selection of the right processor and the right interconnect are all crucial for the overall performance.”

As with most research today, the need for more real-world compute capacity stems from the reality that simulations of many kinds are critical to the researchers. Faster computers are primarily used to increase simulations in size and resolution—with the expectation of new discoveries.

“We demand real system performance… that meant that our selection of the right processor and the right interconnect are all crucial for the overall performance.” — Prof. Reinefeld

HLRN procured a new supercomputer with just under a quarter of a million cores. The Intel® Xeon® Platinum 9200 processors (from the 2nd Generation Intel® Xeon® Scalable processor family) will serve as the “right processors” for HLRN. For the “right interconnect,” HLRN chose Intel® Omni-Path Architecture (Intel® OPA). The system is produced by Atos (formerly Bull Computing) and will be physically split between the Zuse Institute Berlin (ZIB) and the Georg-August-Universität Göttingen (University of Göttingen). These sites have previously used this split-system model and already have in place a dedicated, redundant 10-gigabit fiber-optic link spanning the more than 170 miles between Berlin and Göttingen.

Researchers at ZIB will use HLRN-IV for fluid dynamics, including developing turbulence models for aircraft wings.

HLRN has announced that the new system, HLRN-IV, will be approximately six times as fast as the prior systems—offering 16 PetaFLOP/s performance.1 The excitement among researchers is palpable, and the list of research being done is mind-boggling. Prof. Reinefeld summed up his excitement, saying, “It’s a great system. Our users will benefit right away from the more powerful system without needing to change their code. The homogeneous architecture of the 2nd Gen Intel® Xeon® Scalable processors will provide true performance portability, which is a crucial aspect for our researchers in order to quickly benefit from the new, more powerful system.”

Key research areas within HLRN include:

  • Earth System Sciences - Includes work on climate change. Subjects include the dynamics of oceans, rain forests, glaciers, Antarctic phytoplankton (microalgae), mineral dust cycles, and the stratosphere.
  • Fluid Dynamics - Includes turbulence models for ship turbines, wind turbines, and aircraft wings. These models are notorious for needing enormous compute power; the acquisition of HLRN-IV will enable finer-grained turbulence simulations of large systems such as wind flow through a city or across a turbine blade. Modeling complete cities will allow studies of how new buildings would change wind flow and other factors that shape microclimates within the city, which may lead to new design approaches that enhance city life. Some researchers hope to gain understanding that will pave the way for future high-lift commercial aircraft. Others hope to save lives and ships by studying liquefaction of solid bulk cargo (such as iron ore or nickel ore); failure to properly manage this hazard has led to the complete loss of at least seven vessels around the world in the past decade.
  • Healthcare - A broad area of research in which HLRN researchers hope to help in many ways, including improving medical care at home. Gaining a better understanding of illness and the treatment of diseases stands to impact us all. Research includes simulations of drug efficacy, interactions, and side effects. Enormous compute power allows leading researchers in these fields to start exploring the “personal medicine” aspects of these simulations, not just the average effects on a general population.

At the University of Göttingen, research areas include collaborative projects on cellular and molecular machines.

High Performance Across Diverse Research
HLRN has to support all types of workloads across its many research communities. Therefore, HLRN systems need the characteristics of a general-purpose system while still delivering the highest performance. Their final choice had no accelerators.

“Although we looked at accelerators, including GPUs, as part of the procurement process, there was no advantage with regards to obtaining the highest performance in using GPUs or other accelerators in the system.”— Dr. Thomas Steinke, Head of ZIB Supercomputing

HLRN’s benchmarks are open and include benchmarks that can take advantage of GPUs. HLRN found that any performance advantage on some workloads was insufficient when weighed against the reduction in general-purpose compute capacity and the additional costs involved. A homogeneous system based on the 2nd Gen Intel® Xeon® Scalable processors proved itself to be the best choice for the diverse needs of the HLRN scientists and researchers.

Beating Back Amdahl’s Law
Ever mindful of Amdahl’s Law, Dr. Thomas Steinke is fond of emphasizing the use of fast algorithms for fast computers. He shared, “The pressure of optimizing code for scaling on a node is less because of the high real-world performance of the 2nd Gen Intel® Xeon® Scalable processors compared to previous many-core architectures.”

The 2nd Gen Intel® Xeon® Scalable processor family offers an outstanding choice for high performance computing (HPC) and helps programmers cope with Amdahl’s Law.
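Amdahl’s Law quantifies why serial code sections limit parallel speedup, and why faster per-core performance eases the pressure Dr. Steinke describes. A small illustrative sketch (the 95% parallel fraction and core counts are hypothetical values, not HLRN measurements):

```python
# Amdahl's Law: the speedup of a workload on n cores is bounded by its
# serial fraction. A hypothetical illustration, not HLRN benchmark data.

def amdahl_speedup(p: float, n: int) -> float:
    """Maximum speedup on n cores when fraction p of the work parallelizes."""
    return 1.0 / ((1.0 - p) + p / n)

# Even with 95% of a code parallelized, speedup saturates near 20x,
# which is why fast individual cores still matter at scale:
for n in (16, 256, 4096):
    print(f"{n:5d} cores -> {amdahl_speedup(0.95, n):6.2f}x")
```

The saturation behavior shows the trade-off: past a few hundred cores, extra parallelism yields diminishing returns, while a faster processor speeds up both the serial and parallel fractions.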

“Our users will benefit right away from the more powerful system without needing to change their code.”— Prof. Reinefeld

Future of AI in HPC
AI and Machine Learning stand to impact all areas of HLRN research. A hot area of interest is the blending of machine learning and AI techniques with traditional simulation capabilities. While promising results have been reported, there is much work to be done. The exploration of algorithms is likely to take researchers in many directions, and this need for flexibility is one reason HLRN chose 2nd Gen Intel® Xeon® Scalable processors to support their next generation of research.

Avoid Data Movement
Prof. Yahyapour emphasized, “The CPU is quite good for artificial intelligence and machine learning. That’s an area where we see more need from our researchers. Traditionally they were not so much into data-intensive work, but that’s something we see as a new trend for the new system that will also be of particular interest.”

Intel® Advanced Vector Extensions 512 (Intel® AVX-512) proved to be the logical choice to help increase HLRN’s compute power, and with the addition of Intel® Deep Learning Boost (Intel® DL Boost) to augment AVX-512, offered outstanding performance for the new frontier of HPC applications.
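The core pattern Intel® DL Boost accelerates is the low-precision multiply-accumulate at the heart of quantized neural-network inference: its AVX-512 VNNI instructions fuse the multiply and add of four 8-bit pairs per 32-bit lane into a single instruction. A minimal pure-Python sketch of that per-lane pattern (illustrative only; the function name and values are hypothetical, and real code would use intrinsics or a library such as oneDNN):

```python
# Sketch of the int8 multiply-accumulate that AVX-512 VNNI performs per
# 32-bit lane in one instruction (vpdpbusd): four unsigned-int8 values times
# four signed-int8 values, summed into a 32-bit accumulator.
# Pure-Python illustration; names and values are hypothetical.

def vnni_dot_accumulate(acc: int, a: list, b: list) -> int:
    """Accumulate the dot product of four u8 and four s8 operands."""
    assert len(a) == len(b) == 4
    assert all(0 <= x <= 255 for x in a)      # unsigned 8-bit operands
    assert all(-128 <= x <= 127 for x in b)   # signed 8-bit operands
    return acc + sum(x * y for x, y in zip(a, b))

# One lane's worth of a quantized dot product:
acc = vnni_dot_accumulate(0, [1, 2, 3, 4], [10, -1, 2, 5])
print(acc)  # 1*10 + 2*(-1) + 3*2 + 4*5 = 34
```

Doing this in one instruction, across all sixteen lanes of a 512-bit register, is what lets int8 inference run several times faster than the equivalent float32 computation on the same cores.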

The ability to compute on data where it resides, for all types of algorithms, saves data movement. That means more usable compute capacity and less wasted energy. A double win!

When exploring new algorithms, and new application techniques, nothing is more important than the flexibility of a system. The 2nd Gen Intel® Xeon® Scalable processor delivers high performance coupled with the flexibility needed to meet future challenges.

Explore Related Intel® Products

Intel® Xeon® Scalable Processors

Drive actionable insight, count on hardware-based security, and deploy dynamic service delivery with Intel® Xeon® Scalable processors.


Intel® Deep Learning Boost (Intel® DL Boost)

Intel® Xeon® Scalable processors take embedded AI performance to the next level with Intel® Deep Learning Boost (Intel® DL Boost).


Intel® Omni-Path Architecture (Intel® OPA)

Intel® Omni-Path Architecture (Intel® OPA) lowers system TCO while providing reliability, high performance, and extreme scalability.


Notices and Disclaimers

Availability of Intel® technology features and benefits depends on system configuration and may require enabled hardware, software, or service activation. Performance varies depending on system configuration. No computer system can be absolutely secure. Check with your system manufacturer or retailer; details are also available at https://www.intel.ru. // Software and workloads used in performance tests may have been optimized for performance only on Intel® microprocessors. Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations, and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests, including the performance of this product when combined with other products, to assist you in fully evaluating your contemplated purchases. For details, visit https://www.intel.ru/benchmarks. // Performance results are based on testing as of the dates shown in the configurations and may not reflect all publicly available security updates. See the configuration disclosure for details. No product or component can be absolutely secure. // Cost-reduction scenarios described are intended as examples of how a given Intel®-based product, in the specified circumstances and configurations, may affect future costs and provide cost savings. Circumstances will vary. Intel does not guarantee any costs or cost reduction. // Intel does not control or audit third-party benchmark data or the web sites referenced in this document. You should visit the referenced web site to confirm whether the referenced data is accurate. // Some results have been estimated using internal Intel analysis or architecture simulation or modeling and are provided for informational purposes only. Actual performance may vary based on changes to your system's hardware or software configuration.

Product and Performance Information


The previous system, HLRN-III, consists of two complexes, hosted at ZIB in Berlin and at Leibniz Universität IT Services (LUIS) in Hannover, combined with a dedicated 10GigE fiber-optic link for HLRN to provide a so-called single-system view. It was deployed in two phases. The first phase comprised two Cray XC30 computers, each consisting of 744 compute nodes, with 1,488 two-socket Intel® Xeon® E5-2695 v2 processors and 93 TB of total main memory, connected by Cray's fast Aries network in a Dragonfly topology. The second phase added 2,064 Intel® Xeon® E5-2680 v3 compute nodes, bringing the system to 85,248 compute cores, with 1,872 compute nodes in Berlin and 1,680 compute nodes in Hannover. As a result, peak performance reached 2.7 PetaFLOP/s, and main memory was increased to 222 TB.