Sometimes The Road To Petaflops Is Paved With Gold And Platinum

Supercomputing, with a few exceptions, is a shared resource that is allocated to users in a particular field or geography to run their simulations and models on systems that are much larger than they might otherwise be able to buy on their own. Call it a conservation of core-hour-dollars that allows a faster time to model in exchange for limited access.

So it is with the Norddeutscher Verbund für Hoch- und Höchstleistungsrechnen (HLRN) supercomputing alliance in Northern Germany. The HLRN consortium, which provides calculating oomph for the German federal states of Berlin, Brandenburg, Bremen, Hamburg, Mecklenburg-Western Pomerania, Lower Saxony, and Schleswig-Holstein, has used a variety of architectures from different vendors over the past several decades, and as such is representative of mainstream HPC shops that, as we pointed out recently, comprise the majority of the revenue stream in the HPC sector and account for thousands of HPC facilities worldwide. HLRN in particular has a very large number of university and research institution users, at close to 200, all jockeying for time on the system, so adding capacity makes the lines a bit shorter, at least in theory.

The second phase of the HLRN-IV supercomputer, known by the nickname “Lise” after Lise Meitner, an Austrian-Swedish physicist who was one of the discoverers of nuclear fission in 1939, has fired up recently, and the machine is noteworthy for a few reasons. First, Atos is the prime contractor on the machine, and second, it is based on the doubled-up “Cascade Lake-AP” Xeon SP-9200 Platinum processors that Intel launched last April and that are employed in custom enclosures that Intel itself manufactures.

Since its founding in 2001, the HLRN consortium has operated a distributed system across two datacenters; one is usually at the Zuse Institute Berlin and the other has been located at Leibniz University in Hannover or at the University of Gottingen. The initial HLRN-I system, which was called “Hanni” and “Berni” across its two halves, comprised a 16-node cluster on each side of IBM’s RS/6000 p690 servers based on its dual-core Power4 processors, which debuted that year. The p690 machines had 32 cores and 64 GB of main memory each and were connected by a proprietary federation interconnect that IBM created for its parallel NUMA systems. This HLRN-I machine had 26 TB of disk capacity and a peak performance of 2 teraflops at 64-bit double precision. You can get a graphics card with way more floating point performance these days, and it fits in your hand instead of taking up two datacenters.

In 2008, these systems were upgraded with a pair of Altix ICE supercomputers from Silicon Graphics in Berlin and Hannover, called “Bice” and “Hice” naturally. This system had a mix of NUMA and scale-out nodes. The NUMA portion comprised a mix of two-socket Altix XE 250 nodes and two-socket Altix UV 1000 nodes using a mix of Xeon processors from Intel (four-core and eight-core chips with fatter memory) and the NUMAlink5 interconnect to share 12.5 TB of main memory across the 2,816 cores in the 200 nodes of the machine. The regular, scale-out part of each side of the HLRN-II system had a mix of two generations of Xeon processors across its 10,240 cores in 1,280 nodes and a total of 12.1 TB of main memory. Add it all up and the HLRN-II machine had 124.76 teraflops of double precision floating point calculating capacity; this was balanced out by an 810 TB Lustre parallel file system.

Enter HLRN-III in 2013, which we wrote about five years later. This machine, which cost $39 million, was built in phases like prior systems using a mix of processor generations, in this case by Cray based on its “Cascades” XC30 and XC40 system designs and their “Aries” interconnect. The HLRN-III systems were nicknamed “Konrad” and “Gottfried” and each used a mix of “Ivy Bridge” and “Haswell” Xeon processors. The Berlin system had a total of 1,872 nodes with 44,928 cores and 117 TB of memory, yielding a peak performance of 1.4 petaflops, while the system at Leibniz University Hannover (which is where the Gottfried name comes from, after Gottfried Wilhelm Leibniz, the mathematician and co-creator of calculus) had a total of 1.24 petaflops of oomph and 105 TB of memory across its 1,680 nodes and 40,320 cores. Each machine had a 3.7 PB Lustre file system and a 500 TB GPFS file system.

With the HLRN-IV system, the two halves are not just a little bit different, but really distinct systems that were installed at different times. The “Emmy” system at the University of Gottingen, which was operational in October 2018, was named after groundbreaking German mathematician Amalie Emmy Noether, who blazed a trail for women in that field as much as Meitner did in physics. The Emmy system at Gottingen had 449 nodes, with 448 of them having just “Skylake” Xeon SP-6148 Gold processors and one of them having four “Volta” Tesla V100 GPU accelerators from Nvidia added. Not counting that GPU-accelerated node, Emmy had 17,920 cores across its 448 nodes and 93 TB of memory. These nodes were interlinked with a 100 Gb/sec Omni-Path interconnect from Intel, and its performance was never divulged. Presumably Emmy will be upgraded at some point to contribute its share of the expected 16 petaflops of aggregate HLRN-IV performance.

The Lise half of the system in Berlin, which is just coming online, has significantly more computational power than that initial Emmy partition in Gottingen. This system currently has 1,180 nodes with 113,280 cores in total using a pair of the Xeon AP-9242 Platinum chips per node, which themselves put two 24-core Cascade Lake processors into a single socket for a total of four chips and 96 cores per node. These nodes are also interlinked with the 100 Gb/sec Omni-Path interconnect. This machine is noteworthy in that it is showcasing Intel’s multichip Cascade Lake-AP processors, which have not really blunted the attack by the AMD Epyc processors and which are not exactly taking the HPC market by storm. (We suspect HLRN got a great deal on these Intel Cascade Lake-AP chips and the servers that sport them, with Atos as the system integrator hopefully making some dough.) Back in November 2019, when the Lise system was tested with 103,680 of its cores on the Linpack benchmark, it was rated at 5.36 petaflops, so there must be some pretty big upgrades on the horizon to get to the 16 petaflops and more than 200,000 cores that the final HLRN-IV system (Emmy plus Lise) will eventually encompass. The completed system, with all of those 16 petaflops spread across the Berlin and Gottingen sites, will cost €30 million, or about $32.6 million.
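For what it is worth, the gap between that Linpack result and theoretical peak is easy to sketch. The check below assumes a 2.3 GHz base clock and 32 double precision flops per cycle per core for the Cascade Lake-AP parts; those are our assumptions, not HLRN disclosures, and AVX-512 clocks typically run below base in practice, so treat the efficiency figure as rough:

```python
# Back-of-the-envelope check on the November 2019 Lise Linpack run.
# Clock and flops-per-cycle are assumed, not disclosed by HLRN.

CORES_TESTED = 103_680    # cores used in the Linpack run
CLOCK_GHZ = 2.3           # assumed base clock of the Xeon AP-9242
FLOPS_PER_CYCLE = 32      # two AVX-512 FMA units, double precision
RMAX_PFLOPS = 5.36        # measured Linpack result

# cores * GHz gives aggregate gigacycles/sec; * flops/cycle gives
# gigaflops; divide by 1e6 to land on petaflops.
rpeak_pflops = CORES_TESTED * CLOCK_GHZ * FLOPS_PER_CYCLE / 1e6
efficiency = RMAX_PFLOPS / rpeak_pflops

print(f"Theoretical peak: {rpeak_pflops:.2f} petaflops")
print(f"Linpack efficiency: {efficiency:.0%}")
```

Under those assumptions, peak comes out around 7.6 petaflops and the run lands at roughly 70 percent of it, which is in the usual range for an all-CPU Linpack result.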

The interesting bit as far as we are concerned is that the combined HLRN-IV system will have 6.2X more double precision performance at 16.4 percent lower cost than the HLRN-III system it replaces seven years on. This illustrates the principle that we have talked about before, which is that it is far easier to increase the performance of a supercomputer than it is to lower its price. HPC centers have tended to budget linearly over the decades, but it is getting more expensive to make the flops leaps. Still, a 7.4X improvement in bang for the buck over seven years can get a deal done.
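The arithmetic behind those ratios can be reproduced from the figures cited in this article, assuming the final HLRN-IV build-out lands at 16.4 petaflops rather than an even 16 (the 6.2X figure only works out with the slightly higher number, so we take that as the implied target):

```python
# Sanity check on the generation-over-generation math, using the costs
# and peak flops cited in this article. The 16.4 petaflops figure for
# the final HLRN-IV build-out is our assumption.

hlrn3_pflops = 1.4 + 1.24     # Konrad plus Gottfried peak
hlrn3_cost_musd = 39.0
hlrn4_pflops = 16.4
hlrn4_cost_musd = 32.6

perf_ratio = hlrn4_pflops / hlrn3_pflops               # ~6.2X the flops
cost_savings = 1 - hlrn4_cost_musd / hlrn3_cost_musd   # ~16.4 percent cheaper
bang_for_buck = (hlrn4_pflops / hlrn4_cost_musd) / (hlrn3_pflops / hlrn3_cost_musd)

print(f"{perf_ratio:.1f}X performance, {cost_savings:.1%} lower cost, "
      f"{bang_for_buck:.1f}X better bang for the buck")
```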

We realize that our bang for the buck comparisons are imprecise because of the lack of publicly available data on supercomputer costs over time, but at around $15,000 per teraflops back in 2013, the HLRN-III cluster was twice as expensive per flops as the Tianhe-2 system in China, which used Xeon Phi accelerators, but about half the price of the all-CPU and very custom PrimeHPC systems from Fujitsu that were inspired by the K supercomputer at RIKEN lab in Japan. The price of systems, particularly those that used accelerators, dropped significantly between 2013 and 2018, and GPU accelerated machines like “Summit” and “Sierra” cost just north of $1,000 per teraflops around the time the all-CPU Emmy portion of the HLRN-IV system was going in; the full HLRN-IV works out to $2,038 per teraflops at current euro to dollar exchange rates. Call it two grand.
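For the record, here is how those per-teraflops figures fall out of the system costs cited in this article, using peak double precision flops for HLRN-III and the article's own $32.6 million conversion for the full HLRN-IV:

```python
# Cost per peak teraflops for the two HLRN generations, from the
# dollar figures and flops counts cited in this article.

systems = {
    "HLRN-III (2013, all-CPU)": (39.0e6, 2_640),    # dollars, peak teraflops
    "HLRN-IV (2018-2020, all-CPU)": (32.6e6, 16_000),
}

for name, (dollars, tflops) in systems.items():
    print(f"{name}: ${dollars / tflops:,.0f} per teraflops")
```

That works out to roughly $14,800 per teraflops for HLRN-III and $2,038 for HLRN-IV, matching the round numbers in the text.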

So in general, all-CPU machines are, it seems, more expensive, and this stands to reason. The programming is harder for GPU accelerated machines, and that costs money, too. Or you can, as many HPC centers outside of the largest national labs do, stick with all-CPU architectures and pay the premium there. GPU-accelerated exascale machines due to be installed in the United States in 2021 through 2023 will cost on the order of $400 per teraflops, and we suspect that all-CPU systems over that timeframe will cost 2X to 3X that per teraflops. None of that counts the facilities or electricity costs that come with the architecture choices, of course. As best we can figure.
