Cramming The Cosmos Into A Shared Memory Supercomputer

The very first Superdome Flex shared memory system has gone out the door at Hewlett Packard Enterprise, and it is going to an HPC center that has big memory needs as it tries to understand the universe that we inhabit.

Earlier this month, Hewlett Packard Enterprise unveiled the newest addition to its NUMA iron, the Superdome Flex, an upgrade from the SGI UV 300 platform that HPE picked up when it bought SGI for $275 million last year. As we outlined in The Next Platform at the time, the system scales from four to 32 sockets, is powered by Intel’s “Skylake” Xeon SP processors, supports 768 GB to 48 TB of memory, and pairs the shiny new NUMAlink 8 interconnect with firmware and other technologies from the vendor’s previous Superdome X servers, combining the best of the SGI and HPE platforms.

The Superdome Flex is a big system, and it is the foundation of HPE’s efforts to be the key hardware supplier to the growing in-memory database market, which is trying to balance lightning-quick response times against massive amounts of data. SAP with HANA, Oracle with its Database In-Memory, and Microsoft with SQL Server are among the high-profile in-memory database vendors, and HPE says that about half of SAP HANA deployments run on its servers.

Now HPE is using the Superdome Flex to help scientists – notably including Stephen Hawking – sort through 14 billion years of data to research the origins of the universe and the mysteries surrounding black holes. Hawking’s Centre for Theoretical Cosmology (COSMOS) will use the new shared memory compute platform in conjunction with other systems – including an HPE Apollo server and Xeon Phi-based systems already on site – to crunch the new data that is streaming into the center. The influx of new data is driving discoveries in such areas as cosmology and relativity, according to Paul Shellard, director of the Centre for Theoretical Cosmology and head of the COSMOS group.

Cosmology, Shellard said, is facing the “two-fold challenge of analyzing larger data sets while matching their increasing precision with our theoretical models. In-memory computing allows us to ingest all of this data and act on it immediately, trying out new ideas, new algorithms. It accelerates time to solution and equips us with a powerful tool to probe the big questions about the origin of our universe.”

Hawking said the COSMOS researchers are trying to gain insight into space and time from as far back as “the first trillion trillionth of a second after the Big Bang up to today.”

In-memory computing puts data into RAM rather than into traditional disk-based databases or file systems, so that the data can be analyzed – and insights gleaned from it – more quickly. As the amount of data generated has risen, enterprises and other organizations have looked to in-memory databases and NUMA scalability, or to distributed analogs like the Spark in-memory computing platform, to help them make sense of it all. However, as we noted last year, putting all data into memory isn’t a panacea; it works for some applications but not as well for others, and organizations will need multiple techniques for gaining low-latency access to their rapidly growing data stores.
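To make the general idea concrete, here is a minimal PySpark sketch of the caching pattern such platforms rely on – pin a data set in cluster memory once, then run repeated queries against the in-memory copy rather than going back to disk each time. The file path and the `source` column are hypothetical placeholders, not anything from the COSMOS workflow.

```python
from pyspark.sql import SparkSession

# Start a Spark session; in practice this would point at a cluster.
spark = SparkSession.builder.appName("in-memory-sketch").getOrCreate()

# Hypothetical survey data; the path and schema are illustrative only.
observations = spark.read.parquet("observations.parquet")

# cache() marks the DataFrame for in-memory storage; the first action
# below materializes it in RAM so later queries skip the disk read.
observations.cache()
print(observations.count())

# Subsequent analyses reuse the cached copy instead of re-reading disk.
observations.groupBy("source").count().show()

spark.stop()
```

The same trade-off the paragraph above describes applies here: caching pays off only when the working set fits in memory and is queried repeatedly.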

Randy Meyer, vice president and general manager of Synergy and Mission Critical Servers at HPE, said the Superdome Flex’s in-memory capabilities are “uniquely suited to meet the needs of the COSMOS research group. The platform will enable the research team to analyze huge data sets in real time. This means they will be able to find answers faster.”

The system will help the researchers combine information that has already been gathered about the universe with new sources of data – such as gravitational waves, the cosmic microwave background, and the distribution of stars and galaxies – to challenge theories about the universe.

The Superdome Flex is one focal point in HPE’s memory-driven computing initiative, which calls for a pool of memory that is accessed by compute resources over a high-speed interconnect. This approach is at the heart of The Machine, HPE’s future system that includes such new technologies as a silicon photonics interconnect, new software to run the system, and a highly scalable shared memory pool. The Machine can hold as much as 160 TB of data in memory in its prototype form and should, in theory, be able to push up into the exascale stratosphere. That’s 160 TB down, 1,048,416 TB to go. . . .
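For those keeping score, that quip assumes binary units, where one exabyte is 2^20 TB; the back-of-the-envelope arithmetic looks like this:

```python
# One exabyte in binary terabytes: 2^60 bytes / 2^40 bytes = 2^20 TB.
EXABYTE_IN_TB = 2 ** 20          # 1,048,576 TB
MACHINE_PROTOTYPE_TB = 160       # The Machine prototype's shared memory pool

remaining_tb = EXABYTE_IN_TB - MACHINE_PROTOTYPE_TB
print(remaining_tb)              # 1,048,416 TB still to go
```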

The Superdome Flex will be used not only by the COSMOS team but also by other groups within the Faculty of Mathematics at the University of Cambridge, which, along with HPE, announced the deployment of the supercomputer at the school.
