Inside HPE’s Blueprint For Next-Generation Supercomputing

SPONSORED POST Hewlett Packard Enterprise will set out its latest advances in large-scale computing at the SC25 high performance computing conference in November. In recent weeks, it has confirmed two new systems for Oak Ridge National Laboratory in the US, built in collaboration with AMD. The Discovery exascale machine and the Lux AI cluster are designed to handle the next wave of high-intensity workloads that blend traditional simulation with large-scale AI training. The wave of breakthrough announcements also includes Mission and Vision, a pair of supercomputers built with NVIDIA for Los Alamos National Laboratory to drive research and national security projects.

These announcements add to HPE's record as the world's leading builder of supercomputers. Its technologies power three of the top ten systems on the Top500 list and several of the most energy-efficient entries on the Green500. That position is the product of decades of in-house R&D through HPE Labs and the strategic development of the acquired Cray and SGI portfolios.

That leadership extends to Europe too. The Isambard-AI supercomputer at the University of Bristol is one of the UK's most advanced AI-focused platforms, built to provide open, sovereign compute capacity for academia and industry. Using HPE's modular datacenter design, dubbed ModPod, and backed by a comprehensive services delivery team, the project was delivered at speed and is already accelerating work in fields such as materials science and climate modeling.

Sustainability also features heavily in HPE's current generation of systems. HPE has been at the forefront of direct liquid cooling development for over a decade, with more than 300 patents in this space, and flagship sites such as LUMI now capture and reuse waste heat from their HPC clusters to feed local energy networks. As datacenter efficiency becomes an ever tighter design constraint, that proficiency is critical.

Behind these deployments lies a clear methodology. HPE's blueprint for large-scale computing combines efficiency, scalability, and digital sovereignty. The result is an architecture that can serve nationally critical research programs as well as multi-tenant commercial AI factories that churn out vast volumes of tokens for model training, simulation, or burgeoning applications such as digital twin development.

As AI and data-driven science demand ever larger compute resources, the challenge will be to maintain the balance between performance, sustainability, and sovereignty. It's a delicate equilibrium that will define the success of the next generation of global supercomputing.

Sponsored by HPE.