Europe’s Evolving View of “Continental Exascale”

Europe is known for taking its own route in almost every segment, and supercomputing is no different. While the broad expectation was for a Euro-centric processor ecosystem to power exascale, that plan has slipped both in hitting roadmap goals and in establishing single-center dominance. Instead, Europe is taking a collective approach to asserting its strength in HPC.

In the U.S. and China we tend to talk about exascale supercomputing in the context of single systems. However, for Jean-Marc Denis, who chairs the European Processor Initiative (EPI), the continent has already surpassed exascale and will hit 4-5 exaflops of compute capability in the next couple of years. This is because he's counting the whole of Europe under a single umbrella—EuroHPC.

The new large systems could be "in Germany or France," Denis says, but this continental view of exascale is "significant and aligned with our strategy to serve all twenty-seven countries. When you look at the pre-exascale and large number of petascale systems and add that together, we are well past one exaflop, it's just not centralized in one place." Denis shared the collection of HPC systems that comprises what we might call "continental exascale."

Distributed exascale, or in Europe's case continental exascale, is useful for collaborative scientific computing. But exascale matters not as a cumulative count across sites (which may not have the networking capability to keep pace with compute) but because of scalability on a single machine across many threads and nodes. Exascale as a concept is most useful when it describes a single problem running across many thousands of cores on the same machine, rather than making network hops between distributed resources.

This collection of sites to add to an exaflop total was not always the plan for Europe, however.

In 2015, when European Commission president Jean-Claude Juncker announced ambitions for an exascale system based on a homegrown processor by 2020, the concept was not continental but rather one grand-scale machine. EPI got off the ground two years later, beginning in 2017 with a rough outline based on Arm, with later designs built around RISC-V, which will serve as an accelerator alongside an Arm host processor (the latter commercialized by French chip company SiPearl).

The RISC-V-based EPI accelerator (EPAC), code-named Titan, is itself of heterogeneous design, incorporating Vector Processing Units (VPUs) and Stencil/Tensor accelerators (STX). With an eye toward both HPC and AI acceleration, EPAC will support every standard numeric format from INT8 through FP64, as well as bfloat16. According to Denis, they’re also looking into something called variable precision, where the number of bits devoted to processing is adapted at runtime depending on the desired precision.
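To make the variable-precision idea concrete, here is a minimal Python sketch of the underlying trade-off: keeping fewer mantissa bits yields a coarser approximation of the same value. This is purely illustrative (the function name and approach are our own, not EPI's design); it emulates reduced precision by rounding an IEEE 754 double to a chosen number of mantissa bits.

```python
import struct

def round_to_bits(x: float, mantissa_bits: int) -> float:
    """Round a float64 to keep only `mantissa_bits` of its 52-bit mantissa,
    emulating a lower-precision format (illustrative sketch only)."""
    if x == 0.0 or mantissa_bits >= 52:
        return x
    # Reinterpret the double as its raw 64-bit pattern.
    bits = struct.unpack("<Q", struct.pack("<d", x))[0]
    drop = 52 - mantissa_bits
    # Round to nearest: add half a ULP of the coarser format, then truncate.
    bits += 1 << (drop - 1)
    bits &= ~((1 << drop) - 1)
    return struct.unpack("<d", struct.pack("<Q", bits))[0]

# The same value at three mantissa widths (~fp16, ~fp32, fp64):
x = 3.141592653589793
for m in (10, 23, 52):
    print(m, round_to_bits(x, m))
```

A runtime scheme like the one Denis describes would adapt the width dynamically, spending more bits only where an algorithm's accuracy demands it.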

With the original EPI goals slipping past their intended dates, Europe is recasting the messaging around its strategy. While EuroHPC and EPI were both committed to Europe-wide success in supercomputing, this collective effort or "continental supercomputing" vision positions a few key systems as the high-performance superstars, with petascale machines rounding out the FLOP count. This actually fits well with the system requirements Denis and colleagues at EPI and EuroHPC see going forward—these are far less focused on monolithic machines and instead emphasize modularity, not just at the processor and component level, but at the continental one.

In the EuroHPC and EPI view, "future supercomputers will be modular," Denis says. "They'll have massively non-homogeneous architectures, combining one general purpose processor with several different accelerator kinds." He adds that processors must be designed for this modularity, meaning they are open and agile. With this modular approach, he explains, software will have to do even more heavy lifting as the unifying force between all modules. These modules must stretch from the edge to HPC to cloud, making open source more important than ever. "Proprietary software stacks, especially for specialized hardware, will become problematic."

A more comprehensive view of EPI's current status was shared at the recent Supercomputing Frontiers conference.
