Maintaining Europe’s Edge in Supercomputing Software

As the era of exascale supercomputing approaches, Europe is making a concerted effort to become a first-class HPC power, on par with the United States, China, and Japan. That goal is being spearheaded by the EuroHPC Joint Undertaking, a one-billion-euro initiative to build out the continent’s supercomputing infrastructure and develop HPC software for 21st-century industries in areas like bioengineering, precision medicine, and smart cities.

But even as it lags in HPC horsepower, Europe is already a recognized leader in algorithm and software development across many science and engineering fields. A good example is the superior accuracy of the European Centre for Medium-Range Weather Forecasts (ECMWF) model for hurricane prediction. Historically, the ECMWF model has outperformed competing models, including the US National Weather Service’s Global Forecast System (GFS), in predicting the paths of hurricanes. It may even prove better than the enhanced FV3-GFS model being rolled out in the US this summer.

So how are Europeans able to develop some of the world’s best HPC software? A recent survey conducted by the Partnership for Advanced Computing in Europe (PRACE) offers some hints. The survey, conducted in the spring of 2019, was designed to find out how the PRACE Tier-0 supercomputers are being used and to understand some of the basic infrastructure requirements of its user base. The results are based on responses from 50 principal investigators (PIs) who led research projects awarded time on PRACE systems.

The research projects in question ran the gamut of scientific disciplines, including physics, earth sciences, chemistry, biology, and astronomy/astrophysics. The latter, under the general category of “universe sciences,” accounted for the largest number of responses, with a 30 percent share. No other field received more than 18 percent of the total.

PRACE researchers Turlough Downes and Troels Haugbølle, who presented the survey results at a EuroHPC meeting last month, agreed that the breakdown is generally representative of the way Tier-0 resources are used today. (Interestingly, both Downes and Haugbølle are astrophysicists.)

A full 90 percent of the research projects across these disciplines are using PRACE systems for large-scale parallel simulations, with smaller shares devoted to post-processing of data (24 percent), in situ data analysis (20 percent), and embarrassingly parallel simulations (20 percent). Typical resource usage for a research project was in the range of 10 to 100 million core-hours per year, a range reported by 58 percent of those surveyed. About 12 percent of the projects used more than 100 million core-hours.
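To put those figures in perspective, a rough back-of-the-envelope calculation (not part of the survey itself) shows what the upper end of that range implies in sustained compute:

    100,000,000 core-hours per year / 8,760 hours per year ≈ 11,400 cores running around the clock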

As for the types of codes running on these machines, 38 percent of projects use in-house developed codes without further optimization, while 58 percent employ in-house codes that have been heavily optimized. “When you add those two figures together you find that an awful lot of Tier-0 users use in-house developed codes, not community codes,” observed Downes.

Downes also noted that 15 of the 50 largest projects surveyed reported using in-house codes. A significantly smaller number of research groups use standard community codes.

“This may be something that people find concerning, in some respects, because in-house sometimes means there are errors in the code that nobody talks about,” said Haugbølle. “But really that is part of the strength of European HPC – that there is this wide diversity of codes. So, you can get validation of research results by people running essentially the same problem, but running different algorithms.”

Haugbølle explained that most of the community codes started out as in-house codes developed by a small team of researchers. From his perspective, it’s important to maintain this two-layer development arrangement, since these tightly-focused teams are often the genesis of the community codes. Although these standard codes are employed less often in the more cutting-edge environment of PRACE, they are likely to be more widespread in the larger HPC community.

More importantly, these in-house codes will also be the primary source of application software that will run on Europe’s upcoming pre-exascale and exascale machines. These are the teams that could develop “the next GROMACS,” suggested Haugbølle.

On the hardware side, the PIs reported they are generally satisfied with the Tier-0 machines for running their applications. According to the survey, 62 percent said their workflows are very well supported, with 30 percent reporting they are moderately supported, and just 8 percent saying their codes are difficult to support with the current infrastructure.

Drilling down a bit, the vast majority of the projects (86 percent) are running their codes on multicore processors, but 36 percent are using GPUs or Xeon Phi CPUs, with the same percentage running in multicore/manycore hybrid environments.

Given those numbers, it can be assumed that a fair number of the in-house and community codes have been ported to accelerators or manycore CPUs. That’s a good omen, considering that the pre-exascale and exascale systems going into Europe are likely to use both types of processors. That’s assuming the European Processor Initiative makes good on its plans to develop Arm CPUs and RISC-V-based accelerators for the EuroHPC work.
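As an illustration of what such a port often looks like in practice, below is a minimal, hypothetical sketch using OpenMP target offload directives, one common directive-based route to accelerators; it is not drawn from any PRACE code, and many projects use CUDA, HIP, OpenACC, or similar instead.

    /* Illustrative sketch only: a generic directive-based port of a simple
     * kernel, the kind of change many in-house HPC codes make to target both
     * multicore CPUs and GPU accelerators from a single source. The kernel
     * and names are hypothetical, not taken from any PRACE project. */
    #include <stdio.h>
    #include <stdlib.h>

    /* Scale a vector in place. The "target" clauses ask an offload-capable
     * compiler to run the loop on an attached accelerator, copying x to and
     * from device memory; if no device is present, OpenMP falls back to
     * running it across the host's cores. */
    static void scale(double *x, double a, long n)
    {
    #pragma omp target teams distribute parallel for map(tofrom: x[0:n])
        for (long i = 0; i < n; i++)
            x[i] *= a;
    }

    int main(void)
    {
        long n = 1L << 20;
        double *x = malloc(n * sizeof *x);
        for (long i = 0; i < n; i++)
            x[i] = 1.0;

        scale(x, 2.0, n);

        printf("x[0] = %.1f\n", x[0]);   /* expect 2.0 */
        free(x);
        return 0;
    }

The attraction of the directive approach is that the same loop still runs on ordinary multicore CPUs when no accelerator is available, which matters for research groups whose codes have to move between quite different Tier-0 machines.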

But since neither Arm CPUs nor RISC-V accelerators are a common HPC platform today, many of the European HPC codes will have to be ported and optimized for these chips. And of course, these same codes will also need to be scaled up to take advantage of the larger node counts expected in future supercomputers. Since PRACE is the testing ground for the largest of these machines in Europe, it will be key to driving these science applications into the exascale era.
