The State of MPI for Future Exascale Systems

A working group formed on behalf of the Exascale Computing Project (ECP) in the U.S. has taken an in-depth look at how the message passing interface (MPI) is being used in existing HPC systems with an eye on how these use cases might evolve with ever-larger systems.

The team, composed of researchers from several national labs, collected responses from 77 ECP project owners. Of those projects, 56 were using MPI and could weigh in on their usage patterns and trends. The goal for ECP is to better understand what will be needed from MPI developers on the path to exascale. As a side benefit, the detailed report provides a wealth of insight into system design, application, and other extreme-scale hardware and software choices from some of the largest HPC environments.

ECP has been keeping tabs on MPI trends since its inception and has two noteworthy MPI-based exascale software efforts currently underway: OMPI-X from Oak Ridge National Lab is bolstering Open MPI for exascale scalability and reliability, and ExaMPI from Argonne National Lab is developing its own exascale-ready implementation of MPI. Both are geared toward providing a more stable and scalable MPI base, which matters because MPI will continue to be a critical piece of the exascale stack.

The ECP MPI survey results highlight why the HPC community will continue to evolve with MPI, including its flexibility, portability, and efficiency, but there is also demand for new options that build on the MPI base, hence the extensions of MPI being developed at ORNL and Argonne mentioned above. Further, the need for MPI to meet exascale-level demands in terms of fault tolerance, thread usage, and integration with accelerators (namely GPUs) and RDMA capabilities is an important topic for ECP project owners who already work with MPI. Respondents also made it clear that “the capabilities of interest are, in most cases, covered by point-to-point and collective communications, even if one-sided communication (RMA) is gaining interest in the context of exascale.”
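To make that distinction concrete, here is a minimal sketch (not drawn from the report) contrasting a two-sided point-to-point exchange with a one-sided MPI_Put into an RMA window; it assumes a standard MPI installation and at least two ranks.

```c
/* Minimal sketch, not from the ECP report: the same transfer expressed as
 * two-sided point-to-point and as one-sided RMA. Run with at least two ranks. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, value = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Two-sided point-to-point: sender and receiver both participate. */
    if (rank == 0) {
        int payload = 42;
        MPI_Send(&payload, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    }

    /* One-sided (RMA): rank 0 puts data directly into rank 1's window. */
    MPI_Win win;
    MPI_Win_create(&value, sizeof(int), sizeof(int), MPI_INFO_NULL,
                   MPI_COMM_WORLD, &win);
    MPI_Win_fence(0, win);
    if (rank == 0) {
        int payload = 43;
        MPI_Put(&payload, 1, MPI_INT, 1, 0, 1, MPI_INT, win);
    }
    MPI_Win_fence(0, win);  /* closes the epoch and completes the put */

    if (rank == 1)
        printf("rank 1 now holds %d\n", value);

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}
```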

It is worth noting that while the survey focused on ECP projects that use MPI, the results are more nuanced. Most of the project owners surveyed said they do not interact with MPI directly, but instead use it through libraries or an abstraction layer. This probably will not come as a surprise, but it does highlight the role of the MPI ecosystem in continuing development toward exascale.

The survey team also notes that nearly half of projects reported that MPI covered all of their communication needs. They add that there was significant interest in active messages and persistent communications (the latter may refer to persistent collectives, or it may be an indication that users are not familiar with the persistent point-to-point capabilities already available). Job-to-job communications were also requested.
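For readers unfamiliar with the persistent point-to-point calls the survey team alludes to, here is a minimal sketch of how MPI_Send_init and MPI_Recv_init set up a transfer once and reuse it across iterations; the buffer size and iteration count are illustrative, not taken from the report.

```c
/* Minimal sketch of persistent point-to-point communication, which is
 * already part of the MPI standard. Illustrative only; run with two or
 * more ranks (only ranks 0 and 1 exchange data). */
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank > 1) {          /* extra ranks sit this one out */
        MPI_Finalize();
        return 0;
    }

    double buf[1024] = {0};
    MPI_Request req;

    /* Set up the transfer once... */
    if (rank == 0)
        MPI_Send_init(buf, 1024, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD, &req);
    else
        MPI_Recv_init(buf, 1024, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, &req);

    /* ...then restart it every iteration, avoiding repeated setup costs. */
    for (int iter = 0; iter < 100; iter++) {
        MPI_Start(&req);
        /* independent computation could overlap the transfer here */
        MPI_Wait(&req, MPI_STATUS_IGNORE);
    }

    MPI_Request_free(&req);
    MPI_Finalize();
    return 0;
}
```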

Among other noteworthy findings is that current or planned use of threads (especially OpenMP, Pthreads, Kokkos or RAJA) together with MPI is nearly universal among the responses. “Nearly half of responses indicated that they would like to be able to make MPI calls from within multi-threaded regions of code. However a majority of responses also indicated that they did not need thread-level addressability on the target side of communications. These results motivate continued pursuit of both of the complementary Finepoints and Endpoints approaches for exascale.”
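Finepoints and Endpoints target exactly this combination. As a rough illustration of what making MPI calls from inside threaded regions looks like with today's standard, the sketch below requests MPI_THREAD_MULTIPLE and has each OpenMP thread issue its own exchange; it assumes the same thread count on every rank and is not code from any ECP project.

```c
/* Sketch of MPI calls inside an OpenMP parallel region, which requires
 * MPI_THREAD_MULTIPLE support. Assumes every rank runs the same number
 * of threads. Compile with something like: mpicc -fopenmp threads.c */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided;
    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
    if (provided < MPI_THREAD_MULTIPLE) {
        fprintf(stderr, "MPI library lacks full thread support\n");
        MPI_Abort(MPI_COMM_WORLD, 1);
    }

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each thread communicates with the matching thread on the neighbor
     * rank, using the thread id as the tag so messages do not cross. */
    #pragma omp parallel
    {
        int tid   = omp_get_thread_num();
        int token = rank * 100 + tid;
        int right = (rank + 1) % size;
        int left  = (rank - 1 + size) % size;
        int incoming;

        MPI_Sendrecv(&token, 1, MPI_INT, right, tid,
                     &incoming, 1, MPI_INT, left, tid,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);

        printf("rank %d thread %d received %d\n", rank, tid, incoming);
    }

    MPI_Finalize();
    return 0;
}
```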

For anyone even remotely interested in MPI and larger pre-exascale system software trends, the ECP MPI report is a must-read. Responses are thorough and reveal much about how ECP project owners are thinking about both hardware and software.
