HPC Heavyweight Goes All-In On OpenACC

Across the HPC community, commercial firms, government labs, and academic institutions are adapting their code to embrace GPU architectures. They are motivated by the higher performance and lower energy consumption that GPUs provide, and many of them are using OpenACC to annotate their code and make it GPU-friendly. The Next Platform recently interviewed one key organization to learn why it is using the OpenACC programming model to expand its computing capabilities and platform support.

If the earth were the size of a basketball, its atmosphere would be the thickness of shrink wrap. That layer is fragile enough that in 1960 the US government decided it needed to learn more about it, creating the University Corporation for Atmospheric Research (UCAR), a collaboration of more than 100 North American colleges and universities, to take on the challenge. Since then, the scope and complexity of UCAR’s studies have grown. Its areas of research and expertise now extend from under our oceans to outer space, reflecting how each realm affects the earth’s atmosphere.

Models produced by UCAR help forecast the weather, a supercomputing task that took center stage this past summer as people across the Caribbean and the Southeastern United States watched hurricanes roll in one after another from the Atlantic Ocean. The predicted path of each of these hurricanes was updated daily in the run-up to landfall, because the accuracy of such predictions is limited in part by available computational power – even on UCAR’s Cheyenne supercomputer, a 5.34 petaflops machine built from 4,032 dual-socket nodes, each carrying 36 “Broadwell” Intel Xeon E5 cores. “We know that if you’re forecasting hurricanes, resolution makes a difference, as does the amount of statistics you have,” says Dr. Rich Loft, director of UCAR’s technology development division. “That puts a premium on computational performance.”

A little over a year ago, UCAR began investigating whether and how GPU computing could help meet the enormous demand for computing power generated by its existing and next-generation weather models. Adopting and integrating GPUs into its workflow showed great potential for performance gains, but there was a challenge: UCAR’s models have grown increasingly complex over the years, sometimes reaching a million lines of code. “It’s a real obstacle if you have to rewrite and touch every one of these lines of code and refactor it into something that can be used by a GPU,” Loft adds. This has been a barrier to adoption for some large production applications.

UCAR decided to tackle the necessary code modifications with OpenACC compiler directives. Loft’s team used them to introduce GPU support into the dynamics portion of the Model for Prediction Across Scales (MPAS), an emerging meteorological model in development at UCAR that can be used for both climate and weather prediction. One advantage of working with the MPAS dynamics was its relatively compact, 20,000-line code base. According to Loft, “We wanted to see if we could get better performance by using OpenACC on the GPU side, while maintaining the performance and correctness of our baseline code.”
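OpenACC itself is directive-based: annotations in the source tell the compiler which loops to offload to the accelerator, while the code still builds and runs as ordinary serial code when the directives are ignored. As a rough illustration of the idea, here is a minimal sketch in C (MPAS itself is written in Fortran, and the routine below is a hypothetical stand-in, not MPAS source):

    /* Minimal OpenACC sketch (hypothetical example, not MPAS code).
     * The pragma asks the compiler to offload the loop to the GPU and
     * to handle the data movement; compiled without OpenACC support,
     * the directive is ignored and this is plain serial C. */
    void advance_columns(int n, const double *restrict forcing,
                         double *restrict state, double dt)
    {
        #pragma acc parallel loop copyin(forcing[0:n]) copy(state[0:n])
        for (int i = 0; i < n; i++) {
            state[i] += dt * forcing[i];  /* simple explicit update */
        }
    }

Built with an OpenACC-capable compiler (for example, pgcc -acc or nvc -acc), the loop runs on the GPU; built without the flag, it compiles as standard serial code. That single-source property is what lets one code base serve both CPUs and GPUs.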

The UCAR team took a year to build support for OpenACC into the part of the MPAS code dealing with dynamics, designed to model fluid dynamics equations at atmospheric scales around the globe. It turned out to be a very good investment of their time. “The resulting source code not only produced correct results on the GPU and ran slightly faster than the original version on Xeon multicore CPUs, but it also achieved really good performance results on GPUs,” says Loft. A single Nvidia P100 Pascal GPU currently achieves about 2.5X to 2.7X the performance on MPAS compared to one Cheyenne dual-socket Xeon Broadwell node. Work remains to enable use of multiple GPUs per node.

For phase two of the development effort, the UCAR team plans to complete the MPAS port by GPU-accelerating the physics part of the model using OpenACC. It is a more complex effort, involving about 90,000 lines of source code that model how the surface boundary of the earth behaves, how clouds form (and the precipitation within them), and how sunlight is turned into heat radiation. It is an ongoing project that would be far more difficult without OpenACC, Loft says, not least because the models are a living, evolving code base that scientists change frequently. “OpenACC is easy enough to use that if you have to change or enhance a given part of the source code in some way, it’s usually not much more than a cut and paste of the OpenACC directives into the new stuff.”
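That cut-and-paste workflow might look something like the following sketch (again hypothetical C rather than actual MPAS physics code): when scientists add a new loop next to an accelerated one, the existing directive is copied over it almost verbatim.

    #include <math.h>

    /* Hypothetical physics-style update illustrating how directives are
     * reused as the code evolves (not actual MPAS source). */
    void physics_step(int n, double *restrict temp, double *restrict qv)
    {
        #pragma acc data copy(temp[0:n], qv[0:n])
        {
            /* Existing, already-accelerated loop. */
            #pragma acc parallel loop
            for (int i = 0; i < n; i++)
                temp[i] += 0.1 * qv[i];

            /* Newly added scheme: the same directive pasted above it. */
            #pragma acc parallel loop
            for (int i = 0; i < n; i++)
                qv[i] = fmax(0.0, qv[i] - 0.01 * temp[i]);
        }
    }

The enclosing data region keeps both arrays resident on the GPU across the two loops, so a pasted directive inherits the existing data management rather than triggering extra host-device transfers.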

Once UCAR’s MPAS development is complete, all of the source code will be made available to the atmospheric research community at large, Loft says, enabling researchers and weather forecasters around the world to model atmospheric conditions at higher resolution while using less electricity to run their computing systems. “A rising tide floats all boats,” he concludes.
