Debating The Role Of Commodity Chips In Exascale

Building the first exascale systems continues to be a high-profile endeavor, with efforts underway in the United States, the European Union, and Asia – notably China and Japan. The race highlights the competition between regional powers, the technologies going into the architectures, and the promise these supercomputers hold for everything from research and government to business and commerce.

The Chinese government is pouring money and resources into its roadmaps for both pre-exascale and exascale systems; Japan is moving forward with Fujitsu’s Post-K system, which will use processors based on the Arm architecture rather than the vendor’s own Sparc64 chips; and the EU, through such initiatives as Horizon 2020, EuroHPC, and the European Technology Platform for HPC (ETP4HPC), is ramping up its supercomputing efforts, including two pre-exascale systems in 2021 and 2022 and two exascale systems a year after that.

In September, the US Department of Energy’s Office of Science unveiled plans for the country’s first exascale system, the Aurora supercomputer being built by Cray and Intel, which is scheduled for 2021 at Argonne National Laboratory.

Exascale was a key topic of discussion throughout the SC17 supercomputing conference earlier this month, from the efforts underway in the EU to the work of the Green500 to drive power efficiency in systems. The goal of exascale computing, as far as the US efforts are concerned, is to create systems that are 50 times faster on real applications than the “Titan” and “Sequoia” machines installed at Oak Ridge National Laboratory and Lawrence Livermore National Laboratory, within a power consumption target of 20 megawatts to 40 megawatts. Others have their own definitions, some sticking to a pure 1,000 petaflops of peak aggregate performance as their yardstick. During one session at the conference, the discussion focused on the challenges of getting there, from whether it can be done with commodity processors, to the challenges around I/O, to whether the ongoing convergence of HPC and data analytics can help with the effort.
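For a sense of scale, the power envelope and the Green500’s efficiency focus are two sides of the same number. The short sketch below is a rough back-of-envelope calculation – not anything presented at the conference – showing the flops-per-watt an exascale machine would have to sustain inside that budget, using the pure 1,000-petaflops yardstick rather than the application-performance definition.

```python
# Rough sketch: what a 20 MW to 40 MW power budget implies for system-level
# energy efficiency (the metric the Green500 tracks), assuming the simple
# 1 exaflops peak yardstick rather than measured application performance.
EXAFLOPS = 1e18  # 1 exaflops = 10^18 floating point operations per second

for megawatts in (20, 40):
    watts = megawatts * 1e6
    gflops_per_watt = (EXAFLOPS / watts) / 1e9
    print(f"{megawatts} MW budget -> about {gflops_per_watt:.0f} gigaflops per watt")

# 20 MW works out to about 50 gigaflops per watt and 40 MW to about 25; for
# comparison, the most efficient system on the November 2017 Green500 delivered
# roughly 17 gigaflops per watt on Linpack, which illustrates the gap to close.
```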

The question about commodity processors raised a host of other issues among the panel, including agreeing on the definition of exascale (which panel members said had more to do with application performance than simply reaching exaflop scale), whether the least expensive path is always the best one to take, and the need for a hardware and software ecosystem to drive exascale computing.

For some of the planned exascale systems, the CPUs are going beyond the traditional processors from Intel and IBM and GPU accelerators from Nvidia and AMD. China’s massive Sunway TaihuLight system, which sits at the top of the Top500 list of the world’s fastest supercomputers with a speed of 93 petaflops, is powered by a homegrown chip, the SW26010. As mentioned, Fujitsu is moving forward with its Arm-based chips for its Post-K system, and among the goals of EU scientists, as the region looks to move into the top three HPC regions by 2020, is to develop their own processors.

Bill Gropp, director of the National Center for Supercomputing Applications at the University of Illinois, countered the question with a number of his own around the definitions of exascale and commodity processors.

“Those questions are important, because understanding why we care is an important part of this,” Gropp said. “You may say, ‘Of course, it’s the only way to make it cheaper.’ But the thing is, a commodity processor has to make design compromises. Those design compromises have costs, so it’s not necessarily the case that a commodity processor will be cheaper to get to where you want to get.”

It may technically be possible to build an exascale system with commodity processors, but “that’s not the only question you have to ask. Is there the political will to do so? Is there the financial will? Is there the support and emphasis to make that system useful? Those are questions I think that need to be answered,” he said.

Doug Kothe, director of the Exascale Computing Project and currently deputy associate laboratory director of the Computing and Computational Sciences Directorate at Oak Ridge, agreed that an exascale system could be built today, but “it would be what I call ‘Linpack exascale,’ and Linpack is one of many aspects of application performance. So the system would probably realize an ever-shrinking set of applications and likely not be affordable.”

However, exascale goes beyond the CPU, Kothe said.

“We need to recognize that real application performance comes from tuning of the hardware and from algorithmic improvements,” he said. “It’s not just about exaflops. You need an entire ecosystem to deliver the performance that I’m thinking exascale should be at, capable exascale, where you measure by application performance relative to today. So yes, you can build something today, but I’m pretty positive it wouldn’t be able to hit mission needs based on the CPU.”

A challenge with commodity processors is the at-times differing motivations of chip makers and the HPC community. Scientists and engineers working on the exascale projects are looking years down the road, while processor vendors tend to think more short-term, focusing on the next couple of releases of their products.

“The reason why there are exascale initiatives around the world is because commodity technologies do not allow us to reach at least capable exascale,” said James Ang, director of hardware technology for the Exascale Computing Project’s efforts at Sandia National Laboratories. “We can probably get to Linpack exaflops, but the reason why there are national initiatives around the world is because commodity processor roadmaps don’t actually get us there for our real application workloads.”

Still, as technologies are built to support the exascale projects, they should be developed with the idea that they will end up being commodity, said Jesus Labarta, director of the Computer Science Research Department at the Barcelona Supercomputing Center. The goal of exascale computing is not to accelerate the performance of one or two HPC applications to 50 times current speeds, but to do so for many workloads. Developing these technologies for exascale systems is costly and time-consuming work, so you want them to have as broad a reach in the industry as possible. Building something “without the intention of becoming a commodity would be a waste of money. Having a one-of-a-kind system to reach exascale for one application or for very limited kinds of applications doesn’t make sense. It has to be born to be a commodity. Otherwise, it’s a loss of time and effort.”
