The Texas Advanced Computing Center (TACC) will house the latest leadership-class supercomputer funded by the National Science Foundation, a project that stands as a testament to the NSF's continued push into supercomputing and, at the same time, the latest indication of the ground the agency is losing to the Department of Energy (DOE) in this arena.
Phase 1 of the TACC project will cost $60 million, an award the NSF granted to develop an HPC system delivering two to three times the application performance of the Blue Waters supercomputer, a five-year-old system hosted at the University of Illinois that, at the time of its deployment in 2013, was the world's fastest system at an academic institution. Blue Waters was also the most powerful NSF-funded supercomputer, with a peak performance of 13.3 petaflops. The system – which includes 22,640 Cray XE6 nodes and 4,228 XK7 nodes powered by AMD's Opteron "Bulldozer" processors and Nvidia K20 GPU accelerators – is managed by the National Center for Supercomputing Applications (NCSA).
Among the competitors TACC beat out for the NSF project were the University of Illinois at Urbana-Champaign and the San Diego Supercomputer Center.
The NSF's $60 million award highlights the improving efficiency and falling cost of compute power. At the time of its deployment, Blue Waters cost about $200 million. At two to three times Blue Waters' performance, the new TACC system – which will come online next year and run for at least five years – will scale up to nearly 40 petaflops of application performance for less than a third of the cost. That said, the TACC system will cost about twice as much as the current most powerful NSF-funded supercomputer, Stampede2, which was deployed last year at TACC on the University of Texas at Austin campus with a peak performance of 18.3 petaflops.
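Put in back-of-the-envelope terms, the figures above imply a steep drop in the price of a petaflop over five years. The sketch below uses only the numbers cited in this article – Blue Waters' $200 million for 13.3 peak petaflops, the new system's $60 million for roughly 40 application petaflops, and Stampede2 at about half the new system's cost for 18.3 peak petaflops. It mixes peak and application performance figures, so treat it as a rough comparison only.

```python
# Rough cost-per-petaflop comparison, using only figures cited in this article.
# Note: Blue Waters and Stampede2 numbers are peak petaflops, while the new
# TACC figure is projected application performance, so this is a loose sketch.
systems = {
    "Blue Waters (2013)":     (200.0, 13.3),  # $200M, 13.3 PF peak
    "Stampede2 (2017)":       (30.0, 18.3),   # ~half the new system's $60M cost
    "New TACC system (2019)": (60.0, 40.0),   # $60M, ~40 PF application perf
}

for name, (cost_millions, petaflops) in systems.items():
    print(f"{name}: ${cost_millions / petaflops:.1f}M per petaflop")
```

By this crude measure, the new system buys a petaflop for roughly a tenth of what Blue Waters paid, and only slightly less than Stampede2.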
Stampede2, based on systems from Dell and Cray, includes 1,736 Intel “Skylake” Xeon nodes and 4,200 “Knights Landing” Xeon Phi nodes, the third generation of Intel’s Many Integrated Core architecture.
Even so, the systems the NSF is funding shouldn't be confused, in terms of price or performance, with those backed by the DOE, which has a much larger funding pool to draw from. Summit, the latest system to top the Top 500 list of the world's fastest supercomputers, is housed at the DOE's Oak Ridge National Laboratory and delivers 122.3 petaflops on the Linpack benchmark (and 187.6 petaflops at peak) across 4,356 nodes – each of which has two 22-core Power9 chips from IBM and six Tesla V100 GPUs from Nvidia. The nodes are connected via Mellanox's InfiniBand network.
Number three on the list was Sierra, at the DOE's Lawrence Livermore National Laboratory. It's a system similar to Summit, with 4,320 nodes, each powered by two Power9 CPUs and four Tesla V100 GPUs and connected by the same Mellanox technology. It delivers 71.6 petaflops of performance, and 119 petaflops at peak.
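The node counts above make it easy to tally how much silicon each DOE machine aggregates. The short sketch below simply multiplies the per-node figures reported here; the totals are derived from those figures, not quoted from any official specification.

```python
# Aggregate CPU and GPU counts implied by the per-node figures above.
def totals(nodes, cpus_per_node, gpus_per_node):
    """Return (total CPUs, total GPUs) for a homogeneous cluster."""
    return nodes * cpus_per_node, nodes * gpus_per_node

summit_cpus, summit_gpus = totals(4356, 2, 6)  # 8,712 Power9s, 26,136 V100s
sierra_cpus, sierra_gpus = totals(4320, 2, 4)  # 8,640 Power9s, 17,280 V100s
print(summit_cpus, summit_gpus, sierra_cpus, sierra_gpus)
```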
Both were the result of a $325 million DOE contract awarded to IBM, Nvidia, and Mellanox in 2014. In addition, the DOE has been a leading driver of the United States’ exascale efforts, most recently via the announcement in April of up to $1.8 billion in funding for at least three new exascale systems that will be deployed at DOE national labs between 2021 and 2023.
Still, what the NSF has been able to fund shouldn't be discounted. Blue Waters is used by thousands of scientists and engineers to run workloads spanning everything from studies of the evolution of the universe to molecular dynamics. As noted in The Next Platform, Blue Waters made news last year with research performed on the system in areas such as supercell and tornado research and natural gas and oil exploration. An analysis of the workloads run on the supercomputer in its first three years found that more than two-thirds of node-hours were used by the mathematical, physical, and biological sciences groups, and that the number of science fields accessing Blue Waters more than doubled during that time.
Stampede2 likewise is used by thousands of researchers across the country and is run by a group of compute experts from institutions including UT Austin, Clemson, Cornell, the University of Colorado at Boulder, Indiana University, and Ohio State University.
Few details of the upcoming TACC system have been released, though part of winning the NSF award is the requirement that TACC create a plan for the design of a Phase 2 system, which the funding agency is calling an upgrade of the first design. The NSF called for a Phase 1 system able to run a broad array of data- and compute-intensive applications, and used Blue Waters as the baseline for the organizations competing for the award. Along with the Opteron chips and Nvidia GPUs, Blue Waters' XE6 nodes each have 64 GB of memory and its XK7 nodes have 32 GB, and the system includes a dedicated storage system with 26 petabytes of usable online storage and 380 petabytes of usable nearline tape for longer-term storage needs.
With Stampede2, each of the Xeon Skylake nodes includes 48 cores and 192 GB of RAM, and each of the Knights Landing nodes provides 68 cores, 96 GB of DDR RAM, and 16 GB of MCDRAM. In addition, the system uses Intel's Omni-Path network and two Lustre file systems with a storage capacity of 31 PB.