NSF Puts $10 Million Into Composable Supercomputer

If they are doing their jobs right, the high performance computing centers at academic and government institutions around the world are on the cutting edge of any new technology that boosts the performance of simulation, modeling, analytics, and artificial intelligence. Not the bleeding edge, where the hyperscalers and national labs live, but back a little bit from the riskiest part of the blade.

And so it is with a nod of approval that we are seeing disaggregated and composable infrastructure start to get a toe-hold in HPC, with upstart composable fabric maker Liqid once again scoring a big deal to test out the ideas embodied in its Matrix fabric and Liqid Command Center controller.

In this case, the Liqid disaggregation and composability software and PCI-Express switching fabric is at the heart of a prototype system called Accelerating Computing for Emerging Sciences, or ACES for short, that is being funded by the National Science Foundation. The ACES machine will be created and used by researchers at the University of Illinois, Texas A&M University, and the University of Texas, and it will be installed at Texas A&M alongside the “Grace” 6.2 petaflops supercomputer built by Dell, which comprises 800 all-CPU compute nodes using Intel “Cascade Lake” Xeon SP processors plus 100 hybrid CPU-GPU nodes employing the same Xeon SPs plus a pair of Nvidia “Ampere” A100 GPU accelerators each. (The system also has eight large memory nodes, with 3TB instead of 384GB per node, and eight inference nodes with Nvidia T4 GPU accelerators.)
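As a sanity check on that 6.2 petaflops figure, the back-of-envelope math below tallies peak FP64 throughput in Python. The per-socket and per-GPU peaks are our assumptions (a mid-range Cascade Lake SKU and the A100’s published 9.7 teraflops FP64 vector peak), not numbers from the Grace documentation.

```python
# Rough peak FP64 tally for Grace; per-device peaks are assumptions.
cpu_nodes = 800        # all-CPU nodes (from the article)
gpu_nodes = 100        # hybrid nodes with two A100s each
sockets_per_node = 2   # assumption: dual-socket Dell nodes

# Assumption: a mid-range "Cascade Lake" Xeon SP with 24 cores at
# ~2.4 GHz sustaining 32 FP64 flops/cycle/core via AVX-512.
cpu_tf_per_socket = 24 * 2.4e9 * 32 / 1e12   # ~1.8 teraflops

a100_tf = 9.7   # A100 FP64 vector peak, in teraflops

cpu_pf = (cpu_nodes + gpu_nodes) * sockets_per_node * cpu_tf_per_socket / 1e3
gpu_pf = gpu_nodes * 2 * a100_tf / 1e3

print(f"CPUs: ~{cpu_pf:.1f} PF, GPUs: ~{gpu_pf:.1f} PF, "
      f"total: ~{cpu_pf + gpu_pf:.1f} PF")
# Lands in the same ballpark as the quoted 6.2 petaflops; exact SKUs
# and AVX-512 clock speeds swing the CPU side considerably.
```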

As you might expect with two Texas universities involved, there was a pretty good chance that Dell would be the prime contractor for the ACES machine, and indeed it is. The exact configuration of the hardware in ACES has not been settled as yet, but what is known is that the host processors will be Intel “Sapphire Rapids” Xeon SP processors. It looks from the NSF award documents like they will be the variants with HBM2 memory on them, which we discussed here, and they will obviously have PCI-Express 5.0 controllers that link into a PCI-Express 5.0 switched fabric. The compute engines in the ACES system will also include Intel Agilex FPGAs and “Ponte Vecchio” Xe HPC GPU accelerators.

In addition, because heterogeneity and experimentation are an important part of the ACES mission, the machine will also include Aurora vector engines from NEC, IPU engines from Graphcore, and custom compute ASICs (about which very little is known) from NextSilicon, an Israeli chip startup that has raised $200 million and is now valued at $1.5 billion, which is not bad for a company no one knows much about.
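To make the composability idea concrete before getting into the fabric details, here is a sketch of what a composed-node recipe for that heterogeneous device pool might look like, written as a plain Python dictionary. The schema, field names, and device identifiers are our invention for illustration; they are not Liqid Command Center’s actual format.

```python
# Hypothetical "recipe" for a composed ACES node. The schema is our
# invention for illustration, not Liqid's actual API format.
aces_node_recipe = {
    "name": "genomics-run-042",
    "host": {
        "cpu": "sapphire-rapids-hbm",   # Sapphire Rapids Xeon SP with HBM2
        "count": 2,
    },
    # Devices pulled from the shared PCI-Express 5.0 fabric pool:
    "fabric_devices": [
        {"type": "gpu",  "model": "ponte-vecchio", "count": 4},
        {"type": "fpga", "model": "agilex",        "count": 1},
        {"type": "nvme", "model": "optane",        "count": 8},
    ],
}

# A different workload could return these devices to the pool and
# compose, say, NEC vector engines or Graphcore IPUs instead.
```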

The compute elements and storage in the racks will be connected by PCI-Express switched fabrics and managed by Matrix, and the nodes will also connect to each other and to external storage via 400 Gb/sec Quantum-2 NDR InfiniBand interconnects. The plan is to have banks of Optane storage in the racks and to have an external Lustre parallel file system connected to the cluster.
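For a sense of why a PCI-Express 5.0 fabric inside the rack pairs well with NDR InfiniBand between nodes, the arithmetic below compares peak per-link bandwidth. These are standard spec numbers, not ACES measurements.

```python
# Peak unidirectional link bandwidth (standard spec numbers, not
# ACES measurements).

# PCI-Express 5.0: 32 GT/s per lane, 128b/130b encoding, x16 slot.
pcie5_x16_gbytes = 32e9 * 16 * (128 / 130) / 8 / 1e9   # ~63 GB/sec

# NDR InfiniBand: 400 Gb/sec per port.
ndr_gbytes = 400 / 8                                    # 50 GB/sec

print(f"PCIe 5.0 x16:   ~{pcie5_x16_gbytes:.0f} GB/sec")
print(f"NDR InfiniBand: ~{ndr_gbytes:.0f} GB/sec")
# Devices composed over the in-rack PCIe fabric get a bit more
# bandwidth than the network link between nodes, with no protocol
# translation in the path.
```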

The ACES machine is managed under the auspices of the NSF’s Office of Advanced Cyberinfrastructure and has a $5 million award for the hardware and a $1 million a year award to run it and provide power and cooling to it between 2022 and 2026, inclusive. The plan is to have ACES up and running by September 2022.
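That award math is what gets you to the $10 million in the headline:

```python
# Award math from the NSF figures cited above.
hardware_award = 5.0            # $ millions, one-time
ops_per_year = 1.0              # $ millions per year
years = 2026 - 2022 + 1         # 2022 through 2026, inclusive

total = hardware_award + ops_per_year * years
print(f"Total: ${total:.0f} million over {years} years")  # $10 million
```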

The heart of the system is the Liqid Matrix stack, and as we have discussed before, the malleability of a supercomputer might be more important than its raw feeds and speeds in the years to come; the ACES prototype tests this idea out in the field. For those of you not familiar with Liqid, we did a profile on the company after it came out of stealth in June 2017, talked about its three big system wins at the US Army last fall, and then gave a kind of mission statement for disaggregation and composability as 2021 got rolling, including thoughts about the second wave of composability, which is being championed mostly by Liqid and GigaIO.
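In practice, that malleability boils down to a compose-run-release loop wrapped around each job. The sketch below shows the shape of such a loop against a hypothetical fabric manager REST endpoint; Liqid does ship a management API, but none of the URLs, paths, or payload fields here come from its documentation.

```python
import requests  # real library; the endpoints below are invented

FABRIC_API = "https://fabric-manager.example/api"  # placeholder URL

def run_composed_job(job_script: str, gpus: int, fpgas: int) -> None:
    """Compose a node, run a job, then return devices to the pool.

    This compose/run/release loop is the idea ACES is testing; the
    endpoints and payloads here are illustrative only.
    """
    # 1. Ask the fabric manager to bind devices to a bare-metal host.
    node = requests.post(f"{FABRIC_API}/compose", json={
        "gpus": gpus,
        "fpgas": fpgas,
    }).json()

    try:
        # 2. Hand the composed node to the batch scheduler (stubbed).
        print(f"running {job_script} on {node['hostname']}")
    finally:
        # 3. Release the devices so the next job can claim them.
        requests.post(f"{FABRIC_API}/release", json={"node": node["id"]})
```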


1 Comment

  1. Time-to-Research and Time-to-Market are accelerated by disaggregating resources (SSDs, GPUs, FPGAs, NICs) that are sometimes stranded inside servers. Externalized into separate enclosures across high-speed fabrics, those resources can be better utilized, and they can be programmatically and dynamically scheduled and made available to the time-critical and high-priority applications that urgently need them.
