The need for high-powered computing isn’t going away. Enterprises trying to corral the massive amounts of data they’re generating and adopting emerging technologies like machine learning are demanding the sort of HPC capabilities that not too long ago were reserved for research and educational institutions. And the advances in the rapidly evolving quantum computing space are only whetting the appetite for what will come next.
However, as with other fast-moving sectors like data science, AI, and cybersecurity, the challenges are not only technological. There is also the need to make these capabilities as widely available as possible and to close the gaping skills gap that threatens to keep them out of reach for many organizations.
Ken Durazzo, vice president of the research office within the Office of the CTO at Dell Technologies, tells The Next Platform that quantum computing will be among the most transformative technologies seen in the past 50 years, driving significant shifts in scientific research and other areas.
“We’re going to see a whole bunch of really transformative types of technologies come about if we’re able to realize the potential,” Durazzo says. “There are two parts to that potential piece. One part is that the industry is going to have to continue to develop the technologies like superconducting qubits, trapped-ion technologies, etc., to fully materialize the computational power. But the other part is we’re going to have to industrialize the workforce. We’re going to have to start building the workforce.”
Making HPC and quantum available to a broad range of organizations spans a spectrum, from more instruction at the college and high school level and training for those already in the industry, to offering the technologies as managed or cloud services. It means building a skilled workforce while lifting the burden of buying and managing the hardware and software from enterprise IT, letting them focus on the work that needs to be done.
“There is generally a big lack of skill sets coming out of the universities for HPC expertise,” Armando Acosta, director of HPC product management at Dell, tells The Next Platform. “People age out and retire, and you don’t have the right set of skills to manage and maintain a cluster, efficiently optimize it, and tune it in your own environment. What we have is some customers saying, ‘I need HPC. It’s essential for my workloads. It’s essential for me to develop my product. But at the same time, I don’t want to build clusters. That’s not what I’m into. I’m into my codes and optimizing my code so that I can get to my results.’”
At the SC22 supercomputing show this week, Dell is unveiling offerings aimed at easing enterprises’ paths into both HPC and quantum. One way is through the company’s new Apex High Performance Computing, which offers HPC capabilities as a fully managed service in a single-tenant environment, consumed through a subscription model, and located in an organization’s datacenter or a colocation facility.
The service includes hardware via Dell Validated Design Solutions that comes in small (1,728 cores and 336TB of network file system storage), medium (4,224 cores and 672TB NFS), and large (9,216 cores and 1.28PB NFS) configurations running Intel Xeon Gold chips at 2.0GHz or 2.6GHz, with options for one or two Nvidia A100 GPU accelerators. There also is container orchestration through Kubernetes or Apptainer/Singularity, a cluster manager, and a job scheduler.
The Dell Validated Solutions systems include a compute-intensive configuration with 512GB of memory, aimed at digital manufacturing, and a memory-intensive configuration with 1,024GB, aimed at life sciences.
“What you see now is the environments are getting much, much more complex,” Acosta says. “What we’re trying to do with the managed service is give you the software stack that enables you to be productive. Building the cluster, it’s a necessary evil. Managing and maintaining the cluster is a necessary evil, but the value is not until you run an application, you run a job, and you get to some new insight that you’ve never had before. We’re trying to accelerate that time to value.”
It dovetails with the growing use of the cloud for HPC workloads. According to researchers at Hyperion, the cloud segment of the global HPC market is still smaller than the on-premises segment, but it is growing much faster – at about 17.6 percent a year – and will pass $11 billion in revenue by 2026.
Organizations with massive amounts of data or significant data governance concerns will likely keep running HPC in their own datacenters, given the cost of migrating all that data to the cloud, Acosta says. Others will likely do the same while relying on the cloud for bursting. That said, organizations with manageable workloads or simulation needs they want to accelerate will opt for the cloud.
Other organizations are offering similar services. For example, Hewlett Packard Enterprise two years ago introduced HPC on its GreenLake hybrid cloud platform and earlier this year made HPC a key element in its GreenLake expansion plans.
With quantum, Dell is taking a slightly different tack. In partnership with quantum startup IonQ, Dell created a scalable hybrid classical-quantum platform that organizations can use to run quantum workload simulations either on-premises or in the cloud. The Dell Quantum Computing Solution combines Dell’s PowerEdge R750xa rack servers, aimed at GPU-intensive AI workloads, with IonQ’s quantum processing units (QPUs).
It also includes Dell’s Qiskit Runtime platform for executing classical-quantum workloads and IonQ’s Aria trapped-ion system, as well as Kubernetes and the open-source Cirq and PennyLane quantum software frameworks.
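The core idea behind these simulators is that, for small numbers of qubits, a quantum circuit can be emulated exactly on classical hardware by tracking the full state vector. As a toy illustration of that principle (pure Python, not Dell’s or IonQ’s actual stack), here is a two-qubit sketch that prepares a Bell state:

```python
import math

# Toy two-qubit statevector simulation -- illustrates how quantum circuits
# run on classical simulators; purely illustrative, not a vendor API.

def apply(gate, state):
    """Multiply a gate matrix (list of rows) by a state vector."""
    return [sum(g * s for g, s in zip(row, state)) for row in gate]

s = 1 / math.sqrt(2)
# Hadamard on qubit 0, tensored with identity on qubit 1.
# Basis order: |00>, |01>, |10>, |11>
H0 = [[s, 0, s, 0],
      [0, s, 0, s],
      [s, 0, -s, 0],
      [0, s, 0, -s]]
# CNOT with qubit 0 as control, qubit 1 as target
CNOT = [[1, 0, 0, 0],
        [0, 1, 0, 0],
        [0, 0, 0, 1],
        [0, 0, 1, 0]]

state = [1.0, 0.0, 0.0, 0.0]           # start in |00>
state = apply(CNOT, apply(H0, state))  # Bell state (|00> + |11>)/sqrt(2)
probs = [abs(a) ** 2 for a in state]
print(probs)  # ~[0.5, 0.0, 0.0, 0.5]: measurement gives 00 or 11 with equal odds
```

The catch, and the reason physical QPUs still matter, is that the state vector doubles in size with every added qubit, so exact classical simulation becomes intractable beyond a few dozen qubits.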
Running simulations is a key step in building the skills that will be necessary as quantum technologies evolve, Durazzo says. Dell has been working on quantum computing and has used simulations as a key way to build its expertise.
“That toolset has been an extremely powerful toolset for us to be able to quickly get hands on a keyboard, learn about how to program quantum machines, and learn how to build algorithms that could be accelerated on quantum computers,” he says, adding that hands-on learning is one of several necessary steps. “It’s enhancing or cementing those learning experiences through experimentation. Finally, once they identify an application that’s likely to be accelerated by quantum, then they build proof-of-concepts and finally move to productization. We believe right now is the time to catalyze the industry around that whole workforce building, and we think that quantum simulation is absolutely a really powerful tool in order to get there.”
The hybrid classical-quantum system will likely be the model for quantum computing in the coming years, Durazzo says. Applications will run on classical infrastructure and will have algorithms that will be accelerated by the quantum hardware and software.
“We’ve developed this platform to be horizontally scalable, to allow for scalable access to those virtual computers, and to allow an application developer to write once and run the application either on the virtual simulators or on the physical QPUs without having to modify the application algorithm, so they have an easy way of implementing,” he says. “We created a flag in the manifest which allows the developer to choose where they want to run the app. We’re building a whole bunch of intelligence into that quantum processing layer, which will do far more eventually down the road.”
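The write-once, run-anywhere pattern Durazzo describes boils down to a routing layer that reads a backend flag from the job manifest and dispatches the unmodified circuit accordingly. A hedged sketch of what such a layer might look like, with all names, functions, and counts being illustrative assumptions rather than Dell's actual API:

```python
# Hypothetical sketch of manifest-flag backend dispatch; every name here
# is an illustrative assumption, not Dell's or IonQ's real interface.

from dataclasses import dataclass

@dataclass
class Result:
    backend: str
    counts: dict  # measurement outcomes -> shot counts

def run_on_simulator(circuit):
    # Stand-in for dispatch to a classical statevector simulator.
    return Result("simulator", {"00": 512, "11": 512})

def run_on_qpu(circuit):
    # Stand-in for dispatch to physical hardware (e.g. trapped ions);
    # the counts are made-up placeholder values.
    return Result("qpu", {"00": 498, "11": 526})

def submit(circuit, manifest):
    """Route an unmodified circuit based on a flag in the job manifest."""
    target = manifest.get("backend", "simulator")  # default to simulation
    if target == "qpu":
        return run_on_qpu(circuit)
    return run_on_simulator(circuit)

bell = "H 0; CNOT 0 1"  # the same circuit text goes to either backend
sim = submit(bell, {"backend": "simulator"})
hw = submit(bell, {"backend": "qpu"})
print(sim.backend, hw.backend)  # simulator qpu
```

The design point is that the application never changes: only the manifest flag does, which is what lets teams prototype on simulators today and retarget real QPUs later.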
A broad range of tech vendors are offering quantum simulators, including Intel, IBM (which this month introduced its 433-qubit Osprey quantum processor), Microsoft Azure, Google Cloud, Fujitsu, and Amazon Web Services.
Hyperion researchers predict a fast-growing quantum computing space – one with more than 100 suppliers – that will expand by almost 22 percent a year between 2021 and 2024, when it will hit about $900 million. Cloud and hybrid cloud environments will account for about 64 percent of the market over the next three years, they say.