Peter Ungaro, senior vice president and general manager of HPC and mission critical solutions at Hewlett Packard Enterprise and longtime CEO of supercomputer maker Cray before HPE bought the company for $1.3 billion in 2019, spoke with The Next Platform earlier this year about the dawning exascale era the world is about to step into.
Given HPE’s number-one position in the HPC and supercomputing space – thanks in large part to the marriage of HPE and Cray – and its central role in the development of the United States’ first three exascale systems, with the first planned for next year, it would have been easy for Ungaro to boast about what the vendor is doing in the rarified air of the world’s fastest supercomputers and what exascale computing will mean for the country’s HPC institutions and academic research facilities.
Instead, he spoke most passionately about what exascale could mean for mainstream enterprises and even smaller companies that are struggling to bring the massive amounts of data they’re creating under control and to leverage emerging technologies like artificial intelligence (AI), automation and analytics to draw critical business insights from that data. New hardware and software coming from the development of the exascale systems – such as the Slingshot interconnect and the Cray Programming Environment software for AI and HPC workloads – will be available to all organizations, not just those at the top of the food chain, meaning that in the not-too-distant future, enterprises will have systems in their datacenters with the same technologies and capabilities as these supercomputers, he said.
“We’re trying to build supercomputers so they look and feel like they’re from the cloud, but they scale and run and perform like a supercomputer,” he said. “We’re really blending those two worlds … to make them much more broadly applicable [to enterprises] than supercomputers in the past.”
HPE took another step in that direction this week, unveiling a plan to offer its vast HPC portfolio as a service on its GreenLake hybrid cloud platform, marrying the hardware and software offerings to another of the vendor’s key growth strategies. The goal is to enable enterprises to run pre-configured HPC cloud managed services either on premises or in colocation facilities and pay for them on a per-use basis, giving enterprises access to large amounts of compute power and storage capacity in a cloud-like environment without prohibitive upfront or operational costs.
“One of the things that we’ve been working on is building great solutions in a [capital expenditure] model, [with] the standard acquisition model that we’ve always had,” Ungaro told journalists during a conference call. “Over the past few years, we’ve started to move to a model where we can do consumption-based acquisition so you don’t have to do it by capex. What this is talking about here – and I think this is a huge step forward for us – is really being able to do it in much more of a cloud services model, which is a much more integrated environment overall and a much more increased capability. What we’re hearing from customers is that they want to have the same kind of user experience and environment that they’re getting in the public cloud.”
As we’ve written, HPE is not alone in such efforts. A growing number of established datacenter tech vendors, from Dell Technologies to Cisco Systems to Pure Storage, are offering more of their portfolios as a service as more organizations adopt hybrid cloud models. Even with all the talk about the cloud, only 20 percent to 30 percent of workloads are in the cloud. The rest remain on premises – due to a range of reasons, from data gravity and latency to security and compliance – and many will never make their way to the cloud. Given that, enterprises are looking for a more cloud-like environment in their datacenters, said Keith White, senior vice president and general manager of GreenLake, who came to HPE in late 2019 after years working for Microsoft on its Azure cloud business.
“In essence, they really want the cloud experience and that cloud experience means everything’s automated, it’s self-serve, it’s available to me very quickly and I have access to that,” White said. “They want cloud economics. They want to pay for what they use. They don’t want to have to write big checks up front and they want to have that additional capacity available for them very quickly. And they want cloud management. They want someone to manage all this for them on the backend, keeping up with capacity, performance, patches, those types of things, which frees up their valuable resources so that they can spend time on those critical business needs.”
Offering technologies as a service is also a way to make HPC – with all its complexities and costs – and AI and machine learning, which require high levels of compute, available and affordable to more organizations.
“We’re approaching this whole area fundamentally differently than traditional cloud providers,” Ungaro said. “We start with the leadership position in the market and then bring that capability to a cloud infrastructure rather than starting with a cloud infrastructure and trying to apply that to HPC. Our path is to, over time, really enable everyone to gain access to our purpose-built silicon, storage and software technologies, technologies such as Slingshot, our high-speed interconnect to reduce congestion, our Cray, Apollo and ProLiant servers that give us high-density heterogeneous compute to deliver leading performance, our HPC Performance Cluster Manager to ease management at scale, and high-capacity, high-performance GPUs you can use as accelerators to meet specific needs in both high performance computing and AI, because more and more people are using high-performance computing infrastructures to solve problems because of how challenging they are.”
HPE last year announced plans to offer its entire portfolio as a service by 2022 and has been chipping away at that plan ever since. At the center is GreenLake, which was launched in 2017 and which has grown from about 350 customers three years ago to more than 1,000 now. It’s designed to enable enterprises to launch cloud services on premises, in colocation sites or at the fast-growing edge. In the third quarter, HPE saw as-a-service orders jump 20 percent year-over-year. Now the vendor is bringing its expanding HPC lineup – HPE’s HPC and Mission Critical Systems business saw third-quarter revenue grow 25 percent, to $975 million – to the platform.
The company will launch the new HPC services in the second quarter of 2021 and will offer them in small, medium and large bundles of compute (including CPU and GPU options), storage (such as its ClusterStor parallel storage) and networking. HPE initially will offer the HPC service with its Apollo and ProLiant systems, paired with storage and networking, and later down the line will add the larger Cray systems into the mix, Ungaro said. The service also will include HPC software for handling workload and cluster management and for supporting and orchestrating containers.
The services, which enable organizations to scale their deployments up and down, will use a metering system to charge customers based on compute and storage.
“Think of that as how many gigabytes some are using or how many cores or those types of scenarios,” White said. “They basically pay us a certain amount for what they use. We really focus on sort of the key aspects of storage and the compute usage that the system has on itself.”
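The article doesn’t spell out GreenLake’s actual rate card or billing formula, but the pay-per-use model White describes can be sketched as metered usage multiplied by per-unit rates. The function and all rates below are illustrative assumptions, not HPE’s published pricing:

```python
def monthly_bill(core_hours, gpu_hours, storage_gb_months,
                 core_rate=0.03, gpu_rate=1.50, storage_rate=0.02):
    """Return an itemized charge for one billing period.

    The rates (dollars per unit) are made-up placeholders; a real
    GreenLake contract would define its own units, rates and any
    reserved-capacity minimums.
    """
    charges = {
        "compute": core_hours * core_rate,          # CPU core-hours consumed
        "gpu": gpu_hours * gpu_rate,                # accelerator hours consumed
        "storage": storage_gb_months * storage_rate # metered GB-months
    }
    charges["total"] = sum(charges.values())
    return charges

bill = monthly_bill(core_hours=200_000, gpu_hours=1_000, storage_gb_months=50_000)
print(bill)  # {'compute': 6000.0, 'gpu': 1500.0, 'storage': 1000.0, 'total': 8500.0}
```

The point of the sketch is simply that the customer pays for measured consumption after the fact, rather than writing an upfront capex check for peak capacity.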
The HPC services also will come with a range of HPE services, including the GreenLake Central software platform for managing the services, a self-service dashboard for managing the HPC clusters, Consumption Analytics for controlling cost and usage, and various HPC, AI and application services for putting HPC workloads into containers. Organizations also will have access to HPE’s Pointnext consulting services.