HPE Adapts Boosted Alletra Storage For GreenLake Utility Consumption

Nearly two years ago, Hewlett Packard Enterprise rolled out a new battle plan for storage and a new kind of storage array to implement it. This week, that new storage array – the Alletra line, one of the fastest-growing parts of the HPE business – is getting a “Sapphire Rapids” Xeon SP update.

With the Alletra line, HPE argued that most enterprise storage environments were too complex, with many different kinds of storage and all of it needing to be managed and maintained independently and with disparate management tools. This hodge-podge storage approach meant that data scientists, who have to extract value from data, could not get their hands on the data fast enough, and were making tradeoffs around efficiency, performance, and resiliency as they did their data analytics.

The Unified DataOps strategy that debuted in the middle of 2021 and that wraps around the Alletra storage involves cloud-native control features and artificial intelligence, and relies heavily on HPE's GreenLake as-a-service hybrid cloud platform. It included a new line of storage systems dubbed Alletra. At the time, HPE described the Alletra systems as a “paradigm shift for data infrastructure from edge to cloud” that through the vendor’s new Data Services Cloud Console delivered a “cloud operating and consumption experience wherever data lives.”

HPE in 2019 was the first major OEM with a plan to offer its entire product portfolio as a service, with GreenLake being the foundation of its efforts. Others have since started similar journeys, such as Dell with Apex, Lenovo with TruScale, and Cisco Systems with Cisco+. The goals are similar: Create a cloud-like as-a-service environment for organizations that includes access to enterprise hardware and software through subscription and other flexible consumption models. At the same time, the vendors become larger players in the fast-growing hybrid cloud space.

HPE says its GreenLake strategy is working. In its Q4 of fiscal 2022, total as-a-service orders rose 68 percent year-over-year and annualized recurring revenue (ARR) increased 25 percent. In the fourth quarter, HPE’s storage revenue hit more than $1.3 billion, a 6 percent year-over-year jump, and the company said its Alletra orders and revenue both grew by more than 70 percent from the third to the fourth quarter, adding that the product family carries a greater mix of subscription software and services than its other storage lines.

The Alletra numbers were built on the 5000 (for non-all-flash workloads), 6000 (high availability), and 9000 (latency-sensitive with 100 percent availability and all NVM-Express flash) systems that have rolled out since 2021. For years HPE storage server systems had been under the umbrella of the Apollo systems, but they broke out with their own Alletra branding two years ago to differentiate the cloud-native nature of the infrastructure, according to Stephen Bacon, senior director of the big data category at HPE.

“The Alletra brand equals cloud-native data infrastructure,” Bacon tells The Next Platform. “We are fundamentally unifying on the data infrastructure portfolio, enabling customers to address the full spectrum of their data requirements with that portfolio.”

That portfolio now includes the 4000 series, introduced this week to coincide with Intel’s long-awaited launch of its “Sapphire Rapids” Xeon SP processors. The idea, Bacon says, is that customers can cover the full spectrum of their data requirements within a single portfolio, “whether it be employing server-based with new Alletra 4000 offerings or array-based with the existing Alletra 5000, 6000, and 9000 offerings.”

The two systems – the 1U all-NVM-Express Alletra 4110 and 2U hybrid-NVM-Express 4120 – are aimed at mid-size to large enterprises with a range of use cases, from real-time processing and stream and batch analytics to machine learning, data lakes, video surveillance, and medical imaging, he says.

One example is a transportation authority Bacon says he spoke with about video surveillance. The authority wants to attach cameras to all of its vehicles to record road conditions as they drive the streets, so it can judge which roads need fixing. It needs an infrastructure that enables it to quickly collect and crunch that data, he says, adding that this is also an example of the growing impact of edge computing on the big data picture.

“That’s certainly been one of the more profound manifestations of change in the recent past and we absolutely expect that to continue forward and propagate further change in what it then demands out of data infrastructure, whether a data infrastructure at the edge and core data centers or the cloud,” Bacon says. “We would never have conceived of that [transportation authority example] in the past. … It’s becoming profound. The nature of change at the edge is becoming one of the greatest drivers for change in the data landscape. That’s not just at the edge but what it demands of core infrastructure in addition.”

The Alletra 4110 includes two of Intel’s new Xeons, while the 4120 offers a choice of one or two, with up to 5 TB of DDR5 memory. Both also support GPU and FPGA accelerators.

The 4110 is aimed at workloads such as data stores for machine learning, distributed and NoSQL databases, high-performance software-defined storage (SDS), and hyperconverged infrastructure. The 4120’s target workloads include analytic data lakes, SDS, converged data protection, and deep archives.

Both support Enterprise and Data Center Standard Form Factor (EDSFF) and SFF SSDs. The 4110 holds up to 20 EDSFF or SFF NVM-Express SSDs, with up to 315 GB/sec of PCI-Express 5.0 bandwidth. The 4120 holds up to 24 LFF drives in the front with four LFF, 12 EDSFF, or six SFF drives in the rear, or up to 48 SFF drives in the front with 12 EDSFF or six SFF in the rear, with up to 225.6 GB/sec of PCI-Express 5.0 bandwidth across NVM-Express, 24 Gb/sec SAS, or 6 Gb/sec SATA drives.

Data security is also key, Bacon says. HPE has adopted a zero-trust policy, an increasingly popular framework given the rising complexity of cyberthreats, which essentially requires that any person or device trying to connect to the systems be verified and trusted before the connection is allowed. There is also a five-factor authentication process for connecting devices to GreenLake and AES-256 encryption to protect the data within the platform.
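The core of the zero-trust idea is that no device is trusted by default: every connection attempt must re-prove its identity before it is allowed through. As a rough illustration of that principle only, and not HPE's actual implementation, the sketch below checks a device-presented HMAC signature and rejects anything stale or forged (the device ID, shared secret, and freshness window are all hypothetical):

```python
import hashlib
import hmac
import time

SECRET = b"shared-device-secret"  # hypothetical pre-provisioned device secret

def sign(device_id: str, timestamp: int) -> str:
    """Produce an HMAC-SHA256 signature over the device identity and time."""
    msg = f"{device_id}:{timestamp}".encode()
    return hmac.new(SECRET, msg, hashlib.sha256).hexdigest()

def verify_connection(device_id: str, timestamp: int, signature: str,
                      max_age_s: int = 300) -> bool:
    """Zero-trust check: deny unless the signature is valid AND fresh.
    Nothing is trusted by default -- every connection re-proves itself."""
    if time.time() - timestamp > max_age_s:
        return False  # stale credential: deny
    expected = sign(device_id, timestamp)
    return hmac.compare_digest(expected, signature)

now = int(time.time())
token = sign("edge-cam-042", now)
print(verify_connection("edge-cam-042", now, token))         # legitimate device
print(verify_connection("edge-cam-042", now, "forged-sig"))  # impostor, denied
```

A real deployment layers far more on top (mutual TLS, hardware roots of trust, the five-factor process mentioned above), but the deny-by-default shape is the same.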

Self-management in GreenLake comes via HPE’s Compute Ops Management tools, while REST APIs enable management operations to be automated.
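Automating management through a REST API generally means scripting the same calls a console would make. The sketch below builds such a call; the endpoint, path, and token are entirely hypothetical, not the documented Compute Ops Management API, and the request is constructed but never sent:

```python
import json
import urllib.request

# Hypothetical base URL and token -- illustrative only, not HPE's actual API.
API_BASE = "https://greenlake.example.com/compute-ops/v1"
TOKEN = "example-bearer-token"

def build_power_request(server_id: str, action: str) -> urllib.request.Request:
    """Construct (but do not send) a REST call that would trigger a
    power action on a managed server through a hypothetical endpoint."""
    body = json.dumps({"action": action}).encode()
    return urllib.request.Request(
        url=f"{API_BASE}/servers/{server_id}/power",
        data=body,
        method="POST",
        headers={
            "Authorization": f"Bearer {TOKEN}",
            "Content-Type": "application/json",
        },
    )

req = build_power_request("srv-001", "restart")
print(req.get_method(), req.full_url)  # POST https://greenlake.example.com/...
```

Sending the request with `urllib.request.urlopen(req)` (or wrapping such calls in a loop over a server inventory) is what turns one-off console clicks into repeatable automation.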

The goal is to create a data infrastructure that comes with the cloud-like flexibility and ease of use offered through the GreenLake platform without the unpredictable costs associated with public clouds, he says. The infrastructure also needs to be able to quickly grow and adapt to a volatile and difficult-to-predict data landscape.

“The one thing that we can certainly predict is that data will become more and more and more demanding than it has been in the past, than it is today,” he says. “The fact that data demand will become more demanding is the one thing that we and the industry can predict with confidence, the dimension of which becomes highly debatable unless having an infrastructure that’s fundamentally built for data applications.”

 
