The Evolution Of Hyperconverged Storage To Composable Systems
April 12, 2018 Jeffrey Burt
Hyperconverged infrastructure in some ways is like the credit card in those old TV ads: in this case, it’s everywhere that enterprises want to be. HCI puts compute and storage on the same cluster, tightly integrates them with networking and unified management tools, and essentially gives enterprises a private cloud for the datacenter while also pushing compute out to the edges in a consistent manner.
HCI also promises a bunch of other things beneficial to enterprises, including streamlined management, lower costs, faster performance, and easier scalability than traditional IT systems, the better to address the rise of cloud computing, analytics, machine learning, and other emerging workloads. All of this is contributing to the rapid growth of a market that some predict will expand 50 percent or more a year, from about $4.5 billion this year to almost $6.5 billion in 2019.
As we noted here at The Next Platform, HCI is distinct from what hyperscalers like Facebook, Google, and Amazon Web Services are doing as they completely disaggregate server and storage resources. Scale has its own issues; hyperscale has others.
The enterprise scale issue – hundreds of nodes at best – is why most OEMs over the past few years have rolled out their own hyperconverged portfolios, with many of them combining their hardware offerings with specialized hyperconvergence software from vendors like Nutanix. Dell, armed with VMware through its pricy acquisition of EMC, is the world’s top HCI vendor, with Nutanix coming next and Hewlett Packard Enterprise and Cisco in a statistical dead heat. While many system makers partner with the likes of Nutanix, others have decided to buy software firms to build hyperconverged solutions that more tightly integrate the hardware with the software. For example, Cisco Systems bought Springpath last year after partnering with the software maker in 2016 for the OEM’s HyperFlex hyperconverged systems.
HPE made a similar move in 2017, growing its hyperconverged capabilities with the $650 million acquisition of SimpliVity, a startup similar to Nutanix that brought with it hyperconvergence software as well as partnerships with other OEMs. HPE had built out its converged infrastructure offerings and was selling some hyperconverged solutions that included products from its ProLiant servers, Apollo lineup of HPC systems, and its StoreVirtual software. With SimpliVity, HPE gained a high-profile company upon which it could grow its hyperconverged ambitions.
But as HPE builds out its hyperconverged infrastructure capabilities, it’s also working to expand its Synergy composable infrastructure portfolio, which McLeod Glass, vice president and general manager of the SimpliVity, Composable and Software Defined Product Management unit for HPE, told The Next Platform is the natural next step in the evolution of HCI.
Over the past year, HPE engineers have worked to integrate SimpliVity into the larger company, Glass says. The vendor last year combined the SimpliVity hyperconvergence platform software with the ProLiant DL380 server (below) to create the SimpliVity 380, an HCI offering that includes the company’s 4000 and 6000 series all-flash storage. The solution offers enterprises options to address multiple workload sizes, built-in resilience, backup and disaster recovery capabilities, and the ability to create local backups and restore up to a terabyte-size virtual machine in less than 60 seconds, he says.
“If you look at what’s driven hyperconverged, it’s about customers looking for an easier way to manage their infrastructure,” explains Glass. “They’re working on ways to streamline new workloads and optimize the cost of their infrastructure. They’re looking for the ability to do things and to get the economics you get with cloud, only with a little more control. And that’s been the real driver behind hyperconverged. … Our whole view of the market is that it’s going to be hybrid, you’re going to have workloads and assets that are going to be on-premises, you’re going to have stuff that’s going to be in the cloud and at the edge as well, and that hyperconvergence is an on ramp and first step in terms of really delivering on that software-defined infrastructure and the capability to really start to manage at a level above where we’ve traditionally seen that happen.”
As we have mentioned before, the network edge is becoming a focus for enterprises that are looking for ways to take advantage of the massive amounts of data being generated by the billions of connected devices and systems. Tech vendors are offering infrastructure resources and software that can be deployed at the edge to more quickly and efficiently collect, store, process, and analyze data closer to where it’s generated. Like other OEMs, HPE has made the edge a key focus.
“The edge is the world outside the data center, and it is where digital transformation begins,” HPE president and chief executive officer Antonio Neri said during a conference call to discuss the latest financial numbers. “It is where enterprises interact with their customers, where employees come together and where companies manufacture their products. We are seeing a data power evolution happening at the edge as customers leverage the unprecedented amount of data being created to drive their businesses. We have highly differentiated offerings in this space, including wireless LAN, network switching and converged edge systems that bring together compute, storage, security and artificial intelligence.”
Glass pushes back against the idea that in the hyperconverged space the hardware is becoming less important than the software and firmware running on top of it. The tight integration of hardware and software, along with capabilities such as hardware-based security features, is an important part of any hyperconverged solution, he argues.
“Anybody who thinks that they don’t have to manage some aspects of the hardware, that there’s not value associated with the hardware, hasn’t managed the infrastructure very much,” Glass says. “The integration in terms of support, the integration in terms of how you manage all aspects of the firmware and all the other software pieces associated with your core value-added software stack, having all of that integrated into a single solution and having that delivered, that’s delivering the simplicity that customers want. There’s a reason why a lot of people want to have their solutions on ProLiant servers. Yes, there’s a ton of value in the software, but when you bring that value of the software tightly integrated with the value of the infrastructure, you can build even more compelling and more economically friendly solutions for our customers.”
HPE’s SimpliVity 380 combines the 2U ProLiant DL380 with an update of SimpliVity’s OmniStack, bringing together an array of datacenter resources: not only compute, storage, and networking, but also backup, replication, hypervisor, deduplication, and WAN optimization. The vendor in March rolled out a number of upgrades to the SimpliVity 380 offering, including an XL configuration that offers more storage capacity and more cost-efficient backup for virtualized environments; version 2.1 of HPE SimpliVity RapidDR, which adds a failback feature for automated disaster recovery and support for 600 VMs in a single recovery plan; and support for HPE SimpliVity in the company’s OneView Global Dashboard for enterprise datacenters. The plan going forward for HPE is to expand the use of the SimpliVity software into other ProLiant systems and the company’s storage portfolios, Glass said.
However, as much attention as hyperconverged infrastructure is getting, it’s a stepping stone toward composable infrastructure, Glass said, which HPE is building out through its Synergy offerings. To be sure, HPE isn’t the only vendor chasing composable infrastructure. Companies like TidalScale, HTBase and Liqid also are in the market, but HPE gets most of the attention.
“We think that’s the next step in the continuation of where hyperconverged is today,” says Glass. “It’s a movement into composable infrastructure, where you have pools of resources that you can control or manage via software through a common API, delivering a very robust solution from that standpoint. The overall market is going to continue to evolve to where customers are demanding that their solutions be easy to implement, that we move toward less time on operation and more time on innovation. Today it’s mostly workgroup types of applications – kind of core applications – and I think you’ll see more what you would classify as business-critical applications on hyperconverged type of infrastructure. If you look at composable today, it’s absolutely an environment where we see customers moving business-critical workloads and applications onto already.”
To Glass, composable is about creating a fluid set of resource pools that enterprises can manage, deploy, and use through software without having to worry about the underlying hardware. The company launched its Synergy effort in 2015 and has continued to update it, such as working with VMware to create an offering that brings together Synergy and the VMware Cloud Foundation software-defined datacenter solution. HPE is seeing a wide range of enterprises embracing Synergy – from manufacturing and oil and gas firms to genomics research companies – and many are adopting Synergy with SimpliVity. DevOps is another use case.
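The idea Glass describes – fluid pools of compute, storage, and fabric that software claims and releases through a common API – can be sketched in a few lines of Python. To be clear, this is a conceptual illustration only, not HPE’s Synergy or OneView API; the `ResourcePool` and `ComposedSystem` names and the unit choices are hypothetical.

```python
# Conceptual sketch of composable infrastructure (hypothetical names,
# not HPE's API): hardware is modeled as shared pools, and a "composed"
# system is a software-defined claim against those pools.

class ResourcePool:
    def __init__(self, name, capacity):
        self.name = name
        self.capacity = capacity   # total units (cores, GB, or Gbps)
        self.allocated = 0

    def claim(self, amount):
        if self.allocated + amount > self.capacity:
            raise RuntimeError(f"{self.name} pool exhausted")
        self.allocated += amount

    def release(self, amount):
        self.allocated -= amount


class ComposedSystem:
    """A logical server assembled from shared pools via one API call."""

    def __init__(self, pools, cores, storage_gb, bandwidth_gbps):
        self.spec = {"compute": cores, "storage": storage_gb,
                     "fabric": bandwidth_gbps}
        self.pools = pools
        for kind, amount in self.spec.items():
            pools[kind].claim(amount)

    def decompose(self):
        # Returning resources to the pools is what makes them "fluid":
        # the same hardware can back a different workload a minute later.
        for kind, amount in self.spec.items():
            self.pools[kind].release(amount)


pools = {
    "compute": ResourcePool("compute", capacity=256),    # cores
    "storage": ResourcePool("storage", capacity=10240),  # GB
    "fabric":  ResourcePool("fabric",  capacity=100),    # Gbps
}

vm_host = ComposedSystem(pools, cores=32, storage_gb=2048, bandwidth_gbps=10)
print(pools["compute"].allocated)  # 32 cores now claimed
vm_host.decompose()
print(pools["compute"].allocated)  # 0: cores returned to the pool
```

The point of the sketch is the lifecycle: composing a system is an allocation against shared capacity, and decomposing it returns that capacity for reuse, with the operator never touching physical hardware.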
“It’s the ability with the ecosystem that we have, we’ve got a lot of customers that very quickly can stand up a DevOps environment and a core set of capabilities that allow them a platform-as-a-service-type of implementation within their on-premises private cloud with Synergy and composable infrastructure,” Glass says.