Why Hyperconvergence Hasn’t Yet Taken Off At The High End

We spend a lot of time at The Next Platform thinking about technologies that trickle down from on high – whether they come from HPC centers or hyperscalers – and gradually go mainstream and end up in the datacenters of large enterprises. Not every technology starts at the top and cascades down, however. Sometimes, they start in the middle and move up.

Or, in the case of hyperconverged infrastructure, which mashes up virtualized servers and virtualized storage area networks onto the same clusters, the idea gets its inspiration from the likes of Google (or so many of the hyperconvergers say in their marketing messages) and has really taken off among midrange companies looking to simplify their infrastructure or large enterprises that are aiming hyperconverged systems at very specific and often greenfield workloads.

This parallels, in a way, the emergence in the mid-1990s of Beowulf clustering software for running scientific simulations and of the Message Passing Interface (MPI) for distributing calculations across a cluster. The difference with hyperconvergence is that everyone is trying to get rich and no one has yet had the nerve to go open source to try to gain an advantage.

At first, the killer app for hyperconvergence was virtual desktop infrastructure, which means hosting a PC image or its applications back in the datacenter and streaming it out over the network to a PC, tablet, or smartphone. VDI used to require a real SAN because you needed to be able to do live migrations, but once the SAN cord was cut and the price of an overall VDI setup came way down, not surprisingly implementations skyrocketed.

It is perhaps a coincidence that VDI became commercially viable (thanks in part to a change in Microsoft Windows desktop licensing) around the same time that hyperconvergence upstarts like LeftHand Networks, Nutanix, Maxta, Pivot3, SimpliVity, Scale Computing, and others were getting their products polished and ready for the enterprise. The IT market has seen lucky timing like this throughout its history.


EMC acquired ScaleIO for somewhere between $200 million and $300 million two years ago, and as far as we are concerned it has the real Google-class, scale-out hyperconverged story to tell, with a purported ability to scale to thousands of nodes in a cluster. VMware has joined the hyperconverged fold more recently with its VSAN, and it has an installed base of 500,000 server virtualization customers to chase, but VSAN only scales to 64 nodes. And Nutanix has some large enterprises that are deploying its appliances in the hundreds – spending six, seven, or eight figures at a pop – to run big relational databases, ERP applications, and e-commerce software.

“I don’t think hyperconvergence solves all problems, but it solves a lot of problems,” Rob Strechay, director of product marketing and management for software-defined storage at Hewlett-Packard, tells The Next Platform. HP wants to get a bigger bite out of the hyperconverged systems market, which grew by 162.3 percent in 2014 to $373 million and which is expected to grow by 116.2 percent this year to hit $807 million. There are over two dozen companies chasing the hyperconverged dollars, and more will no doubt enter the fold.

LeftHand Networks was not only a pioneer in iSCSI storage but also created an architecture that could support both disk and flash seamlessly, and back in 2007 it pioneered the idea of software-based SANs running on servers, an approach initially called a virtual server array in the industry. HP bought LeftHand in October 2008 for $360 million; at the time its virtual SAN was called SAN/iQ and it was already running on HP ProLiant servers, using Xeons for its storage brains. The software is now called the StoreVirtual VSA, and for some bizarre reason the big reports that talk about the hyperconvergence market seem to ignore HP. (Nimboxx and Compuverde did not make the latest IDC MarketScape cut, either, and readers have to pencil in their own assessments of these vendors to get a more complete picture.)

The thing is, HP probably has the largest hyperconverged customer base in the world right now, with tens of thousands of shops pairing VMware’s ESXi software with the StoreVirtual VSA. The StoreVirtual VSA software runs on any X86 server and also works with Red Hat’s KVM and Microsoft’s Hyper-V hypervisors, and importantly, it has multi-site resiliency, flash optimization, and data tiering built in. HP has also shipped well over 1 million freebie licenses of the StoreVirtual VSA software (capped at a maximum of 1 TB of total storage) just to let customers play around with it, and this has quietly helped drive its expansion, too.

Taking On Nutanix

To get a bigger piece of the market, HP has to take on Nutanix and its partner, Dell, which resells the Nutanix stack on top of its PowerEdge-C hyperscale systems. Nutanix has cooked up its own variant of the KVM hypervisor, called Acropolis, so it can lower the cost of its overall server-SAN hybrid. Rather than try to match the Nutanix Xtreme Computing Platform, as that stack has been called since a rebranding in June, or push VSAN too aggressively on its ProLiant machines, HP is going to sell its StoreVirtual VSA on its own Apollo 2000 hyperscale-class machines, which provide a kind of density that other hyperconverged appliances often do not. Moreover, HP is going to compete on price with Nutanix.

The resulting appliance has the very unwieldy name of the HP ConvergedSystem 250-HC StoreVirtual, and as we suggested to Strechay, HP should really have a big rethink on its product names once HP Enterprise splits off from the PC business.

The new CS-250 is based on the Apollo 2000 chassis, which puts four server nodes into a single enclosure. Specifically, the Apollo 2000 enclosure can hold up to four ProLiant XL170r server sleds, each with two I/O slots and taking up 1U of height at half width. These are configured with two “Haswell” Xeon E5 processors each, either the eight-core E5-2640 (which clocks at 2.6 GHz) or the twelve-core E5-2680 (which clocks at 2.5 GHz). The nodes come with 128 GB of main memory, which can be increased to 256 GB or 512 GB, depending on the needs of the workloads. As for storage, each node can have a hybrid setup with four 1.2 TB SAS drives and two 400 GB SSDs, or a capacity configuration with six 1.2 TB drives. That gives up to 96 cores, up to 2 TB of memory, and up to 28.8 TB of disk in a single appliance. The disk capacity of the appliance can be extended with JBOD enclosures, which will look like other nodes in the StoreVirtual VSA storage pool. The ProLiant XL170r nodes have two 10 Gb/sec Ethernet ports to link to each other and to the outside world. The CS-250 appliances come with three 4 TB StoreVirtual VSA licenses per node.
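As a sanity check on those per-appliance maximums, the arithmetic can be tallied from the per-node figures quoted above (the breakdown below is our own, not HP's spec sheet):

```python
# Per-appliance maximums for the HP CS-250, built on the Apollo 2000
# chassis, tallied from the per-node figures quoted in the article.
NODES_PER_APPLIANCE = 4
CORES_PER_NODE = 2 * 12           # two twelve-core E5-2680 Xeons
MAX_MEMORY_GB_PER_NODE = 512      # top memory option
DISKS_PER_NODE = 6                # capacity configuration
DISK_TB = 1.2                     # 1.2 TB SAS drives

cores = NODES_PER_APPLIANCE * CORES_PER_NODE
memory_tb = NODES_PER_APPLIANCE * MAX_MEMORY_GB_PER_NODE / 1024
disk_tb = round(NODES_PER_APPLIANCE * DISKS_PER_NODE * DISK_TB, 1)

print(cores, memory_tb, disk_tb)  # -> 96 2.0 28.8
```

The totals line up with the quoted maximums of 96 cores, 2 TB of memory, and 28.8 TB of disk per appliance.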

HP will be selling three-node configurations of the CS-250 starting on September 28 with a list price for configured nodes of $121,483. This price includes the VMware vSphere Enterprise edition server virtualization stack. (We are not clear on what the fourth node slot is supposed to be used for, but it is presumably for extra storage or perhaps a redundant spare.) A four-node configuration will ship on August 17; no pricing was given, but somewhere around $160,000 is probably a good guess. HP calculates that this pricing makes it 49 percent more cost effective than other hyperconverged setups.
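For those keeping score on the pricing, the back-of-the-envelope math behind that guess is straightforward (the per-node figure and the four-node extrapolation are ours, not HP's):

```python
# List price for a three-node CS-250 configuration, which includes
# the VMware vSphere Enterprise server virtualization stack.
three_node_list = 121_483

per_node = three_node_list / 3       # roughly $40,494 per configured node
four_node_estimate = per_node * 4    # roughly $161,977, in line with the
                                     # ~$160,000 ballpark above

print(round(per_node), round(four_node_estimate))
```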

Importantly for VMware shops, the CS-250 comes preconfigured for VMware’s ESXi 5.5 and 6.0 hypervisors and hooks into the vCenter management console through HP’s own OneView management tool. By the way, you can’t fire up VSAN on this iron, but there is nothing preventing you from buying raw Apollo 2000 machines and doing it yourself if you want to.

As for scalability, Strechay says that the StoreVirtual VSA can scale across eight of these CS-250 appliances, for a total of 32 nodes, which gives a maximum of 768 cores, 16 TB of memory, and 230 TB of disk capacity in a single server-storage hybrid. The largest customers that HP has running the StoreVirtual VSA software tend to have up to two dozen nodes, which is not pushing the upper limit of this software, but it is not the thousands of nodes in the ScaleIO software offered by EMC, either. Just because a server-SAN hybrid can scale to thousands of nodes doesn’t mean it should.
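Those cluster-level ceilings follow directly from the per-node figures, as a quick check shows (again our own arithmetic; the quoted 230 TB is the 230.4 TB raw total rounded down):

```python
# Maximum StoreVirtual VSA scale quoted for CS-250 clusters:
# eight appliances of four nodes each.
APPLIANCES = 8
NODES = APPLIANCES * 4                # 32 nodes in total

cluster_cores = NODES * 24            # two twelve-core Xeons per node
cluster_memory_tb = NODES * 512 / 1024  # 512 GB maximum per node
cluster_disk_tb = round(NODES * 6 * 1.2, 1)  # six 1.2 TB drives per node

print(NODES, cluster_cores, cluster_memory_tb, cluster_disk_tb)
# -> 32 768 16.0 230.4
```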

“Enterprises want smaller fault domains and they are not installing huge clusters,” explains Strechay. “A typical large enterprise deploys hundreds of terabytes of capacity on these hyperconverged systems, not petabytes.”

This stands to reason. Even when customers have racks and racks of traditional SANs, they are not generally interconnected into some kind of uber-SAN. Each SAN is linked to its relevant servers and supports its specific workloads. This is no different, in concept.

Moreover, enterprise customers building virtualized infrastructure at a large scale are, for the moment, inclined to stack up ConvergedSystem 700 blade servers and 3PAR StoreServ 7200 disk arrays and dial up the amount of compute and storage capacity they need. Large enterprises are still thinking in terms of racks and they are not, as yet, thinking of hyperconverged storage.

The interesting bit for us to contemplate, then, is creating a very large cluster that can be partitioned like many virtual SANs and virtual compute clusters and reconfigured on the fly as those workloads change. This, we think, would be useful and would help perhaps to spur adoption of hyperconverged solutions among large enterprises.



  1. I can understand why HP dropped EVO from their catalog; it was inflexible and honestly a poorly designed platform, and at the price point they placed on the system, there was no way for them to be competitive against the other players who are pushing it. None of the EVO systems have much in the way of additional value add that would make them a competitive play vs the other established HCI systems.

    For HP, moving forward with their own HCI solution that allows them to wrap their software stack around it is the way to go and be relevant in the HCI space. That said, I’ve never once seen HP win a competitive deal against the likes of Nutanix or SimpliVity with their Lefthand VSA solution. That may change with the new systems they are providing, but I think it will be tough for them to gain traction against the other solutions that are far more mature and offer more value and specific integration across multiple hypervisors.

    I think it’s foolish to claim HP has a leading market share in HCI. Giving away the LeftHand stuff, especially at the sizing point that they have, cannibalizes the SMB space where HCI fits really well. It’s like Dell claiming they ship more storage than everyone else because they count the disks inside servers; it’s a shell game to make marketing claims. The reality is, no one really looks at the VSA model as a hyperconverged system, and more to the point, that solution isn’t competitive.

  2. “This stands to reason. Even when customers have racks and racks of traditional SANs, they are not generally interconnected into some kind of uber-SAN. Each SAN is linked to its relevant servers and supports its specific workloads. This is not different, in concept.”

    It’s not just about fault domains; it’s about supportability (who do I have to tell at the Change Control meeting if I want to do an upgrade) and compatibility – support matrices aren’t massively wide; you could run old hardware on new technology, but not get vendor support.
