There are a lot of ways to build or buy hyperconverged server-storage hybrids, and EMC is working with sister company VMware to bring yet another way to market that could, in the end, be the preferred way for many enterprise customers.
The new VxRAIL systems being launched through EMC’s VCE hardware business also set the stage for a server-based storage cluster based on Dell PowerEdge iron when the $67 billion acquisition of EMC by Dell closes later this year.
Last year, EMC took over the VCE partnership set up by itself, hardware supplier Cisco Systems, and server virtualization juggernaut VMware. When Cisco launched its Unified Computing System blade servers back in 2009, the Project Acadia partnership, as the Virtual Computing Environment Company was known, was set up to be a system integrator and reseller of their respective compute, storage, and virtualization elements, creating prefabricated clouds called vBlocks.
With the vBlocks, the storage is standard SAN or NAS fare from EMC, while the compute and networking are based on the UCS machines. EMC and Cisco did their own sales for the vBlocks, which were manufactured in factories in Franklin, Massachusetts, and Cork, Ireland, and VCE supplied the one throat to choke for support that enterprise customers always want. Two years later, the sales teams at Cisco and EMC were moved over to VCE and it started operating like a tier one supplier in its own right; last year the whole thing was absorbed by EMC as Cisco finds itself competing more and more with VMware in networking. With the launch of the VxRAIL hyperconverged appliances today, VCE is back in its role of integrator of EMC hardware and VMware software, but the storage is all coming from VMware’s latest Virtual SAN 6.2, which we detailed last week when it debuted.
Even before this launch, VSAN had quickly become the volume leader in the hyperconverged arena, blowing past upstart server-SAN cluster maker Nutanix, which got this whole market rolling when it uncloaked from stealth mode in the summer of 2011 and essentially has forced VMware to create, extend, and aggressively sell VSAN. Nutanix came out running its hyperconverged storage atop VMware’s ESXi hypervisor, but expanded to support Microsoft’s Hyper-V and last year launched its own “Acropolis” variant of the open source KVM hypervisor so it can break free of its dependence on VMware’s stack and also keep money from going into rival VMware’s coffers. (Nutanix launched with the phrase “Ban the SAN” as its mantra, but these days it is chanting “Now ban VMware.”)
In a twist, Dell is a big distribution partner for Nutanix software on its hyperscale systems, and it is also very keen on selling VMware’s VSAN. But over the long haul, after the EMC deal closes and it controls VMware, we can envision Dell focusing on VSAN stretched across its own PowerEdge hardware in its enterprise and hyperscale form factors. We can also see the VCE skills and management tools being brought into the PowerEdge fold, but not in a way that will disrupt existing customers who are relying on Cisco servers and switching.
Chasing This Year’s $1.5 Billion Opportunity
The one thing that VMware, VCE, EMC, and Dell all will no doubt agree on is that they want to catch the hyperconvergence wave, which is growing fast and which could constitute a big chunk of the server and storage market a few years from now: very likely on the order of several billion dollars per year, and more if companies really start building private clouds with integrated block and file storage in earnest.
The VxRAIL appliances from VCE, in a sense, give us a glimpse of such machinery. Like many other hyperconverged appliances, VCE is starting from a density optimized server chassis that crams four server nodes into a single 2U enclosure. Trey Layton, chief technology officer at VCE, did not divulge which server manufacturer VCE is using, but it is certainly not a Cisco machine because the company does not sell such density optimized iron. Layton says that there are a number of different server and storage options for VxRACK and, furthermore, that one of those setups is precisely the same iron that is used to run VxRAIL. We know that the VxRACK iron comes from Quanta Computer, so we know the VxRAIL iron does, too. Layton did tell The Next Platform that once the Dell acquisition closes, VCE will have the option of creating VxRAIL machines based on Dell iron as well.
The intent of the VxRAIL appliances is to bring VMware’s VSAN software to a broader set of customers, including not only large enterprise datacenters but also remote offices and small and midrange customers. As far as The Next Platform is concerned, it is the large enterprises that are the most interesting possible targets, and there is no question that the VxRAIL setups will be able to meet the scalability needs of most enterprises without having to resort to vBlocks or VxRACK. The VxRACK machines have direct attached storage on the compute nodes and run EMC’s ScaleIO scale-out clustered block storage on the nodes as well as ESXi for compute. This makes VxRACK a mix of server virtualization and scale-out storage, but it is architecturally a little different from VMware’s VSAN, even if VxRACK, at scalability above 1,000 nodes, is a lot more scalable thanks to the ScaleIO software.
“VxRAIL is optimized for single vSphere cluster deployments, which scales to 32 nodes with vSphere 5 or 64 nodes with vSphere 6,” Layton explains. “When you are talking about VxRACKs, you are talking about hundreds of nodes, but included is the top of rack networking infrastructure, as well as the spine infrastructure.”
According to VCE, the largest VxRACK customer has deployed 28 racks of gear, which is quite a bit, we admit. But this is still something of an arbitrary distinction between VxRAIL and VxRACK, given that cluster sizes are the same on the ESXi hypervisor and therefore the vSphere server virtualization stack will have the same fault domain. (Yes, the ScaleIO block storage can scale further. We understand that.) Our point is that for enterprise customers, scaling up to sixteen appliances and 64 nodes per rack to deliver an average of 3,200 virtual machines is a pretty big hyperconverged setup on which to deploy applications for most large enterprises. And in a single management domain, the VxRAIL setup could have ten of these racks linked together to span 640 server nodes and 32,000 VMs. Just to give you a sense of scale, inside of Microsoft’s actual Azure public cloud, as The Next Platform recently revealed, a pod of compute capacity is 20 racks with 960 nodes, which is then virtualized by Hyper-V.
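The scaling figures above reduce to simple arithmetic. Here is a quick sketch; the 50 VMs per node density is implied by the averages cited above, not a VCE specification:

```python
# Back-of-the-envelope VxRAIL scaling math, using the figures in this article.
NODES_PER_APPLIANCE = 4     # four server nodes per 2U enclosure
APPLIANCES_PER_RACK = 16    # sixteen appliances per rack
VMS_PER_NODE = 50           # assumption: implied by 3,200 VMs across 64 nodes
RACKS_PER_DOMAIN = 10       # single management domain spanning ten racks

nodes_per_rack = NODES_PER_APPLIANCE * APPLIANCES_PER_RACK
vms_per_rack = nodes_per_rack * VMS_PER_NODE
total_nodes = nodes_per_rack * RACKS_PER_DOMAIN
total_vms = vms_per_rack * RACKS_PER_DOMAIN

# 64 nodes and 3,200 VMs per rack; 640 nodes and 32,000 VMs per domain
print(nodes_per_rack, vms_per_rack, total_nodes, total_vms)
```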
To our way of looking at it – and no doubt to those shopping for hyperconverged storage – the new VxRAIL systems are a means for companies to buy VSAN appliances with everything but the networking preconfigured and with the storage hyperconverged with the compute on the same nodes, all from VMware and not relying on ScaleIO block storage. Customers can create pods of VSAN storage and vSphere compute, and that is precisely what we expect many customers will do despite the fact that VCE is pitching this initially as an edge cloud infrastructure solution rather than the scale-out setup required in the datacenter.
The VxRAIL systems from VCE have single node servers in either 1U or 2U form factors as well as the four-node machine described above. The nodes run either the vSphere 5 or vSphere 6 server virtualization stack as well as the new VSAN 6.2 virtual SAN software, and come with vCenter licenses for managing the whole shebang. Server nodes can be configured in different ways across a cluster but have to have the same configuration inside of a four-node server enclosure. The servers are based on two-socket “Haswell” Xeon E5 v3 processors from Intel, and there are different options available in both all-flash and hybrid flash/disk setups.
The lifecycle management software that VMware cooked up for its initial EVO:RAIL clusters has been moved over to the VxRAIL appliances, and existing VCE customers who have deployed on either EVO:RAIL or VSPEX Blue appliances can convert these systems to VxRAIL as well. The VxRAIL setups also have EMC’s RecoverPoint disaster recovery and data protection software running inside VMs and can back up VMs and their data to either the vCloud Air cloud run by VMware or other public clouds.
A base VxRAIL appliance with four nodes, including the hybrid disk/flash setup with a modest amount of compute, memory, and the entire VMware software stack, costs $60,000. We asked for fatter configurations of the flash and hybrid setups and for the editions of vSphere and VSAN included in these base and full configurations, but VCE had not supplied them at press time. (See our coverage of VSAN 6.2 and the pricing for various editions at this link.)
“When you look at standalone arrays, I think the costs are even more favorable for hyperconverged storage than many people’s models currently assume,” says Skip Bacon, vice president of storage and availability products at VMware. “We often do not look at the cost of the special purpose networking, for example, and if you assume the compute costs are the same across both, then this becomes really compelling.”
None of the presentations we have seen from the hyperconverged storage players has shown how they stack up in terms of performance against each other and against real SANs talking to virtualized servers, as the vBlocks do, for instance. What VMware has said, as we showed in the VSAN 6.2 coverage, is that the all-flash nodes running the software can deliver 100,000 I/O operations per second per node with sub-millisecond average latency, while the hybrid setups can deliver 40,000 IOPS.
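Taken at face value, those per-node figures can be turned into rough cluster-level ceilings. A minimal sketch, assuming (optimistically) that IOPS scale linearly with node count; real clusters will give up some of that to replication traffic and network overhead:

```python
# Rough aggregate-throughput arithmetic from VMware's published per-node
# VSAN 6.2 figures; linear scaling is an assumption for illustration,
# not a benchmark result.
ALL_FLASH_IOPS_PER_NODE = 100_000
HYBRID_IOPS_PER_NODE = 40_000

def cluster_iops(nodes: int, per_node: int) -> int:
    # Assumes IOPS scale linearly with node count, which real workloads
    # (replication, rebuilds, network limits) will not fully achieve.
    return nodes * per_node

# A full 64-node all-flash vSphere 6 cluster versus a base 4-node hybrid appliance
print(cluster_iops(64, ALL_FLASH_IOPS_PER_NODE))  # 6,400,000
print(cluster_iops(4, HYBRID_IOPS_PER_NODE))      # 160,000
```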
In talking to VCE, we suggested that, with Microsoft’s Azure Stack software in testing now and coming out later this year, VCE might also consider doing something we might call AxRAIL and AxRACK, but Layton said that for now, VCE is exclusively using VMware virtual compute and storage. Dell, however, has no such exclusivity and will no doubt offer private clouds based on Azure Stack, much as it sells custom servers to Microsoft for the actual Azure public cloud. Ditto for OxRACK and OxRAIL, which would be OpenStack variants of the same idea.
Once the Dell deal for EMC is done, it will be interesting to see what VCE does and where it ends up inside of Dell’s enterprise organization.