Any new system architecture gets its best chance of being adopted in the enterprise when it is paired with a greenfield workload that is more or less isolated from other software running in the datacenter. So it is with hyperconverged architectures, which are mashing up server virtualization clusters and shared storage and getting traction because companies want to try this new thing out but they don’t want to bet the server farm.
It is not an accident that the rise of Unix coincided with the commercialization of the Internet. Once Unix got its foot in the door, it quickly expanded to other workloads because the open systems movement provided a modicum of platform portability compared to the proprietary and incompatible systems that came before it; once Linux entered the picture, it took over the same role and added open source to the movement, pushing Unix into legacy status. The rise of Windows in the datacenter followed the same path, albeit among companies that were familiar with Windows on the desktop and wanted the same environment on their servers to, in theory, make their lives easier and to have an alternative to Unix as well. New applications compel a trial on a new platform, and then that platform tries to take over.
VMware does not expect its EVO:RAIL preconfigured cluster appliances, which run its ESXi/vSphere server virtualization stack and its Virtual SAN (VSAN) software, to take over the world, knocking out real storage area networks such as those made by its parent company, EMC. At least it will never say so publicly. But provided that the company keeps adding features to the VSAN software to bring it closer to what “real” SANs offer, and does so at a reasonable level of performance and with an improvement in price/performance, then no matter what VMware or EMC say, the EVO:RAIL hyperconverged platforms or homegrown stacks of vSphere plus VSAN will indeed displace server clusters and outboard SANs.
Before that can happen, VMware needs to scale up the EVO:RAIL configurations to support larger workloads and loosen up the appliance configurations a bit so companies have some alternatives. And that is precisely what VMware is doing this week.
The original EVO:RAIL appliances were based on density-optimized machines that packed four two-socket Xeon E5 server nodes into a single 2U chassis; the nodes were based on six-core Xeon E5 processors and memory could scale up to 192 GB. Each node had one 32 GB flash stick or one 146 GB SAS drive for ESXi booting, one 400 GB flash disk for VSAN read and write caching, and three 1.2 TB SAS drives for VSAN capacity. With the updates announced this week (probably not coincidentally ahead of the Nutanix NEXT user conference, which starts on Tuesday), VMware is offering customers the option of both “Ivy Bridge” and “Haswell” variants of the Xeon E5 processors in the two-socket servers, and they can pick machines that have processors with six, eight, ten, or twelve cores. The nodes can scale from 128 GB to 512 GB of main memory. The VSAN configuration also comes in two setups now. In option one, customers get 14.4 TB of raw disk per appliance plus 1.6 TB of flash across the four nodes, just like before; in option two, they can beef that up to 24 TB of disk (five disks per node instead of three) and a doubly fat 800 GB SSD per node, for a total of 3.2 TB of flash.
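Back of the envelope, those per-appliance figures follow directly from the per-node drive counts. Here is a minimal sketch of that arithmetic, assuming four nodes per appliance as described; the function name and structure are ours, purely for illustration:

```python
# Back-of-envelope check of the per-appliance storage options described
# above, assuming four nodes per 2U appliance.
NODES_PER_APPLIANCE = 4

def appliance_capacity(disks_per_node, disk_tb, flash_gb_per_node):
    """Return (raw disk TB, flash TB) for one four-node appliance."""
    raw_disk_tb = NODES_PER_APPLIANCE * disks_per_node * disk_tb
    flash_tb = NODES_PER_APPLIANCE * flash_gb_per_node / 1000
    return round(raw_disk_tb, 1), round(flash_tb, 1)

# Option one: three 1.2 TB SAS drives and one 400 GB SSD per node
print(appliance_capacity(3, 1.2, 400))   # (14.4, 1.6)
# Option two: five 1.2 TB SAS drives and one 800 GB SSD per node
print(appliance_capacity(5, 1.2, 800))   # (24.0, 3.2)
```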
With the fatter setups, Mornay van der Walt, vice president of the EVO:RAIL group at VMware, tells The Next Platform that with a total of eight appliances clustered together using vSphere and VSAN software and the maximum memory, flash, and disk storage, a 32-node cluster will be able to support 1,600 general purpose virtual machines (meaning for generic server workloads) or 2,400 virtual machines for hosting virtual desktops (which are lighter than your typical server instance). The original EVO:RAIL systems launched at VMworld last August topped out at 400 generic server VMs and 1,000 virtual desktops, so the scale has been significantly enhanced. An average VM has two virtual CPUs (vCPUs) according to VMware’s benchmarking and sizing, with a server VM having 6 GB of virtual memory and 60 GB of virtual disk, compared to 2 GB of virtual memory and 30 GB of virtual disk for a VDI instance.
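Those cluster-level claims roughly hang together against the per-VM sizing profiles. The sketch below is our own rough cross-check, not VMware’s sizing methodology, and it leans on a few assumptions: the fat configuration (512 GB per node, 24 TB of raw disk per appliance), VSAN’s default mirroring (one failure to tolerate, so two copies of every object), and no allowance for overcommit or metadata overhead.

```python
# Rough cross-check of the 32-node claims against the per-VM profiles,
# assuming the fat configuration (512 GB per node, 24 TB of raw disk per
# four-node appliance) and VSAN's default mirroring (failures to
# tolerate = 1, so two copies of every object). Ignores overcommit,
# flash cache, and metadata overheads entirely.
nodes, mem_per_node_gb = 32, 512
appliances, raw_tb_per_appliance = 8, 24.0

profiles = {
    "server VM": {"count": 1600, "vmem_gb": 6, "vdisk_gb": 60},
    "VDI VM":    {"count": 2400, "vmem_gb": 2, "vdisk_gb": 30},
}

cluster_mem_gb = nodes * mem_per_node_gb               # 16,384 GB physical
usable_tb = appliances * raw_tb_per_appliance / 2      # ~96 TB after mirroring

for name, p in profiles.items():
    mem_needed_gb = p["count"] * p["vmem_gb"]
    disk_needed_tb = p["count"] * p["vdisk_gb"] / 1000
    print(f"{name}: {mem_needed_gb} GB vRAM vs {cluster_mem_gb} GB physical; "
          f"{disk_needed_tb:.0f} TB vDisk vs ~{usable_tb:.0f} TB usable")
```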
Van der Walt says that VMware had always intended to loosen up the hardware configurations for the EVO:RAIL appliances and that feedback from customers and partners alike was to do it sooner rather than later.
“The reason that we are adding these configurations is that there is lots of demand. We took a crawl, walk, run approach with configurations, but customers want more,” explains van der Walt. “They are buying into the idea of an appliance, brought to them by VMware and their preferred EVO:RAIL partner. That is very attractive to them compared to some of the other offerings that are coming from some of the smaller hyperconvergence startups.”
That is not where the scalability ends for the vSphere plus VSAN hyperconvergence stack. The EVO:RAIL configurations predate the launch of VSAN 6.0 back in February, which, when combined with vSphere 6.0, scales out to 64 hosts in a single cluster, double that of the VSAN 5.5 software that underpins the current EVO:RAIL appliance clusters. By shifting to newer iron as well as the vSphere 6.0 and VSAN 6.0 software, which van der Walt says VMware will do later this year, the EVO:RAIL appliances will double their capacity again by scaling out to 64 nodes; aggregate I/O operations per second in a cluster will increase by around a factor of 4.5 to 90,000 IOPS, too. While the VSAN software has encryption, caching, and snapshotting – all features of real SANs – it does not yet have the de-duplication, data compression, and synchronous replication that real SANs and many of its hyperconvergence rivals offer. VMware’s customers are no doubt interested in such features, but EMC sells a lot of disk arrays, too, and does not seem to be in a mood to hurry VMware’s efforts along.
That said, it is reasonable to expect VMware not only to add such features, but perhaps also to move on to heftier server nodes based on the Haswell Xeon E7-4800 and E7-8800 processors that Intel launched in early May, or the Haswell Xeon E5-4600s launched just last week. There is a good reason for believing this might happen.
Van der Walt says that, looking at the data from VMware’s 500,000 customers worldwide, the ESXi hypervisor is deployed on around 70 percent of the nodes doing server virtualization on plain vanilla servers. But on converged infrastructure, where servers, storage, and switching are brought together and unified under a single management framework, VMware’s hypervisor penetration is more like 85 percent. So what will it look like during the hyperconvergence phase? “When we get to hyperconverged infrastructure, something like 90 to 95 percent are running vSphere,” says van der Walt. Not all of those vSphere hyperconverged systems are using VSAN, of course. Many of them are running Nutanix, which is the juggernaut of hyperconvergence at this point in the development of the market. It is early days, and despite the ramp for Nutanix, VMware has billions of dollars in the bank, those 500,000 customers, and an installed base of probably close to 50 million virtual machines these days on its side.
Van der Walt says that while Nutanix has some core virtualization appliances that are configured in a similar way to the EVO:RAIL appliances, at the higher end of its product line “they are storage heavy and light on compute,” positioning the Nutanix appliances as SAN replacements. This, van der Walt is adamant, is not the market that VMware is chasing with EVO:RAIL. It is, however, the market that VMware is going after with its larger EVO:RACK configurations, which put 320 nodes – a federation of ten clusters, each with its own 32-node VSAN array underpinning it – under a single vCenter management domain that scales from 12,000 to 20,000 VMs. VMware has said nothing about EVO:RACK since it was previewed last August, but we would suggest that VMware could make this a whole lot simpler by just using four-socket nodes instead of two-socket nodes in the EVO:RAIL setup, and using the E5-4600 v3 processors that just came out would keep the price/performance in line. That would get a 64-node cluster up to around 6,400 generic server VMs and 9,600 VDI instances, which is nearly the low end of the future EVO:RACK setups. Double it up again with eight-socket E7-8800 v3 servers and you can push past 12,000 generic server VMs and close to 20,000 VDI images without having to resort to federating the management domains for vCenter. Sometimes a mix of scale-up and scale-out is best – which is one of the reasons why four-socket servers are so popular for server virtualization. It also allows for much larger VM images should an application require more oomph.
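For what it is worth, that extrapolation simply scales the 32-node, two-socket numbers linearly with socket count and node count. A minimal sketch of the arithmetic, under that linear-scaling assumption (which glosses over NUMA effects, memory limits, and licensing costs):

```python
# Sketch of the scale-up extrapolation above, assuming VM counts scale
# roughly linearly with sockets per node and with node count. The function
# and numbers are ours, not VMware's sizing methodology.
vms_per_2s_node = {"server": 1600 / 32, "vdi": 2400 / 32}   # 50 and 75 per node

def cluster_estimate(nodes, sockets_per_node):
    scale = sockets_per_node / 2      # relative to the two-socket baseline
    return {kind: int(per_node * scale * nodes)
            for kind, per_node in vms_per_2s_node.items()}

print(cluster_estimate(64, 4))   # {'server': 6400, 'vdi': 9600}
print(cluster_estimate(64, 8))   # {'server': 12800, 'vdi': 19200}
```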
This will probably not happen as part of the official EVO:RAIL program, but that effort is only one of the three prongs VMware is using to attack the nascent hyperconverged infrastructure space. The second prong is to go to the VSAN-ready appliance makers, who can tailor their offerings for heavier compute and storage, and the third is for customers to simply license vSphere 6.0 and VSAN 6.0 today, buy their own hefty systems, and not wait for VMware to eventually get around to it. And this is precisely what van der Walt expects some customers to do.
In the end, if VMware and its hardware partners don’t sell hyperconverged systems to customers in the large configurations that many of them need, then Nutanix, SimpliVity, Scale Computing, Pivot3, and Maxta certainly will try. And like it or not, some of those installations – including those based on VSAN – are absolutely going to do work that would otherwise end up on a real SAN. That’s just the way this industry is headed.