VSAN Ready For All Enterprise Workloads

The hyperconverged upstarts may have created the market for server-storage hybrids to support clusters of virtual machines, but it may be VMware that benefits most from this movement as customers look to simplify their infrastructure for supporting virtualized applications in their datacenters.

One of the sticking points for server virtualization when it was first being widely deployed in enterprise datacenters was that the cool features like vMotion live migration required sophisticated and expensive storage area networks. This is an old-school way of doing things, putting shared storage with fast network links at the center of systems that may not actually be sharing processing work. The new style of creating systems often puts compute and storage in the same clusters, usually made up of commodity X86 systems, and the wave of hyperconverged system sellers that beat VMware to market often used VMware’s own ESXi hypervisor to virtualize their servers and storage arrays and put them on the same clusters.

VMware was certainly not the first vendor to offer hyperconverged storage – Nutanix, SimpliVity, Scale Computing and others got out in front, adding virtual storage arrays that run inside ESXi VMs and span server clusters. But VMware has seen a very enthusiastic adoption curve for its Virtual SAN software, which started shipping ten months ago. So far, VMware has more than 1,000 paying customers for its server-storage hybrid, and the scalability and performance enhancements that are embodied in its vSphere 6.0 server virtualization stack (which The Next Platform details in this separate report) are going to make VSAN more appealing to a wider audience of vSphere users.

Nutanix, the biggest hyperconvergence upstart, had a $300 million run rate as of the end of 2014 (that is revenue from hardware and software) and has 1,200 customers. It is early days for hyperconverged server-storage half-bloods. And if the size of the vSphere Enterprise Plus base is any indication, there are about 250,000 potential customers in the VMware base alone who could be shopping for software like VSAN now that it has become more mature and accepted. Over time, provided the pricing and performance are right, many VMware shops could move off real SANs for virtual ones. And for those who want to keep their real SANs, VMware is launching Virtual Volumes, or VVols, which will provide the same kind of policy-based storage management that VSAN offers but on external physical arrays based on disk, flash, or hybrid architectures.

Virtual SAN: A Slow And Steady Ramp

VMware, which has over 500,000 customers who collectively have over 45 million virtual machines running atop the ESXi hypervisor, has been measured in its development and conservative in its selling of VSAN, and there are a number of reasons for that. First, VMware is owned by EMC, which has a vested interest in selling high-end physical SAN arrays, particularly to datacenters that are used to buying SANs to support both bare metal applications and those running in virtual machines. Second, the VSAN software is among the most sophisticated code that VMware has ever created, and with it being embedded right into the ESXi hypervisor kernel, the company wanted time to stress test it on relatively small clusters before scaling up to bigger iron.

VMware got its start with server-storage hybrids back in 2011 with the launch of the vSphere 5.0 stack and the Virtual Storage Appliance, which created a shared storage pool across three physical servers with mirroring of data across the nodes for performance and availability. Fast forward to 2013, and VMware created a hybrid flash-disk setup that used the flash as a read cache to accelerate the performance of the VSAN. The beta version of the original VSAN, which ran inside the ESXi 5.5 hypervisor and which came out in the summer of 2013, could span eight nodes. And during the beta testing, more than 12,000 customers signed up to put VSAN through its paces, showing the kind of demand there might be for a server-storage hybrid for supporting VMs on a cloud. The VSAN 1.0 release, which has been rebranded to VSAN 5.5 to be consistent with its initial ESXi hypervisor release level, was architected to stretch across up to 32 nodes in a single storage domain, and at first VMware hinted that the production version might only go as far as 16 nodes. But VSAN launched at 32 nodes, which not coincidentally was the same size as a single vCenter Server management domain with the vSphere 5.5 stack. VMware could host up to 4,000 VMs on that fully loaded 32-node VSAN setup, which with servers using 4 TB disk drives yielded around 4.4 PB of total storage capacity.
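
As a rough sanity check on that capacity figure, here is a back-of-the-envelope sketch. It assumes the VSAN 5.5 limit of five disk groups per host with seven capacity drives apiece, which is our assumption rather than a number stated above:

```python
# Back-of-the-envelope check on the ~4.4 PB figure, assuming the VSAN 5.5
# limit of five disk groups per host with seven capacity drives each.
NODES = 32            # maximum hosts in a VSAN 5.5 cluster
DRIVES_PER_NODE = 35  # assumed: 5 disk groups x 7 capacity drives
DRIVE_TB = 4          # 4 TB disk drives, per the configuration above

raw_tb = NODES * DRIVES_PER_NODE * DRIVE_TB
print(f"Raw capacity: {raw_tb} TB (~{raw_tb / 1000:.2f} PB)")
# -> Raw capacity: 4480 TB (~4.48 PB), in the same ballpark as the 4.4 PB cited
```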

The performance of the initial VSAN software was significantly enhanced by the flash storage in the server nodes, but this flash was only used for caching, not for persistent storage as the disk drives were. A four-node VSAN cluster could handle about 250,000 I/O operations per second (IOPS) on 100 percent read workloads using 4 KB file sizes; with a mix of 70 percent reads and 30 percent writes, with the data 80 percent randomly accessed, the four-node VSAN cluster could do about 80,000 IOPS. (VMware measured performance using the IOmeter benchmark test.)

The neat bit is that as the cluster was extended to 8, 16, 24, and 32 nodes, that performance scaled almost perfectly linearly. The net result was that a 32-node VSAN setup could process 2 million IOPS for a 100 percent read workload and about 640,000 IOPS for the 70-30 mixed read-write workload. The VSAN 5.5 software consumes about 10 percent of the aggregate compute capacity in the cluster as it runs, but because it is running inside the ESXi hypervisor and close to the iron, the overhead of this virtual SAN software is lower by a factor of 2X to 4X than that of alternatives that run inside of VMs, because VSAN doesn’t have to wade through all that extra software.
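
For readers who want to see that scaling claim in numbers, here is a quick sketch that projects the four-node baseline linearly out to 32 nodes; it lands on the figures above almost exactly:

```python
# Project the four-node VSAN 5.5 results linearly out to larger clusters.
baseline_nodes = 4
read_iops_per_node = 250_000 / baseline_nodes   # 100 percent read, 4 KB
mixed_iops_per_node = 80_000 / baseline_nodes   # 70/30 read-write mix

for nodes in (8, 16, 24, 32):
    print(f"{nodes:2d} nodes: ~{int(read_iops_per_node * nodes):,} read IOPS, "
          f"~{int(mixed_iops_per_node * nodes):,} mixed IOPS")
# At 32 nodes this projects ~2,000,000 read IOPS and ~640,000 mixed IOPS,
# which is what VMware reported for the full-sized VSAN 5.5 cluster.
```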

While these were pretty good performance figures, VMware was still very careful not to pitch the initial VSAN release as a suitable replacement for an actual SAN for any and all virtual workloads. The company targeted the usual entry point for its wares – test and development environments – as the ideal initial workload, with disaster recovery using vSphere Replication and virtual desktop infrastructure (VDI) – the streaming of PC images and applications from the datacenter out to desktops, laptops, tablets, and smartphones – using Horizon also being good fits for VSAN 5.5.

With VSAN 6.0, which has its release number in sync with the vSphere server virtualization stack even though it is the second release of the virtual SAN software, VMware is opening up VSAN to a broader set of workloads and customers.

“With VSAN 5.5, we focused on those three key use cases,” explains Mark Chuang, senior director of product management for the software defined datacenter group at VMware. “But with VSAN 6.0, you are going to see us get a lot more aggressive and saying that with the performance and scale improvements, it is ready for all enterprise applications.”

With VSAN 6.0, VMware is scaling up to 64 hosts in the cluster, which is capable of supporting up to 8.8 PB of usable capacity in a hybrid configuration that uses flash cards as cache for persistent storage on disks. The big change with VSAN 6.0 is that now the flash memory in a server node can be used as persistent storage, and if customers are really interested in boosting the IOPS, they can go with an all-flash configuration that pairs PCI-Express flash cards or NVMe flash drives, with properties aimed at write-intensive work, as a caching tier in front of more durable, capacious, and less expensive SSD drives for read-intensive, persistent storage in the VSAN cluster.
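
A quick bit of arithmetic on those ceilings suggests the per-host capacity is unchanged from the prior release; the sketch below simply divides the published maximums and is our calculation, not a VMware-supplied figure:

```python
# Capacity per host implied by the published cluster maximums.
vsan6_pb, vsan6_nodes = 8.8, 64    # VSAN 6.0 hybrid usable capacity ceiling
vsan55_pb, vsan55_nodes = 4.4, 32  # VSAN 5.5 capacity ceiling cited earlier

print(f"VSAN 6.0: {vsan6_pb * 1000 / vsan6_nodes:.1f} TB per host")
print(f"VSAN 5.5: {vsan55_pb * 1000 / vsan55_nodes:.1f} TB per host")
# Both come out to ~137.5 TB per host, so the jump from 4.4 PB to 8.8 PB
# tracks the doubling of the node count rather than denser hosts.
```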

Here is how the feeds and speeds stack up:

[Table: VMware Virtual SAN 5.5 versus VSAN 6.0 feeds and speeds comparison]

As you can see, VMware has doubled the node count and also doubled up the number of VMs per host, which would make you think that the number of total virtual machines supported by the cluster would rise by a factor of four. But the documentation we have seen shows the VSAN 5.5 software topping out at 3,200 VMs for 32 nodes and the VSAN 6.0 software topping out at 6,400 VMs across 64 nodes. That is a factor of two increase in total VMs, not a factor of four as you might expect given the raw numbers. Why this is the case has not been explained as yet, and we are looking into it.
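
Here is the arithmetic behind that discrepancy; the documented maximums imply that the per-host VM density did not actually change between releases:

```python
# The documented cluster maximums, reduced to per-host VM density.
vms_55, nodes_55 = 3_200, 32
vms_60, nodes_60 = 6_400, 64

print(f"VSAN 5.5: {vms_55 // nodes_55} VMs per host")   # -> 100
print(f"VSAN 6.0: {vms_60 // nodes_60} VMs per host")   # -> 100
# If the per-host limit really had doubled along with the node count, the
# 64-node ceiling would be 64 * 200 = 12,800 VMs, not the documented 6,400.
```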

Also, depending on which presentation VMware is using, a server node in an all-flash configuration can deliver 90,000 IOPS or 100,000 IOPS on a 70-30 mixed read-write workload. The difference between the two numbers is minor, and the important thing is that either number is considerably higher IOPS per node than VMware was getting with its initial VSAN, and some of that is no doubt due to the software as well as the underlying hardware. (Flash drives keep getting denser, faster, and cheaper, and servers keep adding more cores, freeing up more capacity to run actual workloads instead of storage software.) The other important thing about these early benchmark numbers is that the all-flash version of the VSAN 6.0 setup has latencies in the sub-millisecond range, and they are consistent across the cluster. Enterprise customers like consistency as much as they like speed.
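
To put the per-node numbers side by side, here is a rough comparison against the original four-node hybrid result cited earlier; keep in mind it spans different hardware generations as well as different software:

```python
# Per-node IOPS on the 70/30 mix: all-flash VSAN 6.0 versus the original
# four-node hybrid VSAN 5.5 result cited earlier.
vsan55_per_node = 80_000 / 4   # hybrid VSAN 5.5: 80,000 IOPS across 4 nodes

for vsan60_per_node in (90_000, 100_000):
    ratio = vsan60_per_node / vsan55_per_node
    print(f"{vsan60_per_node:,} IOPS per node is {ratio:.1f}x the "
          f"{vsan55_per_node:,.0f} IOPS per node of the hybrid baseline")
# Either figure lands at roughly 4.5x to 5x the original per-node result.
```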

If you dig around a bit, you might come across the What’s New: VMware Virtual SAN 6.0 document out on the VMware site, which reveals some further details about VSAN 6.0 performance. This document says that the all-flash VSAN 6.0 configuration is limited to 32 nodes and does not scale to the full 64 nodes, contrary to the chart above that VMware gave to The Next Platform with some more detailed feeds and speeds. That document also says that on a 32-node VSAN 6.0 cluster, read-only data serving can be handled at a rate of 4 million IOPS, twice that of the initial VSAN 5.5 software, and that on mixed read-write workloads (again, with the traditional 70 percent read, 30 percent write mix) the hybrid setup can deliver 1.2 million IOPS across 32 nodes, again nearly double the VSAN 5.5 benchmarks that VMware ran.

Gaetan Castelein, senior director of storage product management and marketing at VMware, confirmed that VSAN 6.0 will indeed span 64 nodes in an all-flash configuration when it is generally available, and the document above will presumably be updated to reflect that. Castelein also confirmed that the performance increases cited above were for identically configured server nodes, and therefore the performance improvements were entirely due to software changes or the addition of more nodes. To be precise, VMware tested the VSAN 5.5 and 6.0 software using Dell PowerEdge R720 servers, which had Intel “Ivy Bridge” Xeon E5-2670 v2 processors running at 2.5 GHz and LSI 9207-8i storage controllers; VMware used a mix of Intel S3700 and Micron Technology P420m flash drives in the machines.

The new VSAN software has other performance tweaks, including faster snapshotting of clone VMs as well as a deeper level of snapshotting, allowing for up to 32 clones per VM instead of the two clones per VM with VSAN 5.5. Another feature that VMware has added is rack awareness, which allows system administrators to ensure that datasets and their clones are separated into fault domains that do not share a common rack or power distribution unit within a rack. This way, if a rack loses power, copies of the data still survive elsewhere in the VSAN cluster. VMware has also added checksums and encryption that use features in the X86 processors (presumably Intel Xeons), and it is allowing customers to use JBOD direct-attached disk enclosures that hang off the blade servers that provide the compute for the VSAN array and for the virtual machines that make use of it. Up until now, blade servers were limited to the storage that could be mounted on them.
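
To illustrate the rack-awareness idea, here is a minimal sketch of the placement constraint; it is not VMware’s implementation, and the host and rack names are made up:

```python
# Minimal illustration of fault-domain-aware placement: each copy of an
# object must land in a different fault domain (rack), so a single rack
# or PDU failure cannot take out every replica. Not VMware's code.

def place_replicas(hosts_by_domain, copies):
    """Pick one host from each of `copies` distinct fault domains."""
    domains = list(hosts_by_domain)
    if copies > len(domains):
        raise ValueError("need at least as many fault domains as replicas")
    # A real placement engine would also weigh free capacity and load;
    # here we simply take the first host in each of the first N domains.
    return [(domain, hosts_by_domain[domain][0]) for domain in domains[:copies]]

hosts_by_domain = {
    "rack-A": ["esx01", "esx02"],
    "rack-B": ["esx03", "esx04"],
    "rack-C": ["esx05", "esx06"],
}
print(place_replicas(hosts_by_domain, copies=2))
# -> [('rack-A', 'esx01'), ('rack-B', 'esx03')]: the two copies sit in
#    different racks, so losing one rack still leaves a usable copy.
```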

VMware is not changing the price for VSAN with the 6.0 release, but Chuang tells The Next Platform that the all-flash configuration will carry a premium licensing fee over the hybrid flash-disk variant of the software. VSAN 6.0 costs $2,495 per socket for the standalone edition. Bumping up to an all-flash system will cost an additional $1,495 per socket, and if you want to add vSphere Data Protection Advanced to it, it costs another $1,095 per socket. (VMware says “CPU” in its pricing, but it means socket, not core, when it says that.) Customers who are running VDI workloads can license VSAN by seat, for $50 per seat for the base hybrid flash-disk version and an additional $30 if they want to go all flash. Customers have to buy vSphere server virtualization software, which is not bundled with the VSAN license. Pricing has not yet been announced for vSphere 6.0, but with the vSphere 5.5 release, VMware charges $3,495 per socket for the full-on vSphere Enterprise Plus stack, which has all of the bells and whistles, or $2,875 per socket for the less-capable but still usable vSphere Enterprise bundle. Add it up, and the VMware software and support bill for a hyperconverged server-storage cluster with 64 nodes could easily crest $1 million.
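
To show how the bill gets to seven figures, here is a rough tally using the list prices above; the two-socket server assumption and the use of the vSphere 5.5 Enterprise Plus price as a stand-in are ours, not VMware’s:

```python
# Rough licensing tally for a 64-node all-flash cluster, using the list
# prices above. Two-socket servers and the vSphere 5.5 Enterprise Plus
# price are assumptions; support contracts and vCenter are excluded.
nodes = 64
sockets_per_node = 2

per_socket_prices = {
    "VSAN 6.0 standalone": 2_495,
    "All-flash add-on": 1_495,
    "vSphere Enterprise Plus (5.5 list)": 3_495,
}

total = nodes * sockets_per_node * sum(per_socket_prices.values())
print(f"{nodes} nodes x {sockets_per_node} sockets: ${total:,}")
# -> $958,080 in licenses alone, before support, vSphere Data Protection
#    Advanced, or vCenter, which is how the bill can easily crest $1 million.
```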

That said, as VMware pointed out last year when VSAN first went into production, after factoring in operational benefits and lower capital costs, a VSAN cluster should cost about 50 percent less than a real SAN with similar capabilities. The rule of thumb from VMware this time last year was that a configured machine should cost around 25 cents per IOPS on a setup optimized for throughput and around 50 cents per GB on a setup optimized for capacity. Whether these rules still hold, we will try to find out.

[Image: Storage vendors lining up to support VMware Virtual Volumes (VVols)]

One of the neat features of VSAN that is making a break for it is called Virtual Volumes, or VVols for short, and this is the part of the storage stack that VMware created for VSAN that lets the management tools and hypervisor manage storage at the VM level instead of at the logical unit number (LUN) level in a physical storage array. The way this works is that the VMDK virtual disk file that defines a virtual machine and its contents becomes the granular unit of data management for a disk array, rather than the LUN. In the past, storage admins had to work with server admins to provision LUNs and volumes on arrays.
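
As a conceptual sketch of that shift in granularity (not VMware’s actual data model or APIs, and with capability names that are purely illustrative), consider how a policy attached to each virtual disk differs from a service level baked into a shared LUN:

```python
# Conceptual sketch of the granularity shift (not VMware's data model or API):
# with VVols, a storage policy attaches to each VM's virtual disk, rather
# than every VMDK inheriting the service level of the LUN it lands on.
from dataclasses import dataclass

@dataclass
class StoragePolicy:
    failures_to_tolerate: int   # how many copies of the data to keep
    flash_read_cache_pct: int   # illustrative capability name, not VMware's

# LUN-centric model: one service level per pre-carved volume, shared by
# whatever VMDKs the server admins happen to put on it.
lun_service_levels = {"lun-42": "gold tier, provisioned by the storage admin"}

# VVol/VSAN-style model: the policy travels with the virtual disk, and the
# array is asked to satisfy it per VMDK.
vmdk_policies = {
    "web01.vmdk": StoragePolicy(failures_to_tolerate=1, flash_read_cache_pct=10),
    "db01.vmdk":  StoragePolicy(failures_to_tolerate=2, flash_read_cache_pct=25),
}

for vmdk, policy in vmdk_policies.items():
    print(f"{vmdk}: {policy}")
```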

The VSAN software does not have synchronous replication between clusters and it does not have deduplication and data compression like real SANs do. And so, for some use cases, VSAN will still not be as good as a real SAN. That’s why 29 storage partners have lined up to add support for vSphere VVols to their disk arrays. This work has been going on for some time, and a dozen vendors will have VVols support ready sometime in the first half of this year.

With VVols, the same policy-based provisioning of storage for VMs that is part of VSAN will be extended out to physical SANs. And that will give customers a single pane of glass from which to manage their storage – something that will also give VMware that much more stickiness in the datacenter, since storage will be provisioned from the hypervisor out, not from the disk array up.
