While hyperscalers and HPC centers like the bleeding edge – their very existence demands that they be on it – enterprises are a more conservative lot. No IT supplier ever went broke counting on enterprises to be risk averse, but plenty of companies have gone the way of all flesh by not innovating enough and not seeing market inflections when they happen.
VMware, the virtualization division of the new Dell Technologies empire that formally comes into being this week, does not want to miss such changes and very much wants to continue to extract revenues and profits from its impressively large enterprise base of 500,000 customers. We have presented a detailed analysis of VMware’s finances since the Great Recession here and have talked generally about its embrace and extend strategy with regard to containerized infrastructure there.
There is no question that VMware hit “peak virtualization” somewhere around a year ago, and the company’s top brass has been perfectly honest about it, to their great credit, but the expansion into virtual networking with NSX and into virtual storage with VSAN is helping to cushion the blow. In addition to its two container stacks, vSphere Integrated Containers and Photon Platform, there is another product that has a chance to help VMware maintain and extend its server virtualization base, and that is the vSphere Integrated OpenStack, or VIO, platform.
VMware is not the only company that is mashing up its ESXi hypervisor with the OpenStack cloud controller to create a VMware-compatible private cloud stack. Platform9 is doing it, too. In both cases, the combination of VMware server virtualization and the OpenStack orchestration and automation layer, driven by cloudy APIs that at least have a familiarity to those who use Amazon Web Services (even though they are by no means compatible with the APIs that drive AWS) allows VMware shops to keep what they know and move to where they want to go. This is precisely the approach that VMware is taking with vSphere Integrated Containers, of course, and we think a portion of the VMware base – how much remains to be seen – will go with VIO and VIC precisely because it mitigates risk while providing new function. (VMware created an integrated variant of Hadoop called the vSphere Big Data Extensions, based on Project Serengeti from way back, but no one talks about this much anymore.) You can get all the benefits of an OpenStack cloud and Docker containers this way, so long as you want to keep paying for ESXi, vSphere, and vCenter. Which ain’t cheap. But neither is anything in this IT world. Open source software is not free, either, since it takes very expensive experts to make it work well. In a sense, a VMware perpetual license is outsourced expertise wrapped in the comfort of a familiar platform.
With VIO, VMware installs the key components of OpenStack in virtual machines running atop its ESXi hypervisor, putting the distributed compute of OpenStack on an ESXi substrate and thus creating an OpenStack cloud that orchestrates ESXi VMs through the normal OpenStack APIs. Arvind Soni, group product line manager at VMware, who has been working with OpenStack since the company started taking it seriously back in 2013, tells The Next Platform that there has been “tremendous momentum” around VIO and that “it has been received very well by the customer base.”
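Because the orchestration layer speaks the standard OpenStack APIs, a request to boot a VM on a VIO cloud looks like a request to any other OpenStack cloud; the hypervisor underneath just happens to be ESXi rather than KVM. As a rough sketch – the image, flavor, and network identifiers below are hypothetical placeholders, not values from any real deployment – the Nova “create server” call is simply a JSON body sent via POST to the compute endpoint:

```python
import json

# Build the request body for Nova's "create server" call
# (POST /v2.1/servers in the OpenStack Compute API). The IDs
# here are made-up placeholders for illustration only.
def make_boot_request(name, image_ref, flavor_ref, network_uuid):
    return {
        "server": {
            "name": name,
            "imageRef": image_ref,                 # Glance image UUID
            "flavorRef": flavor_ref,               # Nova flavor ID
            "networks": [{"uuid": network_uuid}],  # Neutron network UUID
        }
    }

body = make_boot_request(
    name="vio-test-vm",
    image_ref="70a599e0-31e7-49b7-b260-868f441e862b",
    flavor_ref="1",
    network_uuid="3cb9bc59-5699-4588-a4b1-b87f96708bc6",
)
payload = json.dumps(body)
```

The point of the exercise is that nothing in this request betrays the hypervisor underneath; tooling written against a generic OpenStack cloud should drive a VIO cloud the same way.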
When VIO was first launched in 2015, it was free to customers using vSphere Enterprise Plus and higher editions of the VMware server virtualization stack. Optional support contracts covering the OpenStack parts of VIO cost an additional $200 per server. The 1.0 release of VIO came out in early 2015, followed by a 2.0 release later in the year. The VIO 2.5 release came out earlier this summer, based on the “Kilo” OpenStack release and allowing for a testbed OpenStack setup that deployed the cloud controller elements in a single VM. The 3.0 release, based on the “Mitaka” OpenStack code base that came out in April, just debuted at VMworld. So VIO is about as current as enterprises can expect it to be without being on the bleeding edge, and Soni says that 3.0 is probably the first enterprise-grade release of VIO, ready for companies to put into production.
For commercial-grade OpenStack, the options are pretty much Red Hat, Canonical, Mirantis, Cisco Systems, IBM, Hewlett-Packard Enterprise, and Rackspace Hosting at this point, and all of these organizations were ahead of VMware in getting production releases to market. Red Hat has a vast base of Linux users to pitch its OpenStack distro to, and Cisco has a sizable server installed base now, too. IBM and HPE are pitching their PowerVC and Helion OpenStacks to their server customers, too, but we have no idea how much traction they are getting. Mirantis is the last of the free-standing OpenStack distros, unless you count Canonical, which has done well with OpenStack, but Rackspace has the technical chops to compete with its hosted variant and maniacal support.
Soni says that VMware is winning head-to-head engagements against these alternatives now. The benefit of tight integration with ESXi and vSphere is resonating with the VMware installed base, specifically because VMware controls that platform from top to bottom, including NSX and VSAN. The idea is to get away from something that needs “perpetual tinkering,” as Soni put it. VIO lets vSphere, vCenter, and ESXi do their jobs and lets OpenStack do its job.
What customers are not doing, by the way, is trying to graft the freebie ESXi hypervisor onto OpenStack. They can, of course, do that – in fact, this is how VMware first engaged with OpenStack, and VIO was a reaction to it. What we don’t know yet is how VMware customers are adopting VIO, and frankly it is still early days for cloud orchestration in the enterprise.
VIO may not be a canonical, greenfield installation of OpenStack, but that may be a benefit, not a demerit. There has been plenty of grousing about how difficult it is to keep OpenStack updated once it is stood up. VIO helps with OpenStack upgrades – and up until now has required vSphere Enterprise Plus – specifically because it makes use of the vSphere Distributed Resource Scheduler for cluster patching. Having a virtual substrate for the OpenStack controllers makes patching OpenStack easier. With the VIO 2.5 release, customers that have NSX virtual switching (which integrates the VMware virtual switch with OpenStack’s Neutron network controller) and that already have their own cluster schedulers told VMware that they wanted to be able to use the cheaper vSphere Standard edition, and now they can. So the cost of VIO has just come down quite a bit, and Soni says VIO is now cost competitive with other OpenStack distros.
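The NSX integration works because NSX plugs in underneath Neutron as a backend, so from the API side a tenant network on a VIO cloud is created exactly as it would be on any other OpenStack cloud. A minimal sketch of the two Neutron request bodies involved – the network name, placeholder UUID, and CIDR here are invented for illustration:

```python
import json

# Neutron "create network" request body (POST /v2.0/networks in
# the OpenStack Networking API). Whether the backend is NSX or
# Open vSwitch is invisible at this layer.
network_req = {
    "network": {
        "name": "vio-demo-net",
        "admin_state_up": True,
    }
}

# Neutron "create subnet" request body (POST /v2.0/subnets). The
# network_id would come back from Neutron's response to the call
# above; a placeholder UUID stands in here.
subnet_req = {
    "subnet": {
        "network_id": "d32019d3-bc6e-4319-9c1d-6722fc136a22",
        "ip_version": 4,
        "cidr": "192.168.100.0/24",
    }
}

print(json.dumps(network_req))
```

This uniformity is exactly what lets VMware swap its own virtual switch in under Neutron without breaking tooling written for stock OpenStack.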
Through early June, VIO had close to 10,000 downloads, according to Soni, but VMware does not have good visibility into production use of the VIO code beyond that because it is freely available and many customers are using it for proofs of concept, kicking the tires before deciding to go with VIO over a different implementation of OpenStack or the vCloud/vRealize alternatives from VMware.
VMware has some initial survey data showing that customers are moving VIO into production, but most of the production VIO sites that VMware knows about come through its direct sales force pushing it into big accounts that want to move from relatively simple server virtualization to more sophisticated cloud orchestration. As was the case for server virtualization, software development and test environments are among the first to move to VIO, with true production workloads to follow once the software has proved itself with the programmers. There are only a few thousand OpenStack clusters in production in the world, so the number of VIO clusters in production is probably on the order of several dozen to a few hundred. The point is this: A sizeable percentage of those 500,000 ESXi/vSphere customers represents a huge total addressable market and, more importantly, gives VMware a way to maintain that virtualization base. One thing is for sure: If customers move to another OpenStack distro, they are probably moving to the KVM hypervisor, and VMware is going to get the boot in the long haul as the test/dev environment gets stood up and then the production cloud follows suit.
This would be very bad for VMware, obviously.
The real question is how many of VMware’s customers will move from server virtualization to cloud, and how many will choose OpenStack as their virtualization layer.
“The way we look at it is that there are customers at different stages,” explains Soni. “Some customers are done with virtualization and they are happy with vCenter and a little automation. Then there are customers who are looking for something a little more advanced, and they are looking at IT automation tools, and there are others still who are a little more at the forefront, who are closer to the paradigm of infrastructure as code and DevOps. Customers are in different evolutionary stages. But every customer we talk to wants to reduce the time it takes to provision infrastructure and deploy applications. Everybody is demanding that it take minutes or seconds, and no one wants to wait a week. But we are going to give it a little bit more time to get some quantifiable numbers.”
We are more impatient than that. The universe being what it is, with an 80-20 rule where the 20 percent are the most advanced, VMware could be looking at 100,000 VIO customers over the long haul. We shall see.
A good summary of OpenStack and VIO strategy. If I may, I’d like to suggest a minor correction; the product name is VMware Integrated OpenStack and not vSphere Integrated OpenStack.
I can totally understand the cause for confusion though; having VIC=vSphere Integrated Containers and VIO=VMware Integrated OpenStack isn’t really helpful.
Good write up, though.
Disclaimer: I work on the VIO product.