Growing Hyperconverged Platforms Takes Patience, Time, And Money
August 22, 2016 Timothy Prickett Morgan
In this day and age when the X86 server has pretty much taken over compute in the datacenter, enterprise customers still have their preferences and prejudices when it comes to the make and model of X86 machine that they deploy to run their applications. So a company that is trying to get its software into the datacenter, as server-storage hybrid Nutanix is, needs to befriend the big incumbent server makers and get its software onto their boxes.
This is not always an easy task, given that some of these companies have their own hyperconverged storage products or are entangled in a knot of existing partnerships. Nutanix is the leader so far in this hyperconverged space, which it helped create six years ago, and by the count of Howard Ting, senior vice president of marketing at the company, there are now 37 different hyperconverged storage players in the market, all chasing venture funding and IT budgets. So getting the attention of the big server OEMs that still have a lot of sway in the enterprise accounts that are going to buy hyperconverged storage can be a challenge.
Luckily, Nutanix has a lot of momentum and thousands of existing customers who have deployed its products, first on homegrown appliances that were based on Supermicro iron, then on appliances based on Dell gear through an OEM agreement inked two years ago, and then from Lenovo in its own agreement signed late last year. Dell is in the middle of acquiring storage giant EMC for $67 billion and will soon have its own portfolio of hyperconverged storage, while Hewlett Packard Enterprise has its own virtual SAN software and seems disinclined to make a deal to peddle Nutanix software in its ProLiants. But Cisco Systems, which has seen its server revenues flatten in recent quarters after stupendous growth for the past seven years, wants to keep finding new workloads for its UCS systems, and Nutanix wants to reach new customers who have chosen Cisco’s UCS platforms as their core servers.
So the two have come up with a “meet in the field” arrangement that falls short of a reseller agreement or an OEM agreement, but which will, according to Ting, allow Cisco UCS channel partners to get a software-only license to the Xtreme Computing Platform, which has had several prior names and is now called simply Acropolis, that is tuned specifically for Cisco’s C Series rack servers in the UCS family. (Interestingly, the B Series blade servers that put Cisco on the map as a mover and shaker in the server space back in early 2009 are not supported to run the Nutanix stack. But, similarly, regular PowerEdge servers from Dell cannot run the Nutanix software either – you have to buy a specific appliance based on Dell’s hyperscale-class PowerEdge XC family – and the full line of System x machines from Lenovo is not enabled to run Nutanix, either. And as we pointed out, HPE’s ProLiants, which are the most popular servers in the world, cannot run the Nutanix stack, at least not yet. Dell and Lenovo have full OEM software agreements with Nutanix and pay royalties on the Acropolis converged and virtualized compute and storage platform.) And all of the major server OEMs want to sell VMware’s VSAN alternative to Nutanix, so they keep their options open.
Ting tells The Next Platform that it has done enterprise license agreements for software-only licenses to the Acropolis software for key customers at large accounts, but that this is the first time it has formalized an agreement with a server maker that will see it push this licensing through their own channel. Others could follow, including HPE, which may want a similar deal if it sees Nutanix taking off.
The shift from running the Nutanix stack on top of VMware’s ESXi server virtualization hypervisor to running it on a homegrown variant of the KVM hypervisor that Nutanix calls Acropolis (the whole platform now carries that name) has been a tectonic move for the company. The Acropolis iteration of the Nutanix stack was unveiled in June 2015, which we detailed here, and within a year the Acropolis hypervisor has gone from a zero percent share of the Nutanix base to 15 percent of customer installations. More companies run Nutanix on Acropolis than on Microsoft’s Hyper-V, which is also supported, although most of the remaining 85 percent of Nutanix customers are still running on top of ESXi, so beating Hyper-V was not all that hard.
The main reason why companies are shifting to the Acropolis hypervisor and away from ESXi is simple: It removes the very high cost of using VMware virtualization as the substrate of the virtual infrastructure that mixes compute and storage. Another factor that will speed up the move from the ESXi hypervisor to the Acropolis variant of KVM is a conversion tool that Nutanix debuted last June and that started shipping in volume this June.
Another interesting software feature that is driving sales for Nutanix is its one-click upgrade capability, which Ting says is the feature that enterprise customers find most attractive when they buy the platform. Nutanix does two major releases a year with minor releases in between, and for any distributed platform, upgrades have to be simple if they are not to cause headaches. (OpenStack and Hadoop are notoriously bad at this, for instance.)
“One of the important things that you need to do to match the public cloud experience is to make upgrades seamless and invisible,” Ting says. “So we have put a lot of effort behind one-click upgrades. This is why we had 42 percent of our customer base upgrade to our 4.6 release within 100 days of it being available. You know how big a number that is, because most enterprise software makers might see 3 percent to 5 percent of their customers upgrade in the first 100 days of their releases. Our rate of adoption is extremely high – probably industry best.”
Here at The Next Platform we are always interested in what the largest customers are doing and how they are pushing the scalability limits of any platform. Nutanix is in the middle of preparing to go public and had not updated any financial or customer stats since filing its S1 with the US Securities and Exchange Commission back in December 2015. The updated S1 filing for the quarter ended in April came out last week, and you can see it here. In that document, you will see that Nutanix had 3,100 customers as of April of this year, and had posted $305.1 million in sales for the first nine months of its fiscal 2016 year, up 82 percent from the year-earlier period. Losses widened to $126.1 million in the first nine months of fiscal 2016, compared to $84 million in losses in the same three quarters of fiscal 2015. It is clearly expensive to start a new market, and this is why we have venture capitalists willing to take a risk that Nutanix can be a $1 billion player in what might be a $3 billion to $4 billion hyperconverged market some years hence.
What Ting can tell us is that Nutanix has evolved from hyperconverged storage to a full “enterprise cloud platform” and is therefore getting more business from the Global 2000 accounts that it (and many other vendors) crave. We think Nutanix probably has well over 3,500 customers now, and probably somewhere close to 50,000 nodes in the field, but that is just a guess based on past growth curves, and thin data at that. Many large customers have hundreds of nodes – on the scale of the Hadoop clusters or HPC systems at large organizations – some have over 1,000 nodes, and the biggest have over 1,500. These are not small clients by any stretch of the imagination, but you also have to remember that HPE and Dell each sell several hundred thousand servers per quarter.
Ting adds that the deployments at large enterprises follow two different patterns. The first group starts out with a half dozen to a dozen nodes, usually running one workload, and over time this small cluster grows to 50 or 100 nodes running multiple workloads. Other large enterprises just go for it and install a few hundred nodes from the get-go, either because they have a lot of workloads they want to converge onto a single clustered platform or because they have one big workload that requires such iron.