The people behind the OpenStack cloud controller do not spend a lot of time worrying about the competition, whoever or whatever that might be. Instead, they are eager to make the OpenStack orchestration and management tool more than the relatively simple cloud controller that NASA and Rackspace Hosting launched nearly five years ago. More than anything else, they want to build a set of tools that companies put into production because they get value out of it.
This is the goal of most open source software projects, of course, and as one of the most popular collections of code to come along since the Linux operating system and rivalling the Hadoop data analytics platform in terms of both hype and pragmatic expectations, there is a considerable amount of scrutiny regarding OpenStack and its community. As that community convenes its developer and customer summit in Vancouver this week, The Next Platform sat down with Jonathan Bryce, executive director of the OpenStack Foundation, and Mark Collier, its chief operating officer, to talk about how OpenStack is progressing in organizations around the world.
OpenStack has come a long way in its few short years, and its goals have changed considerably over that time as the software stack and the priorities among early adopters have shifted. This is one of the keys to its success.
When OpenStack was launched back in July 2010, almost immediately sealing the fate of the alternative open source Eucalyptus and CloudStack cloud controllers that were already in the market, NASA and Rackspace had put a pretty big stake in the ground. The OpenStack backers wanted to create something that operated on the scale of the Amazon Web Services public cloud – which the Eucalyptus and CloudStack projects and VMware’s proprietary vCloud controller could not do. To be even more precise, NASA and OpenStack put some numbers on it to make sure everyone was paying attention, claiming that OpenStack would eventually scale to 1 million host machines and support up to 60 million virtual machines. Those scalability goals have not been met, and no one is suggesting that they need to be at this point in the evolution of OpenStack. Compatibility with AWS was an initial goal, too, but that went by the wayside as OpenStack came into its own and created its own APIs to control virtualized compute, storage, and networking.
Collier tells The Next Platform that the OpenStack community is not thinking about the competition from VMware, Microsoft, and other open source tools like Eucalyptus and CloudStack as it pushes ahead.
“We think of it in terms of barriers to adoption,” explains Collier. “It is rarely in the form of a technology barrier. If you look at companies that have been successful with clouds, they have changed their culture in terms of how operations works and how developers are allowed to embrace experimentation and bring down the walls that silo different parts of organizations. Trying to transform into a software development culture is really hard, and there have been stories about private clouds that have not succeeded and the reason is cultural. That’s why we are getting companies that have been successful in making this transition to share, because the biggest barrier to getting OpenStack even more widely adopted is explaining to companies who they need to hire and what they need to do themselves or outsource to the ecosystem. These issues are not all technological and they involve company culture.”
What we always want to know is how many OpenStack clouds have been deployed as proofs of concept and in production, and Bryce concedes that as an open source project that has many downstream distributors, there is not a great way to track any numbers. The OpenStack Foundation does an annual survey of users (the ones that it can identify and that are willing to participate in the survey), and Tim Bell, group leader of the operating systems and infrastructure group at CERN, will be releasing results of the latest survey this week at the OpenStack Summit. This survey will give some trendlines that will be useful, no doubt, but it does not tell us how pervasive OpenStack has become – or not, as the case may be.
Collier is the sporting type, though. “There is no doubt in my mind that there are thousands of OpenStack clouds running around the world, and obviously the public clouds are not a secret and we have them on six continents now, and in many cities and in many more points of presence than you can get from Amazon. On the private cloud side, we don’t have the same visibility, but one set of data points that is encouraging to us is that we saw a big shift in the last year for people running in production.” Two years ago, says Collier, about 20 percent of those surveyed said they were using OpenStack in production, and a year ago it was about a third. And giving The Next Platform a sneak peek at the upcoming survey results, Collier says that in the latest survey for 2015, about half of the OpenStack installations among those surveyed are being used in production. “It has come a long way in two years,” says Bryce.
Big name companies and organizations are deploying OpenStack, and not just as a compute or storage cloud controller. One of the hottest areas of deployment is among service providers, telecommunications firms, and enterprises to use OpenStack to control their network function virtualization stack, which is just a pretty way of saying OpenStack is controlling the software that is being ripped out of specialized appliances in Layers 4 through 7 of the network infrastructure and plunked onto virtualized X86 iron. It doesn’t hurt when large enterprises stand up and count themselves publicly.
Wal-Mart will be at this week’s summit talking about how it has deployed OpenStack to manage over 100,000 cores running on thousands of servers. Best Buy, another big retailer, is a user. In financial services, TD Bank, Fidelity Investments, and American Express are committed OpenStack users, and almost all of the big firms in that sector have deployed OpenStack somewhere in their organizations at this point. The media industry in general is a big user of OpenStack, with Comcast, DreamWorks, Time Warner Cable, and Disney talking about their deployments, and the HPC community (with CERN way out in front) is beginning to come around to OpenStack as well, says Collier.
To our way of thinking, the fact that OpenStack is embracing new technologies that need to be controlled and orchestrated is what will cause a hockey stick ramp of adoption. Two technologies stood out as needing to be addressed before that ramp could commence: bare metal provisioning and software containers.
Not everything will be run from atop a KVM, Xen, ESXi, or Hyper-V hypervisor. The OpenStack community has been working on the Ironic bare metal provisioning software for the past couple of years, and it is production-grade with the “Kilo” release of OpenStack that was announced at the end of April ahead of the summit. Rackspace has deployed a gussied-up version of Ironic to deploy workloads on its OnMetal service, which provides cloud-like utility pricing and rapid configuration for whole physical servers. There are many workloads – many of them in the HPC and data analytics areas – that need every bit of compute and network performance that the underlying servers in a cluster can deliver, and putting them on top of hypervisors is a non-starter for many organizations. With Ironic, the Nova compute controller that has been managing hypervisors and virtual machines can now manage the deployment of workloads to physical servers, and usefully, it can distinguish between different kinds of machines – say those that have GPU accelerators and those that do not – and deploy workloads to them as appropriate.
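The capability-aware placement described above can be sketched in miniature. This is not Nova’s actual scheduler – just a toy illustration of the idea that workloads declare required traits (such as a GPU) and only land on machines that advertise them; the node and workload names are hypothetical:

```python
# Toy sketch of trait-based placement (not Nova's real scheduler):
# each workload declares the capabilities it requires, and the
# scheduler only considers free nodes whose traits cover them.

def schedule(workloads, nodes):
    """Assign each workload to the first free node with the required traits."""
    placements = {}
    free = list(nodes)
    for name, required in workloads:
        for node in free:
            if required <= node["traits"]:  # set containment check
                placements[name] = node["name"]
                free.remove(node)           # a bare metal node hosts one workload
                break
    return placements

nodes = [
    {"name": "metal-01", "traits": {"x86_64"}},
    {"name": "metal-02", "traits": {"x86_64", "gpu"}},
]
workloads = [
    ("training-job", {"x86_64", "gpu"}),   # needs a GPU-equipped machine
    ("web-frontend", {"x86_64"}),          # any x86 node will do
]

print(schedule(workloads, nodes))
# {'training-job': 'metal-02', 'web-frontend': 'metal-01'}
```

The GPU workload skips the plain node and claims the accelerated one, leaving the generic node for the frontend – the same kind of distinction Ironic-backed Nova makes between classes of physical machines.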
At this point in the development of OpenStack, being able to provision bare metal and deploy applications to it as if it were a VM is much more important than scaling to 1 million machines and 60 million VMs in a single OpenStack cloud. Ditto for the support of various kinds of software containers, notably Docker.
While OpenStack has a Nova driver that allows it to deploy Docker containers inside of VMs, and others have come up with ways to orchestrate LXC containers using Nova, the OpenStack community wants to have a separate container management system, and that is Project Magnum. As The Next Platform reported last week, Project Magnum is coming along and will initially interface with Docker Swarm and Google Kubernetes, which are tools that manage clusters of Docker containers. Kubernetes will support the AppC container format put forth by CoreOS, and will also likely support LXC and other container formats. Adrian Otto, who is the principal architect at Rackspace and the project team leader for the Magnum container-as-a-service effort, told The Next Platform last week that Magnum could be ready for primetime around the “Liberty” release of OpenStack this October. A lot depends on how the community decides to steer the project at this week’s summit.
The key point is that with support for virtualized servers, bare metal servers, and software containers, OpenStack will be able to deploy applications in the most popular manner that most customers will want. VMware does not do bare metal provisioning, and seems to be allergic to the idea, while Microsoft already supports bare metal provisioning of hypervisors using its System Center Virtual Machine Manager add-on and could go so far as to support bare metal provisioning of its upcoming Nano Server operating system if enough customers push for it. (Windows Azure no doubt already has bare metal provisioning, just as it has been running Nano Server for some time already.) OpenStack can embrace everything, but VMware and Microsoft have to walk more carefully. And that is the best reason to think that, despite the challenges of orchestrating the needs of thousands of customers and hundreds of interested vendor participants in the community, OpenStack will continue to adapt and see accelerated adoption.
One last thing: The other hot item in the Kilo release is the support of erasure codes for data protection for the Swift object storage service. This is something that OpenStack users have been waiting for, and Bryce says that it “has the potential to radically alter the economics of object storage” on OpenStack because it will mean not having to replicate data to ensure its durability.
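The economics Bryce is talking about come down to arithmetic. Swift’s erasure coding is built on Reed-Solomon codes (via external libraries), but a minimal single-parity XOR sketch shows the idea: split an object into k data fragments plus parity, and any single lost fragment can be rebuilt at roughly (k+1)/k storage overhead instead of the 3x cost of triple replication. This toy code is illustrative only, not Swift’s actual implementation:

```python
# Minimal erasure-coding sketch with one XOR parity fragment.
# Swift's real implementation uses Reed-Solomon codes; this toy
# version just shows why durability no longer requires 3x copies.

from functools import reduce

def xor_all(fragments):
    """Byte-wise XOR of a list of equal-length byte strings."""
    return bytes(reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), fragments))

def encode(data: bytes, k: int):
    """Split data into k equal fragments and append one XOR parity fragment."""
    assert len(data) % k == 0, "toy code: data must split evenly"
    size = len(data) // k
    frags = [data[i * size:(i + 1) * size] for i in range(k)]
    return frags + [xor_all(frags)]

def recover(frags, lost_index):
    """Rebuild any single lost fragment by XOR-ing the survivors."""
    return xor_all([f for i, f in enumerate(frags) if i != lost_index])

obj = b"economics of object storage!"        # 28 bytes, splits evenly into 4
frags = encode(obj, k=4)                     # 5 fragments stored, not 3 full copies
rebuilt = recover(frags, lost_index=2)       # simulate losing one data fragment
assert rebuilt == frags[2]
assert b"".join(frags[:4]) == obj
```

Here a 28-byte object costs 35 bytes to store with single-fault durability (1.25x), where triple replication would cost 84 bytes – the kind of gap that can “radically alter the economics of object storage.”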
Other storage innovations will drive OpenStack, too, and as an example, Bryce cites an unnamed customer that moved to seven racks of Open Compute servers and Open Vault high-density storage servers like those employed by Facebook and ran the Cinder block storage on top of that. “They were able to take a risk on this new architecture for storage and their developers don’t even know what’s going on underneath,” explains Bryce, adding that the performance and bang for the buck are “pretty amazing” compared to the vendor storage this company had been using. The setup has a hypervisor with only one virtual machine per compute node, and the virtualization is just used for the sake of having better management of the software running on the nodes. This customer puts a commercial Hadoop distribution on each of the server nodes in the cluster, and then uses Cinder to mount multiple volumes into the virtual machines, basically striping across the volumes to get very zippy storage performance on the underlying HDFS file system. “This setup is actually beating the performance of their prior Hadoop system, which was directly on bare metal.”
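The striping trick that unnamed customer is using can be sketched in a few lines. This is a hypothetical illustration of the round-robin idea – chunks of a data stream fan out across several attached volumes so sequential reads hit all of them in parallel; the volume names and chunk size are invented for the example, not taken from the deployment described above:

```python
# Hypothetical sketch of striping data round-robin across several
# attached block volumes, so sequential I/O fans out over all of them.
# Volume names and the tiny chunk size are illustrative only.

CHUNK = 4  # bytes per stripe unit (tiny, for demonstration)

def stripe(data: bytes, volumes):
    """Distribute fixed-size chunks across volumes in round-robin order."""
    layout = {v: [] for v in volumes}
    for i in range(0, len(data), CHUNK):
        vol = volumes[(i // CHUNK) % len(volumes)]
        layout[vol].append(data[i:i + CHUNK])
    return layout

def reassemble(layout, volumes, total_chunks):
    """Read chunks back in stripe order and rebuild the original stream."""
    out = []
    for n in range(total_chunks):
        vol = volumes[n % len(volumes)]
        out.append(layout[vol][n // len(volumes)])
    return b"".join(out)

vols = ["vol-a", "vol-b", "vol-c"]
data = b"HDFS blocks striped over Cinder volumes!"
layout = stripe(data, vols)
num_chunks = -(-len(data) // CHUNK)  # ceiling division
assert reassemble(layout, vols, num_chunks) == data
```

Each volume ends up holding every third chunk, so a large sequential read pulls from all three volumes at once – the same effect that let this customer beat the performance of their prior bare metal Hadoop setup.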
Isn’t that ironic?