Building The Stack Above And Below OpenStack

It has been six years since the “Austin” release of the OpenStack cloud controller came out of the partnership of Rackspace Hosting, which contributed its Swift object storage, and NASA, which contributed its Nova compute controller. NASA was frustrated by the Eucalyptus cloud controller, which was not completely open source and which did not add features fast enough, and Rackspace, in a fight for mindshare and market share against its much larger cloud rival, Amazon Web Services, wanted to leverage both open source and community to push back.

OpenStack may not have played out exactly the way that either NASA or Rackspace had anticipated, but that is often the case with open source projects. (Look at how Hadoop has benefitted Cloudera, MapR Technologies, and Hortonworks more than it ever did Yahoo, which created it, or Google, whose papers gave Yahoo the idea in the first place. Or, for that matter, how Google is trying so hard to foment a Kubernetes container world, balancing its own desire for control against the openness required to build a community.) It is hard to call OpenStack anything but a success in a difficult IT environment with thousands of competing interests amongst its many contributors.

Now that the OpenStack Summit in Barcelona has settled down, we thought it would be a good time to take stock of the state of OpenStack, particularly in an IT environment where different parts of the infrastructure stack are competing for control. As we have pointed out before, there is much debate about whether OpenStack, Kubernetes, or Mesos will end up being the uber-controller in the datacenters of the future – at least those that rely upon open source software. (Microsoft and VMware have their own ideas about this, as does Amazon Web Services.) With the “Newton” release of OpenStack out and the community working on the “Ocata” release, due early next year, and the “Pike” release on the horizon for a year from now, a lot has been accomplished and OpenStack has taken yet another step towards being a more polished, enterprise-grade product. Which, ultimately, is what open source projects all aspire to.

OpenStack has come a long way from the early days, when it consisted of Nova and Swift and the Glance virtual machine imaging system, and Eucalyptus and CloudStack, which were contenders for the open source cloud controller crown, have more or less gone the way of all flesh. We are fond of pointing out that the original goal for NASA and Rackspace was to create a control plane that could span 1 million host machines and 60 million virtual machines. (That is about as many virtual machines as the current base of VMware users – all of them in the entire world – has, just for perspective.) Thus far, no single OpenStack user has needed anything that scales quite that far – is that zettascale or yottascale? – and so the focus has been on making something that can scale across a few tens of thousands of nodes at most and several hundreds of thousands of cores for the largest customers who might use OpenStack.

[Chart: The OpenStack customer base keeps fairly up to date with releases]

Back in those early days in 2010, OpenStack had those three projects and consisted of 124,000 lines of code being managed by fewer than 100 contributors, most of them from Rackspace and NASA. By late 2014, when the “Juno” release came out and there were 11 key services available with OpenStack, the effort had swollen to 41 projects, 2.65 million lines of code, and over 4,000 contributors. Only two years later, there are nearly 64,000 people participating in the OpenStack community, and the 57 projects (not all of them in the main trunk yet) that make up OpenStack have over 20 million lines of code. Complain as people might about what OpenStack is lacking, you have to admit that this has been one of the most successful open source projects in history, at least among those that deploy enterprise-class software that does complex things.

To get a feel for what is happening in the OpenStack community and the installed base, we had a chat with Jonathan Bryce, a former Racker who has been executive director of the OpenStack Foundation for the past several years.

Bryce says that there is an advantage in having thousands of OpenStack deployments in public and private clouds. This is a big user base, and a lot of the organizations that deploy OpenStack are very involved in the projects and give real-world feedback about what features they need and which ones they do not. They are also not shy about telling project team leaders what is working and what is not. Neutron virtual networking, which Bryce says used to be a “problem child,” has come a long way within the past two releases and is now one of the more powerful aspects of OpenStack, even more polished than in the prior “Mitaka” and “Liberty” releases.

As an example, Bryce tells us the story of an unnamed payroll processing company that was a relative newcomer to OpenStack and that wanted to bring in containers. (Both ADP and Paychex use OpenStack, so we don’t know which one he is talking about.) If this company did containers separately from OpenStack, it would have had to revalidate all aspects of that container stack and its interconnection with networks and storage and auditing and compliance tools. Because it already had OpenStack, this company built its container workflow on top of OpenStack, reusing Neutron for networking and Cinder for block storage, which gave it a head start on bringing in the emerging technology without having to do all that validation work.
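To make that reuse concrete, here is a minimal sketch using the standard Python OpenStack client libraries of the era (keystoneauth1, python-neutronclient, python-cinderclient). The endpoint, credentials, and resource names are hypothetical placeholders, not anything from the company in question.

```python
# A minimal sketch of the kind of reuse described above: the same
# Keystone identity, Neutron networks, and Cinder volumes that back
# virtual machines can back a container stack. The endpoint and
# credentials below are hypothetical placeholders.
from keystoneauth1.identity import v3
from keystoneauth1 import session
from neutronclient.v2_0 import client as neutron_client
from cinderclient import client as cinder_client

auth = v3.Password(
    auth_url="http://controller:5000/v3",   # hypothetical endpoint
    username="demo",
    password="secret",
    project_name="demo",
    user_domain_name="Default",
    project_domain_name="Default",
)
sess = session.Session(auth=auth)

# A tenant network for container traffic, created through Neutron just
# as it would be for virtual machines.
neutron = neutron_client.Client(session=sess)
net = neutron.create_network({"network": {"name": "container-net"}})

# A block volume for container data, carved out by Cinder.
cinder = cinder_client.Client("2", session=sess)
vol = cinder.volumes.create(size=10, name="container-data")
```

The point of the sketch is that the container stack rides on resources that have already been validated for the virtual machine estate, which is exactly the head start Bryce describes.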

[Chart: Containers are the most popular thing that OpenStack customers are looking for when it comes to emerging technologies, according to the latest OpenStack user survey]

With the Newton release, containers are supported on Mesos, Kubernetes, and Docker Swarm running on top of OpenStack with either virtual machines or bare metal, all leveraging OpenStack’s security model. This release brings them all together, with varying degrees of harmony.
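In practice, spinning up one of those clusters goes through Magnum, OpenStack’s container infrastructure service. Here is a minimal sketch using the python-magnumclient library, reusing the Keystone session from the sketch above; the template name is a hypothetical placeholder, and as of Newton the API objects are called “clusters” and “cluster templates” (renamed from “bays” and “baymodels”).

```python
# A sketch of launching a Kubernetes cluster through Magnum, reusing
# the Keystone session ("sess") from the earlier sketch.
from magnumclient import client as magnum_client

magnum = magnum_client.Client("1", session=sess)

# The cluster template (created separately by an operator) pins the
# orchestration engine (Kubernetes, Swarm, or Mesos) along with the
# image, flavor, and network choices. "k8s-template" is hypothetical.
cluster = magnum.clusters.create(
    name="k8s-demo",
    cluster_template_id="k8s-template",
    node_count=3,
)
```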

“People are going to want a little of all of these technologies,” says Bryce. “If we do the integration really well, the demand for computing power is not shrinking at all and it is not going to any time in the next couple of decades. So there is opportunity for all of these projects and technologies. What I think is dangerous is if we try to carve off all of our own areas. Then we will end up stifling the innovation and growth, and it prevents users from getting to the business outcomes that they want.”

There is a lot of hubbub about how to build the next cloud platform, and everybody is questioning how best to do it and whether you start with OpenStack, Kubernetes, or Mesos as a base and then build services on top of it.

[Chart: The Kubernetes container orchestrator is by far the most popular platform-level layer to add atop OpenStack these days, but only a minority of sites have any such layer at all]

“It is the question of the year,” says Bryce with a laugh. “A lot of people are thinking about this, and from my perspective, there is actually way more benefit from co-existence and collaborating and using these technologies together than using them as competitive islands with different approaches. This is based on what I hear from users and see myself, that companies have such a wide variety of workloads that there is not a single technology that is going to meet all of their needs. So what they are looking for are the technologies that will meet their needs and work together in such a way that it doesn’t punish them for using multiple technologies. The opportunity that those of us who are in the infrastructure world face is to connect these new and existing technologies together that best meets the variety of workloads that users have.”

Way back when OpenStack was first starting to take off in 2012 and 2013 and the industry was lining up behind it as the heir to Eucalyptus and CloudStack (which both already had customers and momentum), we used to think that OpenStack would be forced to bring in a platform cloud layer of some kind because people would naturally want these things to mesh well. It looked like Cloud Foundry might have been the natural thing to actually merge with OpenStack, but we are now starting to think that OpenStack might be compelled to bring Kubernetes in underneath OpenStack – not on top of it – and make it part of the stack, more formally than what CoreOS, Canonical, Red Hat, and SUSE Linux are doing with their Linux distributions. We don’t think this is a bad thing.

[Chart: The majority of OpenStack installations are production and only a small portion are now proofs of concept compared to prior years]

“I don’t see this as a bad thing, either,” Bryce concurs. “There is so much hype around this, and any time there is a lot of hyped technology, then different interests view it as a zero sum game instead of a positive sum game. They think that it is a land grab, like for instance if Kubernetes is a layer that runs on top of OpenStack, then somehow that means OpenStack loses, or if OpenStack is required for Kubernetes then that means Kubernetes loses. I think that is the wrong way to look at it. They are different technologies with different purposes, and when you put them together they help each other in a lot of ways. Kubernetes is an application tool, and OpenStack is an infrastructure tool, but OpenStack is itself an application so you can use Kubernetes to run and operate OpenStack, and once you are doing that, you can use OpenStack to do multi-tenant and self-service Kubernetes for your end users. Trying to break it up into these distinct competitive layers is counterproductive.”

Ironically, Bryce says that the big goal is to create a community around these technologies, not a profitable company, and that this emphasis (in contrast to VMware vCloud or Microsoft Azure Stack, for instance) is what differentiates certain technologies from others out there.

It is hard to imagine how to mesh Mesos and OpenStack well, but Kubernetes can run on top of Mesos or OpenStack or underneath OpenStack, and it is likely that various options will be available for those wanting to install private clouds or build public ones.

OpenStack can be deployed on bare metal using a Linux operating system, as has been done from the beginning, with hypervisors on compute nodes and adjacent storage nodes, or the OpenStack control plane can be deployed inside of Docker containers managed by Kubernetes using the Ansible playbooks of Project Kolla. (About 40 of the OpenStack services have been containerized so far.) OpenStack provisions virtualized or bare metal compute nodes, and if you want to run a container service on top of OpenStack (which is distinct from the containerized implementation of the OpenStack control plane itself), then you load up Project Magnum, which is very similar to the Google Container Engine service that the search engine giant has made available on its public cloud. The Magnum software is a bit more mature than Kolla at this point, concedes Bryce, but it is helpful to remember that Magnum is only two years old this month. The supercomputing facilities at CERN have several hundred thousand cores under management by OpenStack, and have tested Magnum at scale.
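Continuing the Magnum sketch from above: once the cluster request is in, OpenStack provisions the nodes (virtual or bare metal) underneath, and the cluster object eventually exposes a Kubernetes API endpoint that standard tooling can be pointed at. The status values and attribute names below follow our reading of the Newton-era Magnum API.

```python
# Continuation of the earlier Magnum sketch: wait for the cluster build
# to finish, then fetch the Kubernetes API endpoint it exposes.
import time

while True:
    cluster = magnum.clusters.get(cluster.uuid)
    if cluster.status == "CREATE_COMPLETE":
        break
    if cluster.status == "CREATE_FAILED":
        raise RuntimeError("cluster build failed")
    # Heat is orchestrating Nova nodes underneath; builds take minutes.
    time.sleep(30)

# The endpoint that kubectl (or any Kubernetes client) talks to; the
# nodes behind it are ordinary Nova instances on VMs or bare metal.
print("Kubernetes API endpoint:", cluster.api_address)
```

This layering is the whole argument in miniature: Magnum hands out Kubernetes as a multi-tenant, self-service resource, while Nova, Neutron, and Cinder do the infrastructure work below it.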

The Newton release of OpenStack, which started shipping in early October, had a big focus on getting the networking and container support polished. It remains to be seen what will happen with the Ocata and Pike releases.

The interesting bit that came out of the recent survey is why organizations say they are installing OpenStack. About 72 percent of those polled said that saving money over alternatives in the market for building clouds was the number one reason, and the second and third most important reasons were the ones we would expect: increasing operational efficiency and helping to innovate faster by getting applications deployed quicker. We would love to see how much money OpenStack shops save compared to the alternatives and just how much more agile they are.
