Google Fosters Another OpenStack Kubernetes Mashup

Because Google is such a wildly successful company and a true innovator when it comes to IT platforms, and because we know more about its infrastructure at a theoretical level than what has been built by other hyperscalers and cloud providers, it is natural enough to think that the future of computing for the rest of us will look like what Google has already created for itself.

But ironically, only by going into the public cloud business has Google been forced to change its infrastructure enough to make it look more like what large enterprises will need, and that means supporting not just containers but also virtualization, as the Google Compute Engine public cloud does. The thing is, though, that the vast majority of computing is still done on premises by the companies of the world, and Google needs to help foster a software stack that can support traditional virtual machines as well as newfangled containers. To that end, Google, OpenStack distributor Mirantis, and chip maker Intel, which is keen on promoting private cloud computing to drive its Data Center Group for the next decade, are teaming up to containerize – some would say civilize – the OpenStack cloud controller so it can run properly on a substrate of the Kubernetes container controller, bringing together the new and old worlds of virtualization in a single platform.

With the effort announced this week by Mirantis, which will containerize its commercial implementation of OpenStack so it runs atop a homegrown and embedded version of Kubernetes, the industry will get a second option for mashing up these two virtualization environments. A year ago, we pondered which software would ultimately control the clusters of the future – OpenStack, Mesos, or Kubernetes – and told you that there was work underway to bring OpenStack on top of Kubernetes, a plan that Intel revealed back in March of this year with Linux and commercial Kubernetes supplier CoreOS and OpenStack distributor Mirantis. With that initiative, Kubernetes was clearly being put in charge of OpenStack:

[Image: Intel Cloud Day slide showing Kubernetes as the substrate underneath OpenStack]

While the software is different from the Borg controller that Google uses internally, the approach is similar. Google lays down a container substrate built into its homegrown Linux for its internal applications, and adds a virtual machine substrate on top of that, based on KVM, for situations where increased resource isolation or security is required, as is the case with the Google Compute Engine public cloud.

CoreOS has created its own Google-like software infrastructure, called Tectonic, which implements a container management system based on Kubernetes plus a slew of other tools it has created that turn it into a true platform. Mirantis could have simply adopted Tectonic as its own Kubernetes layer, but it has instead chosen to implement its own Kubernetes layer and use it to underpin its Mirantis OpenStack 10 release that will be coming out in the first quarter of 2017, Boris Renski, co-founder and chief marketing officer at Mirantis, tells The Next Platform. Renski says that the effort does not have a codename, but that it will essentially turn OpenStack into a VM PaaS, or virtual machine platform as a service, layer running on top of Docker containers that are controlled by Kubernetes, rather than a collection of independent management components running on separate physical servers that have to be upgraded and maintained simultaneously, often with great annoyance.
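To make that idea concrete, here is a minimal sketch, using the official Kubernetes Python client, of what running one OpenStack control plane service as a Kubernetes Deployment might look like. The namespace, service, container image, and port are hypothetical placeholders for illustration, not details Mirantis has published.

```python
# Hypothetical sketch: run an OpenStack control plane service (here, the
# Keystone identity API) as a Kubernetes Deployment, so Kubernetes handles
# placement, restarts, and rolling upgrades instead of hand-managed processes
# on dedicated controller nodes.
# Requires: pip install kubernetes
from kubernetes import client, config


def deploy_keystone(namespace: str = "openstack", replicas: int = 2) -> None:
    config.load_kube_config()  # use local kubeconfig credentials

    labels = {"app": "keystone-api"}
    container = client.V1Container(
        name="keystone-api",
        image="example.registry/openstack/keystone:newton",  # placeholder image
        ports=[client.V1ContainerPort(container_port=5000)],
    )
    deployment = client.V1Deployment(
        api_version="apps/v1",
        kind="Deployment",
        metadata=client.V1ObjectMeta(name="keystone-api", namespace=namespace),
        spec=client.V1DeploymentSpec(
            replicas=replicas,
            selector=client.V1LabelSelector(match_labels=labels),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels=labels),
                spec=client.V1PodSpec(containers=[container]),
            ),
        ),
    )
    client.AppsV1Api().create_namespaced_deployment(namespace, deployment)


if __name__ == "__main__":
    deploy_keystone()
```

The point of the sketch is the operational model: once the control plane services are ordinary Deployments, upgrading OpenStack becomes a rolling image update managed by Kubernetes rather than a coordinated outage across controller servers.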

Making OpenStack easier to consume and maintain is a big benefit, of course. There is always plenty of gnashing of teeth about how OpenStack is relatively easy to install but very difficult to operate, upgrade, and troubleshoot once it has been stood up. Renski says that Google is keen on creating an open source stack that is compatible with Google Cloud Platform, with a similar Kubernetes container management system and KVM layer, so the search engine giant can better compete to be the on-premises private cloud at large enterprises. Having a single virtualization stack that can provide both VMs and containers is something that Mirantis and CoreOS want to offer, and Google and Intel are both thrilled to see competition here to drive innovation and then adoption. (Red Hat will no doubt mount a similar effort at some point, with a resulting unified Kubernetes-OpenStack platform, if this idea takes off.)

The idea behind this mashup between Kubernetes and OpenStack is simple: Companies are not yet ready to give up on computing in their own datacenters, but they want a single substrate that can span public and private clouds. The vendors that can provide tools that are the most similar across these two locations of computing will be able to eat market share, and it is important to remember that it is still early days in cloud computing. Microsoft has cooked up Azure Stack, due to be released later this year in the wake of the Windows Server 2016 launch in October, as its on-premises analog to the Azure public cloud, and there are persistent rumors that Amazon Web Services, the juggernaut of the public cloud, will sell pieces of its infrastructure for customers to run in their own datacenters. (This is a bit of heresy for AWS, which believes the only cloud is public and the only one that matters is its own.)

“I think that all three vendors understand that the next big territory up for grabs is on premises, and according to Gartner, over 95 percent of all workloads in the datacenter are still on premises and 99 percent of workloads that have been virtualized are still running inside of VMs,” says Renski. “They are all scrambling to have an on premises story.”

Intel gets to sell chips no matter where the cloud lands, but it thinks that any organization with between 1,200 and 1,500 servers should be building its own private cloud, and says that at that scale it can operate efficiently enough to justify the investment in systems and datacenters. But as Renski put it, and we would concur, the real issue here might be that Intel does not want to end up with only a handful of customers who have all the buying power. It would rather have a dozen customers that command 20 percent of Xeon chip revenues and another 50,000 that make up the other 80 percent. That may sacrifice some computing and economic efficiency, perhaps, but it will preserve Intel's revenue and profit growth.

The Mesos job scheduler has a framework for adding the Kubernetes container management system on top of itself, and OpenStack has a project called "Magnum" that provides an abstraction layer so that container orchestrators like Kubernetes, Docker Swarm, or Mesos can plug into the cloud controller behind the OpenStack APIs. (You can see our analysis of Magnum here.)

“In our view, Magnum is kind of a flawed value proposition because a lot of the attractiveness of tools like Kubernetes is actually their APIs, and masking them behind something like Magnum does not make sense,” says Renski. “So the reverse approach, where you have pure Kubernetes as the underlying substrate and customers can talk to the overlaying OpenStack APIs or directly to the Kubernetes APIs, is a much cleaner way to do this. Not to mention the fact that it mirrors the whole Google design pattern.”
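As a rough illustration of that dual-API idea, the sketch below uses the openstacksdk and Kubernetes Python clients side by side: the same operator could list VMs through the familiar OpenStack compute API while inspecting the containerized control plane directly through the Kubernetes API, with no Magnum-style wrapper in between. The cloud name and namespace are hypothetical; this is not code from Mirantis or CoreOS.

```python
# Hypothetical sketch of the "two front doors" approach: OpenStack APIs for
# the VM layer, Kubernetes APIs for the containerized control plane beneath it.
# Requires: pip install openstacksdk kubernetes
import openstack
from kubernetes import client, config


def list_vms(cloud_name: str = "private-cloud") -> None:
    # Talk to the OpenStack compute API (Nova) the way tenants always have.
    conn = openstack.connect(cloud=cloud_name)  # reads credentials from clouds.yaml
    for server in conn.compute.servers():
        print(f"VM: {server.name} status={server.status}")


def list_control_plane_pods(namespace: str = "openstack") -> None:
    # Talk straight to the Kubernetes API to see the OpenStack services
    # themselves running as pods on the underlying substrate.
    config.load_kube_config()
    pods = client.CoreV1Api().list_namespaced_pod(namespace)
    for pod in pods.items:
        print(f"Pod: {pod.metadata.name} phase={pod.status.phase}")


if __name__ == "__main__":
    list_vms()
    list_control_plane_pods()
```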

By the way, Mirantis and CoreOS are working together on the “universal scheduler” that allows Kubernetes to control OpenStack, and both companies are contributing the code for this to the relevant upstream projects and using it as the basis of their respective integrated commercial platforms. These are just two different implementations of the same idea, based on the same upstream code.

Another thing to realize: In both cases, with the future Mirantis OpenStack and CoreOS Tectonic stacks, only the management components of OpenStack are being containerized, not the KVM hypervisors and the actual VMs that run application software, which are still laid down on bare metal. If customers want to lay down containers underneath their hypervisors for additional security, they can of course do this, as Google does with its Compute Engine public cloud, but for on-premises computing that will very likely not be necessary. Over the long haul, says Renski, the security in Kubernetes and Docker will be good enough that even this step won't be needed.

So what about using Mesos as a substrate for OpenStack? That is another route Mirantis could take, if customers pull it in that direction.

“That is a very interesting question,” says Renski with a laugh. “In the beginning, we were thinking about using Mesos as a kind of core substrate. But the short answer to your question is that we don’t give a damn. Ultimately, what we are trying to do is embrace the standards that are emerging in different spaces, and just like back in the day there was OpenStack and Open Nebula and CloudStack and Eucalyptus and they all converged on OpenStack, we are making a bet on Kubernetes.”
