No Slowdown in Sight for Kubernetes

Kubernetes has quickly become a key technology in the emerging containerized application environment since Google engineers first announced it a little more than three years ago. It has caught hold as the primary container orchestration tool used by hyperscalers, HPC organizations and enterprises, overshadowing similar tools like Docker Swarm, Mesos and OpenStack.

Born from the earlier internal Google projects Borg and Omega, the open-source Kubernetes has been embraced by top cloud providers and a growing number of enterprises, and support is spreading among datacenter infrastructure software vendors.

Red Hat has built its OpenShift cloud application platform on both Docker container technology and Kubernetes to bring automation to deploying, scaling and managing containerized applications; IBM is building its private cloud offering atop containers, Kubernetes and the Cloud Foundry framework; and Cisco Systems is integrating Kubernetes into its Application Centric Infrastructure (ACI) software. Docker announced in October that it is integrating Kubernetes into its container platform, giving customers the choice of using Kubernetes, Docker Swarm or both for orchestration without having to make changes in their Docker operations.

The speed with which Kubernetes has grown among cloud providers like Amazon Web Services, Microsoft Azure and Google Cloud Platform and in the enterprise has surprised Ted Dunning, chief application architect for data platform provider MapR Technologies, who doesn’t believe the pace will slow anytime soon. The capabilities in Kubernetes surpass what can be found in other container orchestration technologies, and Google is pushing hard to expand what the technology can do, unhampered by the consensus-building processes that tend to slow other open-source projects.

“It is gathering adoption at a stunning rate,” Dunning told The Next Platform. “Friends within Microsoft, for instance, say that a huge fraction of the new clusters just spawned in the Azure Container Service are being managed by Kubernetes. Certainly among our more advanced customers, we’re seeing dramatic adoption, in Europe and in the U.S. Where there is a previous solution, such as Mesos, adoption is a little slower, but for virgin fields, the adoption is just stunningly fast.”

Helping to drive that adoption is the simplicity and ease of use inherent in Kubernetes.

“The simplicity of the Kubernetes model in terms of how things are managed and how you specify what you want is pretty persuasive,” he said. “You have a simpler and pretty uniform model for describing things. The idea of affinities and disaffinities, things that will just be respawned, a simplification of a lot of responsibilities, where in Mesos you have to glue things together. With Mesos, you have to use something like ATOS [to get an] overall dashboard, you have to get Marathon for long-running things that get respawned, you need some sort of time-management system, perhaps Chronos, and you have to put together a pretty fair number of moving parts, and those parts have a history of some reliability issues. The quality of Kubernetes code, on the other hand, is pretty much irreproachable. And you can add things that are not widely adopted yet that will have a big impact, such as Helm. Helm is a pretty interesting system specification language that lets you model the cradle-to-grave deployment process of software with two extra steps beyond the compiling [and] build steps that you normally see. … It’s really, really nicely done. I think that is going to have a growing impact on people as they really try to put really serious production workloads into Kubernetes.”
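
To make that declarative model concrete, here is a minimal sketch using the official Kubernetes Python client; the deployment name, image and labels are hypothetical examples, and the point is simply that you describe the desired state (three replicas, spread across nodes) and the cluster’s controllers keep respawning pods to match it.

```python
# Minimal sketch of Kubernetes' declarative model using the official
# Python client (pip install kubernetes). Assumes a kubeconfig is present;
# the "web" deployment, image, and labels are hypothetical examples.
from kubernetes import client, config

config.load_kube_config()  # read credentials from ~/.kube/config

labels = {"app": "web"}

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="web"),
    spec=client.V1DeploymentSpec(
        replicas=3,  # the controller respawns pods to keep three running
        selector=client.V1LabelSelector(match_labels=labels),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels=labels),
            spec=client.V1PodSpec(
                containers=[client.V1Container(name="web", image="nginx:1.13")],
                # Anti-affinity: ask the scheduler to spread replicas across
                # nodes rather than stacking them all on one host.
                affinity=client.V1Affinity(
                    pod_anti_affinity=client.V1PodAntiAffinity(
                        preferred_during_scheduling_ignored_during_execution=[
                            client.V1WeightedPodAffinityTerm(
                                weight=100,
                                pod_affinity_term=client.V1PodAffinityTerm(
                                    label_selector=client.V1LabelSelector(
                                        match_labels=labels
                                    ),
                                    topology_key="kubernetes.io/hostname",
                                ),
                            )
                        ]
                    )
                ),
            ),
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```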

Helm is an example of the way Kubernetes is designed to quickly accept new features and capabilities. Now under the purview of the Cloud Native Computing Foundation (CNCF) and being developed by Google, Microsoft, Bitnami and a developer community, the technology lets users better define, install and upgrade complex Kubernetes applications through the use of charts, which can be shared with others and published. Helm was designed as a matched set with Kubernetes, and is an example of what can be done with the support of companies like Google and Microsoft.
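
As a rough illustration of what a chart actually is, the sketch below scaffolds a minimal chart directory from Python (the chart name “hello-web” and its values are hypothetical); a chart is just a versioned bundle of metadata, default values and templated Kubernetes manifests that Helm can install and later upgrade, typically with something like `helm install ./hello-web` (exact flags depend on the Helm version).

```python
# Scaffold a minimal, hypothetical Helm chart layout for illustration:
# Chart.yaml (metadata), values.yaml (defaults), and a templated manifest.
import os
import textwrap

chart_dir = "hello-web"
os.makedirs(os.path.join(chart_dir, "templates"), exist_ok=True)

files = {
    "Chart.yaml": """\
        apiVersion: v1
        name: hello-web
        version: 0.1.0
        description: A hypothetical example chart
        """,
    "values.yaml": """\
        replicaCount: 3
        image: nginx:1.13
        """,
    "templates/deployment.yaml": """\
        apiVersion: apps/v1
        kind: Deployment
        metadata:
          name: {{ .Chart.Name }}
        spec:
          replicas: {{ .Values.replicaCount }}
          selector:
            matchLabels:
              app: {{ .Chart.Name }}
          template:
            metadata:
              labels:
                app: {{ .Chart.Name }}
            spec:
              containers:
                - name: {{ .Chart.Name }}
                  image: {{ .Values.image }}
        """,
}

for path, body in files.items():
    with open(os.path.join(chart_dir, path), "w") as f:
        f.write(textwrap.dedent(body))
```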

Docker wasn’t designed for large-scale production use, and Mesos seems to be just “kind of floating along and seems rudderless,” Dunning said. “It doesn’t seem to have an aggressive sort of engineering team behind it, and certainly doesn’t seem to have two of them. Kubernetes has two of them in Microsoft and Google. … My experience in open-source projects is mostly in non-centrally driven projects – Apache-style projects – with consensus building, whereas Google is driving it at the pace they think it needs to go and they don’t really worry about consensus-building. They’re driving the project.”

Organizations are beginning to understand what Kubernetes can do for them. A large software company that is a customer of MapR created eight exploratory teams to investigate the various container and orchestration offerings, and all eight teams came back with the recommendation to deliver all of the company’s products on Kubernetes.

“MapR has been using Kubernetes for years internally, but we hadn’t really put down a marker in terms of what we were going to be delivering to customers,” Dunning said. “The very, very dramatic speed with which this customer-slash-partner made their decision – and they’re very cautious European sort of folks – really stunned us. They started putting a lot of pressure on us to say that we would deliver our file system and everything as Kubernetes containers. That required some technical work, and we talked to Google and we talked to Microsoft, and I was stunned at how incredibly gung-ho they were about that. They were willing to lay down engineering time, and that sort of commitment really makes a difference in adoption. We were able to get changes made in Kubernetes’ main line in a matter of weeks, or two months, roughly. That was significant, and really facilitated what we were going to be able to do and to deliver.”

The need for Kubernetes is also being driven by the fast adoption of containers. Virtual machines were a boon to enterprises more than a decade ago, but enterprises are now looking for something that scales better, is easier to manage and is less costly. Containers, which are “radically different” from VMs, are the answer, he said.

“For one thing, the number of containers that you can have running is huge compared with VMs,” Dunning said. “A rack of computers might have 100 VMs in it, but it could easily be running 10,000 containers. A big reason for that is that you’re not replicating the kernel, you’re sharing all of that, so the containers are not that much more expensive than the applications running inside them. It’s kind of what VMs always wanted to be as a facile, quick way of managing running systems, except that containers are those facile, agile things, and VMs are kind of sluggy and big and they’re hogging a lot of memory and they’re running separate kernels and it’s a pain in the butt.”

The technology is far from settled. Going forward, containers need to mature. They work well now in a friendly environment where a single company is running them, but they are not as isolated as VMs, though that is beginning to change, he said. It also needs to become easier to interact with Kubernetes from the command line, and “it would be nice if it got as easy to do parallel program execution as it is now to run just an ordinary Linux program. There’s a fluidity there that I think is a bit lacking.”

However, “what has to happen with Kubernetes more than the development of it is the catching up of the user community to what Kubernetes can do already,” Dunning said, noting that organizations aren’t yet designing their applications to take advantage of everything containers and Kubernetes offer.

“I know that they’re not using the federation capabilities of Kubernetes,” he said. “Kubernetes is designed to federate multiple execution clusters, and that federation can be local [and] global sort of things. You can have a bit of cloud here, a bit of cloud there, a little bit of on-premises stuff, and we can execute pretty uniformly across that whole set. That I don’t think people are really capitalizing on, but that’s pretty exciting because it leads people to the possibility of cloud-neutral computing. I know the cloud leader is working very, very hard to make many kinds of lock-in happen. But I think that Kubernetes and cloud-neutral computing, the ability to very easily define functions on streams and just have them run, abstracting away the cloudiness of it, is going to really radically change that marketplace.”
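
The sketch below is not the Kubernetes federation API itself, just a simplified approximation of the idea Dunning describes: the same client code pointed at several kubeconfig contexts (the context names are hypothetical), treating a bit of cloud here and a bit of on-premises there as one uniform pool to query or deploy into.

```python
# Approximate the "run the same thing everywhere" idea with the official
# Python client: iterate over several kubeconfig contexts (hypothetical
# names below) and issue the same request against each cluster.
from kubernetes import client, config

contexts = ["gke-europe", "azure-us-east", "on-prem-lab"]

for ctx in contexts:
    api_client = config.new_client_from_config(context=ctx)
    core = client.CoreV1Api(api_client=api_client)
    pods = core.list_pod_for_all_namespaces()
    print(f"{ctx}: {len(pods.items)} pods running")
```

Swapping the pod listing for identical deployment creation in each cluster gets at the cloud-neutral pattern he points to, with the caveat that real federation tooling also handles placement and reconciliation rather than leaving that to a loop.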
