Future Clouds Could Be Just Containers On Bare Metal

If operating systems or server firmware had offered better isolation and workload scheduling, the last decade of server virtualization in the datacenter might never have happened. And in the long run, all of those virtual machines and the hypervisors that juggle them might also go the way of all flesh as the flexible configuration and workload isolation that came with clouds – meaning orchestrated VMs on clusters of servers overlaid with hypervisors – becomes available through a mix of bare metal provisioning and containers.

It could happen. Someday, a cloud may not have virtualization as we know it except in the rare cases where extra abstraction levels are necessary to better secure the places where applications run and data resides and where companies are willing to pay the extra performance penalty to provide that increased level of security.

The OpenStack community is preparing for that day should it come to pass, even though it might seem to obviate the need for something like OpenStack. In conjunction with the “Rocky” release of the OpenStack cloud controller last week, we sat down with Mark Collier, chief operating officer at the OpenStack Foundation, and Jonathan Bryce, the organization’s executive director, to talk about what the future holds for virtualization, bare metal, and containers.

Cloud controllers like OpenStack were originally created to provision hypervisors onto clusters of servers and then the VMs that contain whole operating systems and running applications atop those hypervisors. This is a lot of overhead, and Google, which contributed the cgroups mechanism to the Linux kernel, was an early and heavy user of Linux containers – built on namespaces and cgroups – to avoid paying the VM overhead tax and yet still provide some isolation between workloads running across its clusters. Docker containers build on these same kernel primitives, and have been open sourced and widely embraced. Google took many of the ideas behind its Borg and Omega cluster controllers, which have container orchestrators as part of their feature set, and open sourced a Go implementation of this software as Kubernetes, which is rapidly becoming the orchestrator of choice for Docker.
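The isolation primitives mentioned above are visible on any modern Linux machine. As a rough sketch (assuming a Linux host with /proc mounted), every process already carries a set of namespace handles, which a container runtime combines with cgroup resource limits to form what we call a container:

```shell
# List the namespaces the current shell belongs to (Linux only).
# A container runtime such as Docker gives each container its own
# set of these (pid, net, mnt, uts, ipc, ...) plus cgroup limits.
ls /proc/self/ns

# Show which cgroups currently constrain this process.
head -3 /proc/self/cgroup
```

No hypervisor is involved: the kernel itself partitions what each process can see and consume, which is where the "no VM overhead tax" claim comes from.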

According to preliminary results of the latest OpenStack user survey – the results of which will be published at the OpenStack Summit in Berlin in November – somewhere around 20 percent to 25 percent of customers are using the Ironic bare metal plug-in for OpenStack in production. That is up from 15 percent in the 2017 survey and 11 percent in the 2016 survey. Five years ago, says Bryce, 95 percent of workloads were using one hypervisor or another – mostly Xen (backed by Citrix Systems) or KVM (backed by Red Hat), with a smattering of VMware ESXi – and only a handful of shops were using bare metal provisioning. Taking a very educated stab in the dark, based on anecdotal evidence from hundreds of customers, Bryce estimates that somewhere between 70 percent and 75 percent of workloads at OpenStack sites are virtualized today, with the remaining 25 percent to 30 percent of workloads running on bare metal. This is an important distinction because OpenStack customers sometimes have many clusters and dozens to hundreds to thousands of workloads, and sometimes have mixed clusters with both bare metal and virtualized hosts.

There are early examples of customers going big with bare metal on OpenStack. Yahoo, now part of Verizon's Oath division alongside AOL, runs the Ironic plug-in at scale, with over 1 million cores under management running various applications. Adobe has an OpenStack cloud with more than 100,000 cores that is run by four people, which the company says is 30 percent cheaper to run than the same level of infrastructure on a public cloud. Enterprise SaaS software supplier Workday has a 50,000-core OpenStack cluster that is expanding to a 300,000-core footprint to support its rapidly expanding business.

The size of clusters at OpenStack shops continues to grow as the number of workloads on them increases and as the workloads themselves grow in terms of capacity and users. That means, for the moment, that the number of hypervisors and VMs on OpenStack clouds is also growing, even as the container is becoming the standard way of packaging up and deploying software.

As for containers, both virtualized and bare metal hosts are being equipped with containers and their orchestrators, with the Docker container dominating and with Kubernetes, Docker Swarm, and Mesos with its own add-ins being the dominant ways to orchestrate. Precisely how prevalent containers are for applications is something the OpenStack Foundation will be trying to figure out. But suffice it to say, a portion of the workloads are being containerized and this share will only grow over time. And it may grow so much, and bare metal and container environments may become sophisticated and secure enough, that virtualization is no longer needed except for legacy support.

“At some point, the number of virtual machines may start to shrink, but for right now, VMs are extremely prevalent and the way that software can be run in an immediately useful and compatible way,” says Bryce. “The key is that with OpenStack, you have the ability to use whatever technology makes sense in the environment, and that could be running directly on bare metal, using containerized applications, or supporting VMs. For years to come, that is where the majority of IT shops will be. This is true of the big public clouds, too – their customers deploy mostly virtualization. So the real power is to have all of these technologies available in a single platform. That said, the easier it is to deploy bare metal, the more it will take over.”

There are a number of new or improved OpenStack projects that aim to make bare metal provisioning and cluster management easier. With the Rocky release, the highlights of which you can see here, there is a new RAMdisk deployment interface for the Ironic bare metal controller that allows large-scale clusters to have their images loaded into and booted from main memory rather than local storage, and this will significantly speed up the deployment of bare metal servers. Ironic also has an interface to control BIOS settings in the physical servers and configure all kinds of settings, such as SR-IOV peripheral bus virtualization. The Cyborg interface to GPU accelerators now allows FPGAs to be accessed and reprogrammed from within the OpenStack framework, including a REST API for doing this programmatically and automatically. Cyborg is obviously useful for both HPC and machine learning workloads. The TripleO deployment tool is improved with Rocky as well, allowing for better fast forward upgrades. Under normal circumstances, you have to upgrade in order through the releases of OpenStack to get current, but with TripleO, you can jump two or three releases in a single step. Oath will be talking about its experience with fast forward upgrades, moving from the “Juno” to the “Ocata” release, at the upcoming Berlin conference.
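As a rough sketch of how these Ironic capabilities surface to operators (the node name here is hypothetical, and the commands assume a Rocky-level cloud with the standard `openstack` client tooling installed and credentials loaded):

```shell
# Switch a node to the new RAMdisk deploy interface, so its image is
# booted from memory rather than written to local disk. The node name
# "compute-042" is hypothetical; this requires a live OpenStack cloud
# with Ironic and cannot run standalone.
openstack baremetal node set compute-042 --deploy-interface ramdisk

# Inspect firmware settings through the new BIOS interface (available
# setting names vary by hardware vendor and driver).
openstack baremetal node bios setting list compute-042
```

The point of pushing these knobs through the Ironic API is that the same automation that schedules VMs can now flip deploy modes and firmware settings on physical machines without anyone touching a management console.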


1 Comment

  1. Google did not “create” the initial Linux container. OpenVZ/Virtuozzo was released and product-ized circa 2003; Proxmox, LXC, Linux-VServer were all built starting ~2005 and released in the 2008 time frame. Linux namespaces were available in 2002 and cgroups in 2007.
