OpenStack Aims Magnum At Containers
May 11, 2015 Timothy Prickett Morgan
Software containers are different from virtual machines and the hypervisors that host them, and they need a very different set of management tools to use them in large-scale enterprise, hyperscale, and cloud environments. You can’t just do a global search-and-replace of a hypervisor with Docker and expect everything to work. It is a bit more complex than that.
Perhaps more importantly, Docker and other container approaches such as OpenVZ, Rocket, and LXC are more about packaging and managing software and its runtimes, and increasingly they are being used to break down and isolate modularized chunks of applications, called microservices, so they can be upgraded, managed, and migrated around a cluster of servers individually and yet still work collectively. This ability to manage stacks of software without imposing too much virtualization overhead on system and network performance is one reason we think we could see widespread adoption of containers in the HPC and analytics spaces, where bare metal has prevailed for so long.
In the OpenStack world, this is where a relatively new project called Magnum, which is being spearheaded by OpenStack co-founder Rackspace Hosting, is playing an important role. It is sure to be a hot topic of discussion at the upcoming OpenStack Summit in Vancouver in two weeks. Adrian Otto, who is the principal architect at Rackspace and the project team leader for the Magnum container-as-a-service effort, talked to The Next Platform a bit about Magnum and how we can expect it to mature. (Otto is also the project lead for Solum, a platform cloud layer for OpenStack that Rackspace launched with Canonical, Cloudsoft, Cumulogic, Docker, eBay, and Red Hat back in October 2013.)
This was about the same time that techies inside of the OpenStack community were kicking around how they should manage containers, which in the Linux world at least are assembled atop kernel features called control groups (cgroups for short) and namespaces. Both are akin to the technology that search engine giant Google invented to isolate workloads on its own massive infrastructure, and which Google helped foster and put into the Linux kernel over the past eight years.
Initially, and not necessarily with everyone in agreement, it was suggested that the existing Nova compute controller in OpenStack should be extended to control containers in the Linux environment, explains Otto. But eventually the techies working on the issue came to the realization that containers were different enough from VMs, and their management systems different enough from hypervisors, that they warranted their own project.
“OpenStack is not opinionated about what type of virtualization or networking people choose, and we are also not opinionated about what container technology they choose.”
The result is Magnum, and what it does is interface OpenStack with the container management systems that shepherd containers on clusters. These include Docker Swarm for Docker containers and Google Kubernetes, which can also be used to control Docker containers and is adding support for the appc container format espoused by CoreOS. Magnum also has hooks into the flannel virtual networking service for containers that CoreOS has cooked up as part of its Tectonic container management system. While these container management systems are sufficient for what they do, they need a cloud controller like OpenStack wrapped around them for a number of reasons. First, companies building clouds may want to have many instances of Swarm and Kubernetes running on their clusters, providing isolation by customer (if they are a public cloud) or by workload or business unit (if they are an enterprise). Something has to manage the resource allocation for each individual Swarm or Kubernetes collective.
Second, Magnum acts as an interface that masks the very different management styles of different container tools. “OpenStack is not opinionated about what type of virtualization or networking people choose, and we are also not opinionated about what container technology they choose,” says Otto. And that means if someone wants to plug in container management systems that support LXC, OpenVZ, rkt, or other container formats and runtimes, this will be possible because of the pluggable architecture of Magnum.
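The pluggable architecture Otto describes can be sketched as a driver registry behind one common interface. This is an illustrative sketch only, with hypothetical class and function names, not Magnum's actual source code:

```python
# Illustrative sketch of a pluggable COE (container orchestration engine)
# architecture; all names here are hypothetical, not Magnum's real classes.

class COEDriver:
    """Common interface every container-engine driver must implement."""
    def create_cluster(self, name, node_count):
        raise NotImplementedError

class SwarmDriver(COEDriver):
    def create_cluster(self, name, node_count):
        return f"swarm cluster '{name}' with {node_count} nodes"

class KubernetesDriver(COEDriver):
    def create_cluster(self, name, node_count):
        return f"kubernetes cluster '{name}' with {node_count} nodes"

# Registry of drivers keyed by COE name. Supporting LXC, OpenVZ, or rkt
# would mean registering one more driver, not changing any callers.
DRIVERS = {
    "swarm": SwarmDriver(),
    "kubernetes": KubernetesDriver(),
}

def provision(coe, name, node_count):
    """Callers name a COE; the registry hides which engine does the work."""
    return DRIVERS[coe].create_cluster(name, node_count)
```

The point of the pattern is that the caller never branches on the container technology, which is how Magnum can stay "not opinionated" about it.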
Magnum support will not be limited to Linux, either, even though the current crop of containers leverage cgroups and namespaces in one form or another, according to Otto. Microsoft, as we previously reported, is bringing two different types of containers – Windows Server Containers and Hyper-V Containers – to Windows Server 2016 (formerly known as Windows Server 10), and both can be managed by the Docker Engine. And it stands to reason that Magnum will eventually be able to wrap around Docker or Microsoft’s own container management tools to manage the mix of container and hypervisor technologies from Microsoft from within an OpenStack cloud.
“I think it is going to take a number of years before people are really using this properly, but I think it is going to be really important,” Otto says of the Docker-friendly version of Windows Server that will be equipped with the Windows analogs to cgroups and namespaces.
The idea of driving different kinds of virtualization from within a single cloud controller is not new. Rackspace itself has two different drivers for the Nova compute controller: one that spins up a Xen virtual machine for its public cloud and another that spins up bare metal instances using the Ironic feature of OpenStack that is ready for primetime with the “Kilo” release of OpenStack, also known as the 2015.1.0 release, that came out last week. Similarly, Rackspace has a different host aggregate driver for creating Windows and Linux machines because the way that these two types of servers are stacked on its clusters is different.
“Magnum asks Heat, which is the OpenStack orchestration service, for a compute instance and it does not care what type of compute instance is provided. It might come from Nova as a bare metal server, it might be a Nova virtual machine running on Xen or KVM, or it might be running inside of another container so I can have my container operating environment, or COE, running inside of a container. This might sound confusing, but there are really good reasons why you might want to do that. So we do not care what virt driver the underlying cloud uses.”
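The flow Otto describes can be sketched in a few lines: Magnum asks the orchestration layer for "a compute instance" and only ever inspects generic fields, never the backing driver. Everything below is a hypothetical stand-in, not Heat's or Magnum's real API:

```python
# Hypothetical sketch: Magnum requests a compute instance from an
# orchestrator and stays agnostic about whether it is bare metal, a VM,
# or a container. heat_stub stands in for the real Heat service.

def request_instance(orchestrator, flavor):
    # Magnum's side: it specifies what it needs, not how it is backed.
    return orchestrator(flavor)

def heat_stub(flavor):
    # Orchestrator's side: it picks some backing technology; the caller
    # never has to know which virt driver the underlying cloud uses.
    backing = {"small": "kvm-vm", "metal": "ironic-baremetal"}.get(flavor, "container")
    return {"flavor": flavor, "backed_by": backing, "status": "CREATE_COMPLETE"}

instance = request_instance(heat_stub, "small")
print(instance["status"])  # prints "CREATE_COMPLETE"
```

The "backed_by" field exists only inside the stub; the caller deliberately never reads it, which is the whole point of the indirection.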
At the moment, Rackspace is running an internal preview of a container service built on the libvirt-lxc driver that was part of the prior “Havana” release of OpenStack. As the name suggests, this driver for Magnum was able to reach in and control LXC containers, and the product could be available on the Rackspace Cloud later this year.
Looking ahead, Otto says that Rackspace has to figure out how to hook containers into virtual networks, and while no decisions have been made, the idea would be to take an approach similar to the one used with virtual machines: use virtual switches to link containers to each other without adding another layer of management on top of them. The big issue to be decided at the summit is how the Neutron virtual networking feature for virtual machines will integrate with virtual networking for containers; in many cases, companies will run containers inside of VMs to provide better security and resource isolation than can be achieved with containers alone.
Magnum is still a young project in the OpenStack collective, having accepted its first commits only in November 2014 and launched its first release at the end of March. Otto says that Magnum is at about the same level of sophistication that the Nova controller was back in the early days of the OpenStack project, but given all the work going into it now, and the experience that OpenStackers have with evolving Nova, Magnum could be production-grade by the “Liberty” release of OpenStack this October. It all depends on what directions the community decides to take at the summit in a week and how long it takes to code up what they decide upon.