Locking Down Docker To Open Up Enterprise Adoption
February 10, 2017 Timothy Prickett Morgan
It happens time and time again with any new technology. Coders create this new thing, it gets deployed as an experiment and, if it is an open source project, shared with the world. As its utility is realized, adoption suddenly spikes with the do-it-yourself crowd that is eager to solve a particular problem. And then, as more mainstream enterprises take an interest, the talk turns to security.
It’s like being told to grow up by a grownup, to eat your vegetables. In fact, it isn’t like that at all. It is precisely that, and it is healthy for any technology that seeks widespread adoption that hopes to shape the IT landscape.
This is precisely what has been happening to Docker containers in the past several years as the technology moves from development by hyperscalers to enthusiastic trials by those on the bleeding edge to real-world deployments among large enterprises that like the leading edge and a little less risk. Docker, the software company steering its eponymous container runtime and the management tools that wrap around it, has a more ambitious goal than making containers as secure as bare metal servers or virtual machines with hypervisors. Nathan McCauley, director of security at Docker, tells The Next Platform that the goal is to make a containerized platform based on Docker the absolutely safest way to create and deploy application software.
“I don’t think you can ever say that security is done, but if you take Tony Stark’s Iron Man suit as an analogy, suits can always get better and you can always add more features,” says McCauley. “For Docker, there are plenty of security features that we can add, and we are sequencing them so we can build on features as we go. If you look at the past two years of security enhancements, a lot of it has been building up to make applications safer. From my point of view internally, I feel like we are just getting started in terms of what is possible. But with that said, I think container-based deployments are more secure than basically any other way you can deploy applications today. So we are already ahead of alternatives, but because containers so fundamentally change the application paradigm, where they move the unit of deployment down to the application, you have tons of opportunities that can improve security.”
At this point, as happened with virtual machines and their hypervisors a decade ago, financial services firms, government agencies, healthcare providers, and other highly regulated industries that deal with sensitive information are demanding that newly developed applications be deployed in containers, and in some cases legacy applications are being retrofitted for containerized environments. “These organizations see containers as a way to create a secure software supply chain across hybrid and multi-cloud environments,” says David Messina, senior vice president of marketing at Docker. “We are at the point that security is becoming one of the biggest benefits of Docker adoption, and enterprises are validating that.”
This week, Docker updated its container stack – including the runtime, its Docker Compose container creation tool, and the Docker Datacenter management tool – to gather up and manage the “secrets” that applications carry as they run. Just as we all have passwords to access our devices and the applications that run on them or through them, applications have secret information that they need to keep with them to perform their functions. This includes things like passwords, encryption keys, API keys, authentication certificates, tokens for communicating with third party services, and so on. All applications have these secrets, and managing them centrally and securely has been the feature most requested by enterprises, according to McCauley.
For a container platform to function properly, the first thing Docker had to recognize is that it is a highly dynamic environment: the secret information that applications need has to move with them as they flit around a cluster, and come and go as they flicker into and out of existence. This stands in stark contrast to most virtual machine environments, where a VM is fairly static – you set it and forget it. And even when the VM moves, these secrets are stored in the operating system inside the VM that actually runs the application, not in the VM itself. (We could argue about how insecure this approach is, but bare metal with a single operating system is no different in concept.)
It is a bit tricky to add secrets management to the container environment, says McCauley. For one thing, Docker wanted this secrets management system to be the same on Windows Server and Linux environments, and the same on private clouds and public clouds. If the security management software cannot span these environments consistently and seamlessly, like Docker containers themselves, then it really is not a solution at all.
Up until now, people have embedded these secrets in the application source code, which is not a wise thing to do, according to McCauley, but it was practical as a temporary fix. In other cases, people have bolted secrets management onto their container environments with tools such as Chef, Salt, or Vault, but this approach is not native to the Docker stack. And while the Kubernetes container controller developed by Google and the open source community does some secrets management, McCauley says that every application in a Kubernetes environment can see every secret for every application – something he calls “open secrets,” and that is not a compliment.
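The anti-pattern McCauley describes, and the fix, can be sketched in a few lines. This is an illustration, not Docker code: the variable name `DB_PASSWORD` and the function are hypothetical, and reading from an environment variable is just one simple way to move a credential out of the source tree.

```python
import os

# Anti-pattern: a credential baked into the source code, and therefore
# into version control and every image built from that code.
DB_PASSWORD = "s3cret"  # hard-coded -- the practice McCauley warns against


def get_db_password():
    """Fetch the credential from outside the code at runtime.

    The DB_PASSWORD variable name is illustrative, not a Docker
    convention; the point is that the secret lives outside the
    application source and can be rotated without a rebuild.
    """
    password = os.environ.get("DB_PASSWORD")
    if password is None:
        raise RuntimeError("DB_PASSWORD is not set")
    return password
```

Environment variables are themselves a weak hiding place (they leak into logs and child processes), which is part of why a platform-native secrets store is attractive.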
With the new Docker stack, these secrets are stored in the Docker runtime and are only ever held in main memory on the system. They are never written to disk or flash storage, but they are exposed through a file system that runs in main memory, giving legacy applications and brand new code the same familiar access method. Moreover, with the Docker secrets management system, an application can see only the secrets it is supposed to access, and no others. The data comprising the secrets is encrypted with keys and transmitted to applications on each server node from a central repository.
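From the application's side, that in-memory file system looks like ordinary files: Docker presents each secret granted to a service as a file under `/run/secrets/<name>`, backed by a tmpfs mount, so plain `open()`/`read()` calls work for old and new code alike. A minimal sketch, with the secret name illustrative and a `base` parameter added only so the function can be exercised outside a container:

```python
from pathlib import Path


def read_secret(name, base="/run/secrets"):
    """Read a Docker secret the way a containerized application would.

    Docker exposes each secret granted to a service as a file at
    /run/secrets/<name> on an in-memory (tmpfs) mount, so the contents
    never touch disk inside the container. The base parameter exists
    only to make this sketch testable outside a container.
    """
    secret_file = Path(base) / name
    return secret_file.read_text().strip()


# Inside a container that has been granted a secret named
# "db_password", an application would simply call:
#   password = read_secret("db_password")
```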
The Docker Compose container creation tool is used to manage the secrets, which are stored in a single distributed store embedded in the Swarm mode of the Docker runtime daemon on each server. Programmers can create fake secrets in Docker Compose to develop and test applications, and they can also keep secrets off to the side in a different location on the network. The secrets management functionality complements the role-based access control and container deployment policy management functions of Docker Datacenter in locking down the application as well as the containers themselves. All of this functionality comes for free to those who have support contracts for Docker Compose and Docker Datacenter. If you want a really gorpy explanation of how the container secrets management works, this is a good place to start.
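The Compose side of this can be sketched in a few lines of a version 3.1 Compose file. The service and secret names here are illustrative: the `file:` form points at a local file (handy for the fake development secrets mentioned above), while `external: true` tells Compose the real secret already lives in the Swarm's store.

```yaml
version: "3.1"

services:
  web:
    image: nginx            # illustrative service
    secrets:
      - db_password         # mounted at /run/secrets/db_password

secrets:
  db_password:
    file: ./dev_password.txt   # fake secret for development and test
  # In production, reference the secret already in the Swarm store:
  # db_password:
  #   external: true
```

Only services that list a secret under their `secrets:` key get it mounted, which is how the per-application visibility described above is enforced.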
The one thing we wanted to know is how Docker feels, from a security standpoint, about running Docker containers on top of virtual machine environments. After all, when Google created Cloud Platform, it laid down a Linux container, then loaded a KVM hypervisor within it, then spun up VMs atop that, and finally ran Kubernetes to orchestrate containers running atop those VMs. If you run Docker on AWS, Azure, or Cloud Platform, you are by default running it in a VM environment. You can do similar things with the ESXi hypervisor from VMware, the KVM hypervisor from Red Hat, and the Hyper-V hypervisor from Microsoft in conjunction with container management systems like Kubernetes. This is not something McCauley thinks is necessary or desirable, although there are sound reasons why enterprises are doing it in the interim.
“Containers work across the board,” says McCauley. “If you want to deploy in cloud environments or on bare metal, the security gains you get are strictly better no matter where you are deploying. The common thread is that Docker is the technology to add to get more security. As for deploying Docker atop VMs in a greenfield situation, I don’t think that would be a sensible option. For many organizations, they understand that the security you get from containers is quite good, and that a lot of the things you want to secure are better dealt with by having a sane, secure software supply chain. Having really good control over what is getting into your infrastructure is one of the highest leverage moves, and if you have a secure software supply chain, it is very unlikely for you to get malicious code in your infrastructure. What you really care about at this point is how you manage services and secrets and things like that.”
Interestingly, the majority of Docker implementations in the field are running on VMs today, according to Messina, largely because the VM became the deployment vehicle of choice in the last decade as servers went from physical to virtual to make them more malleable and to drive up server utilization.
While the major server virtualization environments all can run containers atop them, and so can Mesos and OpenStack cluster controllers, in the long run we think that companies will want to deploy Docker containers on bare metal environments wherever possible and will accept virtualized infrastructure only when necessary. We wonder, in fact, when the big three cloud providers – Amazon Web Services, Microsoft Azure, and Google Cloud Platform – will spin up distinct and new clouds that are exclusively bare metal and exclusively running Docker containers. That day is probably coming, and it is probably this year for one of them and the others will follow suit fast.