Docker is the new kid on the virtualization and containerization block. The technology started out in 2013 as a project of the platform-as-a-service company dotCloud, and it very quickly became the main focus of the company. While Docker builds on isolation features in the Linux kernel, it uses the kernel to create containers, not virtual machines.
These technologies don’t simply compete; they can be intermingled in interesting ways to achieve different results in managing application software and providing security and isolation for that software.
The Virtual Machine
In 1999, VMware released Workstation and the entire technology industry changed. The virtual machine (VM) has been at the center of the cloud computing explosion. The widespread use of virtualization led to amazing changes across the technology industry. The rapid adoption of VMs led to significant changes in processor architecture. Even more than that, the cloud-based platform providers of today could not exist without the virtual machine. Amazon, Digital Ocean, Linode and Joyent are just some of the cloud providers that depend on virtual machines.
A virtual machine is a completely virtualized environment that abstracts the physical hardware. A VM comes with its own BIOS, virtualized network adapters, disk storage, CPU and a complete operating system. When a VM boots, it has to go through the entire boot process, just like a normal piece of hardware. While boot times in VMs are often lower than those of machines tied directly to hardware, booting can still take anywhere from several seconds to minutes, depending on several factors.
While a virtual machine abstracts away the hardware, container abstraction happens at the operating system level. Each type of container technology has an explicitly stated purpose that limits its scope. LXC, the initial technology Docker was built on, is scoped to specific instances of Linux. Citrix’s XenApp, typically used to automate the deployment and usage of productivity office suites to remote workers, sandboxes user spaces on Windows Server for each user. Every user shares the same operating system, kernel instance, network connection and base file system, while each instance of the application runs within a separate user space. This significantly cuts back on CPU usage and the overhead associated with running multiple operating systems, because a new kernel isn’t being loaded for each user session. This is one of the major reasons why containers are often used for running specific applications: less CPU and memory usage than would be seen when using a VM. XenApp can support hundreds of users running off the same server, whereas a similar solution utilizing virtual machines can only support dozens.
Docker is a slightly different animal than XenApp. Docker was initially built on top of LXC. Similar to XenApp, each Docker container’s purpose is to run a single application. As such, the scope for a Docker container is built towards a particular application, as opposed to an entire operating system as is the case for LXC. The file system inside a Docker container is chroot’ed to provide an environment similar to a VM.
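That application-level scope is typically expressed in a Dockerfile. The following minimal sketch is illustrative only — the base image and the application it installs are assumptions, not details from this article — but it shows the pattern of one container built around one application:

```dockerfile
# Illustrative Dockerfile: one container, one application.
# The base image and application are assumptions for this sketch.
FROM ubuntu:14.04

# Install the single application this container exists to run.
RUN apt-get update && apt-get install -y nginx

# The container's entire purpose is this one process.
CMD ["nginx", "-g", "daemon off;"]
```

Everything outside that one process — the kernel, networking, the host file system — comes from the shared operating system underneath.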
Docker further incorporates a sophisticated container management solution that allows for easy scripting and automation. Given the focus on execution time for containerized applications, the ease of scripting is even more important. For developers looking for a performance breakdown between a Docker container and virtual machines, the following is a speed comparison of start and stop times for the two different technologies:
**Average Start/Stop Times**

| | Start Time | Stop Time |
|---|---|---|
| Docker Containers | < 50 ms | < 50 ms |
| Virtual Machines | 30 – 45 seconds | 5 – 10 seconds |
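Numbers like these can be roughly sanity-checked on any machine with Docker installed. The following transcript is illustrative — the image name is an assumption, and actual timings will vary by host:

```shell
# Rough, illustrative timing of container start/stop.
# Assumes Docker is installed and an nginx image has been pulled.
$ docker create --name demo nginx    # create the container once
$ time docker start demo             # starting is typically sub-second
$ time docker stop demo
$ docker rm demo                     # clean up
```

Comparing that against the time it takes a VM to complete a full boot sequence makes the gap in the table above concrete.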
There is a cautionary tale hidden in the above data: containers and virtual machines are not to be directly compared. While they can be used to perform similar functions, they often thrive in different use cases. Docker’s faster start and stop times allow it to be used in production environments in a disposable manner. While virtual machines can also be used in production environments, starting and stopping new VMs can be extremely costly.
Direct comparisons between Docker and a virtual machine from a performance perspective may not be as useful as a comparison between Docker and a virtual environment management solution such as Vagrant. While Docker excels at managing and scripting containers, Vagrant does the same for virtual machines (and in some cases, Docker, as well). Because of the major differences in start-up time discussed earlier, Vagrant is most often used to manage development environments, where slower boot times are acceptable.
**Comparison Between Vagrant And Docker**

| Vagrant | Both | Docker |
|---|---|---|
| Virtual Machine Management | Scriptable | Light Weight |
| Kernel-Based Security Separation | Text File Configurable Environments | Speedy Execution Times |
| Multiple Operating Systems | Portable | Linux Only |
| | Developer Friendly | Application-Level Scope |
| | Resource Separation | |
| | Cloud-Based Contributed Repositories | |
There is significant overlap in feature set and use cases between Vagrant and Docker. Vagrant is a virtual machine management solution and Docker is a container management solution, so some overlap is inevitable.
There is currently heated discussion within the virtualization community about the security differences between containers and VMs. Out of the box, with no alterations, a virtual machine is more secure than a container. The main reason is that the virtual machine has the advantage of hardware isolation, while containers share kernel resources and application libraries. This means that if a virtual machine is compromised or crashes, it is less likely to take other VMs down with it.
Unfortunately, the same cannot be said for Docker containers at this point, as there is no hardware isolation to speak of. Historically speaking, though, virtual machines didn’t achieve hardware-specific acceleration until Intel and AMD altered their chipsets to provide better support for the needs of VMs, with extensions such as Intel VT-x and AMD-V.
With that said, it’s very possible to make a containerized environment highly secure. As with any operating system, you can take traditional security precautions to lock down the environment inside a container. So while out of the box Docker and other container solutions may seem less secure due to the lack of hardware isolation, that need not be a permanent state of affairs.
Using both Docker (or another container solution) in combination with a virtual machine is an option. Vagrant has enabled direct integration with Docker already. However, you can always spin up your own VMs in whatever way you normally do and run containers from inside the VM. By combining the two different technologies you can get the benefits of both: the security of the virtual machine with the execution speed of containers.
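Vagrant’s Docker integration takes the form of a Docker provider. A minimal Vagrantfile along these lines — the image name and port mapping are illustrative assumptions — lets Vagrant manage a container directly, and on hosts without native Linux containers it boots a small host VM behind the scenes, which is exactly the VM-plus-container combination described above:

```ruby
# Illustrative Vagrantfile using Vagrant's Docker provider.
# On non-Linux hosts, Vagrant transparently boots a host VM,
# combining VM isolation with container start-up speed.
Vagrant.configure("2") do |config|
  config.vm.provider "docker" do |d|
    d.image = "nginx"        # container image to run (assumed)
    d.ports = ["8080:80"]    # forward host port 8080 to container port 80
  end
end
```

Running `vagrant up` against a file like this then manages the container through the same workflow used for virtual machines.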
As with everything, knowing the capabilities of the tools in your toolbox is the most important thing. In the case of containers versus virtual machines, there is no reason to choose just one: you can choose both.
Kurt Collins (@timesnyc) is Director of Evangelism at Built.io. Collins began his engineering career at SGI, where he worked on debugging Irix, the company’s Unix operating system. Two years ago, Kurt co-founded The Hidden Genius Project along with eight other individuals in response to the urgent need to increase the number of black men in the technology industry. Mentoring young people interested in the technology industry is a personal passion of his. Today, he continues this work with tech startups and non-profits alike.