Microsoft’s Container Strategy Continues To Evolve

Containers have been getting a lot of attention in the enterprise over the past several years, thanks to their role as an enabling technology for greater operational agility, especially in developing customer-facing applications and getting code into production faster. The result has been a rash of new container-based services on all the major public cloud platforms, as well as container support appearing in many on-premises systems, notably Red Hat’s OpenShift PaaS.

All of this must have presented a bit of a quandary for Microsoft and its far-reaching and largely integrated Windows Server stack, because all of the action in the container ecosystem was centred on Linux. This was due to Docker building its container platform around resource isolation features already present in the Linux kernel, such as cgroups and kernel namespaces. These features were largely developed by search engine giant and hyperscaler Google in its efforts to drive up utilization with a thinner layer than standard server virtualization could offer, something it could do because it controlled its own Linux platform. Google technology has spread into the wider ecosystem this way time and again, as we have documented here at The Next Platform.

Although Microsoft is still a giant in the enterprise IT space, as almost every company of any size runs Windows servers and the related systems software stack, the company is well aware of which way the wind is blowing: corporate workloads are slowly but surely drifting to the cloud, and Linux is the only growing operating system. Windows Server itself is in decline, even though it still dominates servers in enterprise datacenters for many core workloads. And while there are many mission critical applications that are likely to be kept on-premises for various reasons, Microsoft cannot afford to be complacent lest it find itself occupying a shrinking niche in the enterprise arena.

With developer interest in containers gathering more and more momentum, this must have looked like a distinct possibility to the people steering Microsoft’s future strategy. If workloads in the cloud were increasingly Linux-based and running in containers, it was inevitable that some enterprises would start to question whether their on-premises workloads should follow suit. What to do?

What Microsoft did was turn to the company then at the forefront of the container wave – Docker – and seek its help to get its container tools running on Windows. A partnership unveiled in 2014 would see Docker develop a Windows version of its Docker Engine – the runtime that creates and operates containers – while Microsoft would implement the underlying mechanisms in the operating system that Docker Engine relies on – the Windows equivalent of cgroups and namespaces.

Microsoft’s problem was that Windows was not developed with the concept of containers in mind. In a replay of what happened with virtualization in the Windows Server 2008 era, the company has had to play catch-up with rival platforms and adapt Windows to offer the same capabilities.

Containers are essentially just a way of partitioning a system into a number of sandboxed execution environments with their own resource limits, while all of them continue to share a single operating system kernel. But in Windows versions prior to Windows Server 2016, code could only run in one of two modes: User Mode and Kernel Mode. All applications and the Windows APIs run in User Mode, while the core operating system components (and some drivers) run in Kernel Mode. To enable software containers to function in the same fashion as they do on Linux, Microsoft had to introduce changes in Windows Server 2016 so that each container could effectively have its own User Mode.

There is also a special Host User Mode, which runs core Windows services and the container management functions for the Container User Modes. This is analogous to the way that Hyper-V operates, with a parent partition that runs the Windows Server operating system and a Virtual Machine Management Service to oversee all the child partitions holding guest operating systems.

With all these pieces in place, containers running on Windows Server are a reasonable parallel to their Linux cousins: a sandboxed environment with a subset of the Windows services that runs atop a shared kernel. And with a Windows-native Docker Engine available, developers can now use the same Docker command line interface and APIs as with the Linux version of the platform.
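
To give a sense of that parity, here is a minimal sketch using the Docker SDK for Python (docker-py) against a Windows host’s Docker Engine; the image reference and tag are assumptions, and the same client code would work unchanged against a Linux daemon with a Linux image.

    # Minimal sketch, assuming docker-py is installed and the local daemon is a
    # Windows Docker Engine with access to a Windows base image.
    import docker

    client = docker.from_env()  # connects to the local Docker Engine

    # Pull and run a Windows base image (image name and tag are illustrative),
    # exactly as you would run a Linux image such as alpine on a Linux host.
    output = client.containers.run(
        "mcr.microsoft.com/windows/nanoserver:1809",  # assumed image reference
        ["cmd", "/c", "echo Hello from a Windows container"],
        remove=True,
    )
    print(output.decode())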

Here is the block diagram of what the Linux Docker environment looks like:

And here is what it now looks like in Windows Server:

Things are not entirely the same between Linux and Windows Server when it comes to Docker containers. With Linux containers, it is possible to create a container image that encapsulates little more than your application and the system libraries it depends on, allowing images to be as small as a few tens of megabytes. Windows, however, has developed over the years to have a lot of inter-dependencies between its various code libraries (DLLs), with the result that it is very difficult to package only the system services that your application needs.

The result of all this is that Microsoft made available two base images for users to download and use as their starting point when working with Windows containers: Windows Server Core and Nano Server. Both are slimmed-down versions of the full set of Windows system services that cut out components considered non-essential for operating a workload inside a container, such as the user interface. Here is what Windows Server Containers look like architecturally:

There is a further complication that does not crop up in Linux: because Windows Server Containers and the underlying host share a single kernel, the container’s base image must match that of the host server, or the application may not function correctly. In other words, the version of the Windows system services in the base image must match that of the host operating system, unless you run the container in a virtual machine with Hyper-V – more on which below.
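
As a hedged illustration of that constraint, the sketch below (again with the Docker SDK for Python, and with an assumed registry path and release tags) pins the Server Core base image to a tag matching the host’s Windows release rather than relying on whatever :latest happens to resolve to.

    # Sketch only: the registry path and tag scheme are assumptions, but the point
    # stands that a process-isolated Windows Server Container should use a base
    # image built for the same Windows release as the host.
    import docker

    client = docker.from_env()

    HOST_RELEASE = "1803"  # assumed: the release the host server is running

    # Pull the Server Core base image whose tag matches the host release...
    image = client.images.pull("mcr.microsoft.com/windows/servercore", tag=HOST_RELEASE)

    # ...so the system services in the container's User Mode line up with the
    # kernel they will be sharing.
    print(image.tags)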

The base images in the latest Windows Insider preview build 17704 have a size of 3.38 GB for Windows Server Core and 232 MB for Nano Server, the latter apparently achieved by stripping out PowerShell, WMI, and other core services, although these can be added back by the user.

However, it appears that users have been finding these base images too restrictive. In June, Microsoft announced a third container base image, confusingly just called “windows,” which is currently only available to users who have signed up for the firm’s Windows Insider preview program.

In a blog post announcing the new image, Microsoft explained that a number of customers are looking into moving their legacy applications into containers to benefit from container orchestration and management technologies like Kubernetes, but not all applications can be easily containerised, in some cases because they depend on components, such as proofing support or the graphics subsystem, that are not included in Windows Server Core.

This new container base image has a size of 8.07 GB, which suggests that it includes most, if not all, of the full set of Windows services. It also stretches the Docker philosophy that containers should be lightweight and ephemeral, as required for a DevOps approach to application development. In fact, if customers are experimenting with moving legacy applications into Windows Server Containers, they are treating them more as an alternative to virtual machines, in much the same manner as Canonical’s LXD container technology for Ubuntu Linux.

Microsoft tacitly acknowledges this, telling The Next Platform: “While Server Core and Windows images are both large compared to Linux they provide customers with a great opportunity to bring their existing applications forward into containers and are significantly smaller than the typical VHD/VMDK.”

Speaking of Linux, Microsoft still had to address the issue of developers increasingly favouring Linux for cloud-native workloads, thanks to the head start Docker-style containers already had on that platform. Fortunately, a solution was already at hand.

When Microsoft started looking at containers, one of the concerns cited by enterprise customers was the often reported weaker isolation between workloads running in containers compared with virtual machines. This is because every container on a host shares the same kernel, so a vulnerability in a system call could potentially give malicious code access to the kernel, and therefore to every other container on that host.

Microsoft chose to address these security concerns by creating a mechanism whereby a container can run inside a lightweight Hyper-V virtual machine for greater isolation. Known as Hyper-V Containers, this approach creates a VM with its own Windows kernel, inside which a Windows Server Container runs. However, it differs from a standard Hyper-V virtual machine in that it is launched through Docker and contains a special Guest Compute Service that links back to the core Windows services running in the Host User Mode.

For users, a key feature of the way Microsoft implemented Hyper-V Containers is that they can run the same container images as Windows Server Containers, so the decision as to whether to run a specific application in a standard container or under Hyper-V can be made at run time rather than when the code is developed.
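
As a rough sketch of what that run-time choice looks like with the Docker SDK for Python (the image reference is an assumption), the isolation parameter below corresponds to Docker’s --isolation flag on Windows:

    # Sketch: the same image run twice, once as a process-isolated Windows Server
    # Container and once as a Hyper-V Container, decided purely at run time.
    import docker

    client = docker.from_env()

    IMAGE = "mcr.microsoft.com/windows/servercore:1803"  # assumed image reference

    # Standard Windows Server Container, sharing the host kernel.
    client.containers.run(IMAGE, ["cmd", "/c", "echo process isolation"],
                          isolation="process", remove=True)

    # The same image inside a lightweight utility VM with its own kernel.
    client.containers.run(IMAGE, ["cmd", "/c", "echo hyperv isolation"],
                          isolation="hyperv", remove=True)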

Meanwhile, the same mechanism can be used to accommodate Linux-based workloads, by provisioning the VM with a minimal Linux kernel optimised to support containers. To complicate matters, this capability is currently only supported in the Semi-Annual Channel releases of Windows Server. The first of these was Windows Server, version 1709, released in September 2017 as a rapid-cadence build of the platform aimed at specific customer scenarios, such as building with containers and microservices or evolving infrastructure with software-defined datacenter and hybrid cloud capabilities.
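
Assuming a daemon on one of those Semi-Annual Channel builds with the experimental Linux containers on Windows support enabled, the client side looks no different, as this hedged sketch with the Docker SDK for Python suggests; the image and tag are illustrative, and the daemon does the work of provisioning the minimal Linux kernel.

    # Sketch only: requires a Windows Server daemon with experimental support for
    # Linux containers; the Python client code itself is identical to the Windows
    # examples above, only the image's target OS differs.
    import docker

    client = docker.from_env()

    output = client.containers.run(
        "ubuntu:18.04",  # a Linux image, pulled and run by the Windows daemon
        ["/bin/echo", "Hello from a Linux container on Windows Server"],
        remove=True,
    )
    print(output.decode())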

But what of the cloud? Going back to 2014, when Microsoft and Docker first announced their partnership, one of the earliest fruits was Docker containers for Linux on Azure, which enabled containers to run inside Linux-based virtual machines on Microsoft’s cloud platform.

Since then, Azure has sprouted multiple different ways for customers to deploy applications using containers. The original Azure Container Service (ACS), which supported a choice of DC/OS, Docker Swarm, or Kubernetes, is now being deprecated in favour of the newer Azure Kubernetes Service (AKS), which as its name suggests is based around the Kubernetes orchestration tool. AKS only supports Linux-based workloads at present.

Both Linux and Windows containers are available on Azure Container Instances (ACI), which Microsoft describes as a serverless way to run containerised workloads that can scale automatically, without the user having to worry about managing virtual machines.
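
For a flavour of what “serverless” means here, the rough sketch below uses the azure-mgmt-containerinstance Python package to ask ACI for a Windows container group; the resource group, region, names, and image are all assumptions, and method names vary slightly between SDK versions (older releases use create_or_update rather than begin_create_or_update).

    # Rough sketch, not a definitive recipe: creates a single-container group on
    # Azure Container Instances with no VM for the user to manage.
    from azure.identity import DefaultAzureCredential
    from azure.mgmt.containerinstance import ContainerInstanceManagementClient
    from azure.mgmt.containerinstance.models import (
        Container, ContainerGroup, ResourceRequests, ResourceRequirements,
    )

    client = ContainerInstanceManagementClient(
        DefaultAzureCredential(), subscription_id="<subscription-id>"  # assumed
    )

    container = Container(
        name="hello",
        image="mcr.microsoft.com/windows/nanoserver:1809",  # assumed image reference
        resources=ResourceRequirements(
            requests=ResourceRequests(cpu=1.0, memory_in_gb=1.5)
        ),
        command=["cmd", "/c", "echo Hello from ACI"],
    )

    group = ContainerGroup(
        location="westeurope",      # assumed region
        os_type="Windows",          # a Linux group works the same way
        containers=[container],
        restart_policy="Never",
    )

    # ACI provisions the capacity on demand; there is no VM for the user to manage.
    poller = client.container_groups.begin_create_or_update(
        "my-resource-group", "hello-aci", group  # assumed resource group and name
    )
    print(poller.result().provisioning_state)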

There is also Azure Service Fabric, Microsoft’s own distributed systems platform designed for managing scalable microservices and container deployments, which can comprise Windows or Linux containers.

You have probably got the picture by now: Microsoft’s container support is a messy hotchpotch of different technologies and services that add up to the ability to run both Windows and Linux workloads in containers, in Azure and on-premises on Windows Server.

However, there seems to be a difference in the way that container support is developing on the two platforms. On Azure, Microsoft is pursuing a Kubernetes-based service that enables customers to run Linux workloads, in common with all the other major clouds, while also supporting Windows Containers on other services. On Windows Server, Microsoft appears to be zeroing in on customers who want to use containers as an alternative to virtual machines, as a pathway for supporting legacy workloads. Whichever way you want to use containers, it looks like Microsoft aims to have it covered.
