Linux has gradually grown in importance along with the Internet, and now with the hyperscalers that define the next generation of experience on that global network. Most of the software running at the hyperscalers – with the exception of Microsoft, of course – is built upon Linux and other open source technologies. In turn, this means that Linux and open source have become more important in the enterprise arena, as trends such as cloud computing and large-scale data analytics drive the need for similar technologies in the corporate datacenter.
Adapting the collection of open source packages that comprise a typical Linux build for enterprise consumption has led to carefully curated distributions that emphasize reliability and stability, backed by paid technical support services and maintenance updates. These are typified by Red Hat Enterprise Linux (RHEL) and SUSE Linux Enterprise Server (SLES), distributions with long product lifecycles of ten years and thirteen years, respectively.
The third major enterprise Linux, Canonical’s Ubuntu Server, follows a different path. New versions are developed and released on a six-month cycle, roughly in synchrony with the release cycle of the OpenStack cloud framework that has been included with server builds of Ubuntu for several years now. Every two years, the April release of Ubuntu is a long-term support (LTS) version that is provided with updates for five years. The latest of these, 18.04 LTS, was delivered at the end of April.
This combination means that Ubuntu can be quicker to incorporate new features and technologies than many rival Linux distributions, while still offering the long-term stability that enterprise users need. In addition, while Ubuntu is typically backed by community support, enterprise customers can get paid Ubuntu Advantage technical support services from Canonical if required.
Ubuntu can thus be deployed and run entirely for free if you choose, or with paid technical support if you need this for production workloads. This contrasts with the other enterprise Linux distributions, which are based on free-to-download, community-supported distributions (Fedora in the case of RHEL and openSUSE in the case of SLES), but which are themselves only available to customers with paid technical support subscriptions.
This licensing model may be one reason why Ubuntu is widely used by cloud providers and other large-scale infrastructure operators. Canonical claims that the majority of workloads across all the major cloud providers are running on Ubuntu, whether that is Amazon Web Services, Microsoft Azure, Google Cloud, IBM Cloud, or Oracle Cloud.
Perhaps because of this, Canonical appears to see its future in supporting the cloud-native approaches that underpin the new multi-cloud environment that enterprises find themselves dealing with. Most organizations are already operating applications and services across more than one public cloud platform, and may also have one or more private clouds running in their own datacenters; anything that makes it simpler to deploy and operate applications across this mixed estate is likely to be seized upon with enthusiasm.
One example is Kubernetes, the orchestration tool for managing containerised workloads and the clusters they run on. This has rapidly become the tool of choice for the task, and all of the major cloud providers now offer a Kubernetes-driven container service, such as Google Kubernetes Engine (GKE), Azure Kubernetes Service (AKS), and Amazon Elastic Container Service for Kubernetes (Amazon EKS).
With Kubernetes everywhere on the public cloud and many developers also using it to build and deploy distributed applications on-premises, Kubernetes and its APIs are now being pushed as the layer in the stack that can deliver portability across all these platforms.
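To make that portability argument concrete, here is a minimal sketch of a Kubernetes Deployment manifest, built as a plain Python dictionary (the names and image below are illustrative, not from the article). The same manifest, serialised to YAML, can be applied unchanged with `kubectl apply -f` on GKE, AKS, EKS, or an on-premises cluster – that is the sense in which the Kubernetes API acts as a portability layer.

```python
import json

# A minimal Kubernetes Deployment: three replicas of a stateless web
# container. Nothing in this spec is cloud-specific, which is why the
# same description works across providers.
deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "web"},
    "spec": {
        "replicas": 3,
        "selector": {"matchLabels": {"app": "web"}},
        "template": {
            "metadata": {"labels": {"app": "web"}},
            "spec": {
                "containers": [
                    {
                        "name": "web",
                        "image": "nginx:1.25",
                        "ports": [{"containerPort": 80}],
                    }
                ]
            },
        },
    },
}

# Serialise for inspection; a real deployment would hand this (as YAML
# or JSON) to `kubectl apply -f` or the Kubernetes API server.
manifest = json.dumps(deployment, indent=2)
```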
Canonical offers its own distribution of Kubernetes, a pure ‘upstream’ version that is kept in alignment with the version in Google’s GKE and receives regular security updates, and which the firm supports at no extra charge for customers who already have paid support for Ubuntu. Offering a pure version of Kubernetes gives customers greater operational flexibility, Canonical contends, and lets them stay in synchrony with what the public cloud providers are running.
This contrasts with the approach taken by Red Hat, which has integrated Kubernetes into its OpenShift PaaS, hiding away much of the complexity of managing containers, but also taking away some of the choices from developers, according to Canonical.
VMware has a similar offering in the shape of Pivotal Container Service (PKS), which combines Kubernetes with BOSH, an open source tool that adds deployment and life-cycle management services. BOSH includes a Cloud Provider Interface (CPI) that can be configured to use different platforms as the infrastructure layer for PKS, with vSphere and Google Cloud Platform currently supported.
“Both VMware and Red Hat essentially play a lock-in game and say it’s our way or the highway,” Stephan Fabel, Canonical’s director of product management, tells The Next Platform. “If you have a Red Hat cloud you have to use RHEL, and you have to run Red Hat OpenStack because if you don’t it’s a problem. Then you have to use OpenShift, because if you don’t use OpenShift, it’s also a problem. So you get pushed really hard on this vertical stack where there are many different hooks between those layers where they’re trying to steer you one particular way, and that might sound really good to the boardroom, but developers don’t want to be forced into a specific paradigm.”
The other factor is cost, of course, and Canonical claims that the combination of Ubuntu and Kubernetes can meet the needs of organizations for one third the cost of RHEL, the Red Hat OpenStack Platform, and the OpenShift PaaS. This combination doesn’t provide the full capabilities of a PaaS, but other open source tools can fill in the gaps.
Meanwhile, a recent report published by 451 Research found that Canonical’s BootStack managed private cloud platform could be operated at a lower cost per virtual machine per month than 25 of the public cloud providers included in 451’s Cloud Price Index (CPI) comparisons. BootStack sees Canonical engineers deploy an OpenStack cloud for customers at the location of their choice, then operate it as a fully managed service. The report, Busting the Myth of Private Cloud Economics, was commissioned from 451 by Canonical.
For organizations operating HPC infrastructure, Canonical has a handful of open source tools it has developed to ease the deployment of complex environments at scale, and to repeat those deployments whenever necessary. These tools, Juju and MaaS, were developed for quickly standing up private clouds, but they prove equally useful in an HPC environment where you may be running a complex simulation one day and an application using big data analytics tools on a Hadoop framework the next.
MaaS, or Metal-as-a-Service, is intended to deliver cloud-like ease of provisioning to bare metal server hardware. It can discover hardware resources and automate tasks such as upgrading firmware and installing an operating system, using well-known tools like PXE, TFTP, and IPMI.
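As a rough sketch of what MaaS automates, that provisioning flow can be thought of as each machine moving through a fixed lifecycle, from discovery through commissioning to deployment. The states below mirror the machine states MaaS itself reports, but the code is a toy model for illustration, not the MaaS API.

```python
# Toy model of the bare-metal provisioning lifecycle that MaaS automates.
# "new" = discovered via PXE boot; "commissioning" = hardware inventoried
# and firmware checked; "ready" = available in the pool; "allocated" =
# claimed by a user; "deployed" = operating system installed and running.
LIFECYCLE = ["new", "commissioning", "ready", "allocated", "deployed"]

def advance(state: str) -> str:
    """Move a machine one step forward through the provisioning lifecycle."""
    i = LIFECYCLE.index(state)
    if i == len(LIFECYCLE) - 1:
        return state  # already deployed; nothing further to do
    return LIFECYCLE[i + 1]
```

The point of the automation is that no human walks a machine through these steps: MaaS drives the transitions itself using PXE, TFTP, and IPMI.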
Juju was developed to handle the deployment and configuration of the rest of the software stack, including the application itself and any other applications and services it depends on. Juju provides tools to create a model of the relationships between these components, so it can apply the necessary configuration management scripts (possibly written using tools such as Puppet or Chef) to deploy it all.
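Juju expresses such a model declaratively as a “bundle” of applications plus the relations between them. Real bundles are written in YAML; the dictionary below mirrors that structure with illustrative charm names, and a small helper shows how relations can be queried from the model.

```python
# A Juju-style model of a small Hadoop deployment: which applications
# exist, how many units of each, and which endpoints are related.
# Charm names and endpoints here are illustrative.
bundle = {
    "applications": {
        "hadoop-namenode": {"charm": "hadoop-namenode", "num_units": 1},
        "hadoop-slave": {"charm": "hadoop-slave", "num_units": 3},
    },
    "relations": [
        # Each relation joins two application endpoints, "app:interface".
        ["hadoop-slave:namenode", "hadoop-namenode:datanode"],
    ],
}

def related(model: dict, app: str) -> list:
    """Return the endpoints a given application is related to in the model."""
    out = []
    for a, b in model["relations"]:
        if a.startswith(app + ":"):
            out.append(b)
        elif b.startswith(app + ":"):
            out.append(a)
    return out
```

It is this relationship graph that lets Juju work out which configuration steps to run, and in what order, when deploying or scaling the stack.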
Canonical claims that Juju and MaaS are key factors in its ability to offer its BootStack managed cloud service, mentioned above, with lower running costs than many public clouds, as it takes just two engineers and two weeks to service a customer’s request.
But MaaS and Juju have been available for a while. What Canonical has been working on recently is enabling support for GPU accelerator hardware in Ubuntu, and exposing those accelerators to HPC workloads such as machine learning via OpenStack and Kubernetes.
“At the KVM layer, we are using the PCI pass-through. There are efforts underway to create virtual GPU abstraction layers at the OpenStack layer, and once they are baked in, we will also offer those. But once you are at the VM layer and you have exposed the device, we provide the Nvidia drivers, and we roll that out in a completely automated fashion, so when you install a Kubernetes cluster on an infrastructure that contains GPUs, it will auto-detect them and enable them,” Fabel says.
At the Kubernetes layer, Canonical deploys a Docker runtime from Nvidia, exposing the GPUs through standard APIs. Using Juju, complex application frameworks such as Kubeflow can then be rolled out in a fully automated fashion. Kubeflow packages Google’s TensorFlow framework for building machine learning models, along with supporting tools, to run in containers on Kubernetes.
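At the workload level, those auto-detected GPUs surface as a schedulable Kubernetes resource: a container asks for one by requesting the extended resource `nvidia.com/gpu` in its spec, and the scheduler places it on a node that has a free device. A minimal sketch (image and names illustrative):

```python
# A pod spec requesting a single GPU via the "nvidia.com/gpu" extended
# resource, which the Nvidia device plugin advertises on GPU nodes.
gpu_pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "train"},
    "spec": {
        "containers": [
            {
                "name": "train",
                "image": "tensorflow/tensorflow:latest-gpu",
                "resources": {
                    # GPUs are requested in limits; they cannot be
                    # fractionally shared the way CPU millicores can.
                    "limits": {"nvidia.com/gpu": 1}
                },
            }
        ]
    },
}
```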
According to Canonical, this matters because a single high-end GPU costs anywhere from many thousands of dollars to tens of thousands of dollars, and any organization needs to make sure it gets maximum utilization out of them by minimizing the time taken to stand up applications and their supporting software frameworks.
Overall, this makes Canonical a somewhat different proposition from the other enterprise Linux firms, leaning towards the cloud, cutting-edge technology, and deploying applications and services at large scale, while Red Hat and SUSE have long focused on providing a stable and reliable platform for companies whose priority is operating more traditional mission-critical enterprise applications.