All kinds of convergence are going on in infrastructure these days, with the mashing up of servers and storage, or servers and networking, or sometimes all three at once. This convergence is not just occurring at the system hardware or basic system software level. It is also happening up and down the software stack, with a lot of codebases branching out from various starting points and building platforms of one kind or another.
Some platforms stay down at the server hardware level – think of the Cisco Systems UCS blade server, which mashes up servers and networking – while others ride one level up in the infrastructure layer, like those of VMware or Nutanix in the enterprise, the various OpenStack virtual clouds, or the Mesosphere and Kubernetes bare metal clouds. Still others include various middleware and other services to create application platforms – we are thinking of Cloud Foundry or the various Hadoop distributors such as Cloudera, MapR Technologies, and Hortonworks. You can build a Spark or SAP HANA in-memory platform, too. The list goes on and on, and these platforms are always embiggening themselves, adding new features and functions inside the existing codebase and then adding new layers above and below to expand vertically up and down the stack.
It is in this sense that Puppet, which started out in 2005 as a way to programmatically control the configuration management of servers and systems software and is one of the darlings of the DevOps movement, is in the process of becoming a platform in its own right, not just a tool that links application developers and system operators into a single workflow. (That is a lot harder than it sounds, and not just for technical reasons, but for emotional and political ones.) Ansible, one of Puppet’s key competitors, was acquired by Red Hat and is now being embedded into its platform, and Chef Software is carving out its own slice of the software configuration and deployment automation space.
With over 1,100 customers buying its Puppet Enterprise distribution – including 75 of the Fortune 100, which the company says are “significant deployments” and which is a very respectable ratio considering the diversity of configuration management tools out there – and with more than 40,000 customers using the open source versions of Puppet (which don’t have many of the enterprise bells and whistles), Puppet is clearly a success story. But there are hundreds of thousands of companies that could be using Puppet and aren’t, even though company founder and former CEO Luke Kanies has been on a mission to make DevOps automation mainstream.
As we have pointed out before, just because something is inevitable does not mean it can move fast. Most enterprises are inherently conservative, and they have a measured pace that often frustrates technologists who are not only unafraid of change but seem to thrive on it. Kanies himself was feeling a bit of this burn, and a few months after we talked about the challenge of getting the next 50,000 customers to use automation tools, Kanies stepped down as the company’s CEO and called on Sanjay Mirchandani, who had joined as the company’s first president and is now its CEO, to pick up the banner and execute on the Puppet strategy. At the time, in the summer of 2016, Puppet had about 500 employees and an annual run rate approaching $100 million, and it had raised $86 million in venture funding. Since then, it has added over 10,000 open source customers, the open source community around Puppet has grown by a factor of 2.5, and the number of community-contributed Puppet modules has grown to more than 5,000, comprising 7.5 million lines of code.
In the interim, Puppet has spent some time identifying the issues with enterprise adoption of automation, and it has made some acquisitions and done some development to build more of a platform and make this transition easier. Omri Gazitt, chief product officer at Puppet, walked us through the issues that enterprises are facing. It boils down to these three items:
- It is hard to know what you have
- It is hard to scale broadly and deeply, and
- The Dev and Ops parts of the infrastructure are still siloed
To that end, Puppet set out to create an asset management tool, called Puppet Discovery, that can roam the networks of machines and software and actually quantify and qualify what exactly is in the IT department that keeps the business humming. This tool, which went into tech preview back in October 2017, is shipping on May 8. It can see the servers, virtual machines, containers, and network devices out there, whether they are on premises or in a public cloud, and all of them can be equipped with Puppet agents and therefore be put under the control of Puppet Enterprise and execute Puppet tasks. Importantly, the discovery bit is done without agents, through APIs, SSH, or WinRM, and remote task execution (more on that in a second) is done through SSH and WinRM.
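To make that last step concrete, here is a minimal sketch of what bringing a discovered machine under Puppet Enterprise looks like once an agent has been installed on it and its certificate signed; the node name and the profile::base class are placeholders we made up for illustration, while mysql::server comes from the real puppetlabs-mysql module.

```puppet
# Hypothetical site.pp entry: once a discovered machine has a Puppet agent
# and is known to the master, it is classified like any other node and
# converged to its declared state on every agent run.
node 'db01.example.com' {      # placeholder hostname
  include profile::base        # hypothetical site-specific baseline profile
  include mysql::server        # from the puppetlabs-mysql module
}
```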
As far as coping with the scale issue goes, Gazitt says, the right tool for the job is Puppet Enterprise, which has been given a task-oriented metaphor to go along with the existing model-driven approach that Puppet was founded on, making it more broadly applicable.
“A lot of our customers were successful with the model-driven approach that Puppet has always had, but they had a lot of one-off, task-oriented workflows they needed to put under control of Puppet and these were really awkward to try to do with Puppet as it was originally designed,” Gazitt tells The Next Platform. “Before Puppet Tasks was added, you could use Puppet to, for example, deploy a MySQL database or an Apache web server, but now you can kick off backups and restores, or restart an Apache web server or a MySQL database. This is a complete set of functionality to automate.”
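To illustrate the distinction Gazitt is drawing, here is a minimal sketch in the Puppet language. The classes come from the puppetlabs-mysql and puppetlabs-apache modules, the node name and password are placeholders, and the exact task invocation may vary by Puppet Enterprise version, so treat it as illustrative rather than definitive.

```puppet
# Model-driven: declare the desired end state and let the agent enforce it
# continuously on every run.
class { 'mysql::server':
  root_password => 'changeme',   # placeholder credential
}
class { 'apache':
  default_vhost => true,
}

# Task-oriented: a one-off, imperative action run on demand rather than a
# state that is continually enforced. With Puppet Tasks, an ad hoc restart
# can be kicked off from the command line, for example:
#
#   puppet task run service action=restart name=httpd --nodes web01.example.com
#
# The same mechanism covers jobs like backups and restores, which never fit
# the declarative model cleanly.
```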
Puppet Enterprise 2018.1, which ships on May 1, will also have models that allow for the management of Cisco Systems switches and routers that use its IOS operating system as well as those that use its ACI software-defined networking APIs, and Gazitt says that generally the company will be providing a more complete portfolio of models for network devices going forward. Puppet has also worked with Microsoft to do a better job of managing the diverse assets of Windows Server and the Azure public cloud.
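As a rough idea of what model-driven network management looks like, here is a sketch using the netdev-style resource types associated with Puppet's Cisco IOS support; the attribute names are from memory and may differ by module version, and the VLAN and interface values are made up for illustration.

```puppet
# Declare a VLAN and an interface on a Cisco IOS switch as Puppet resources,
# so the device converges to this state just like a server would.
network_vlan { '100':
  vlan_name => 'web_tier',
  shutdown  => false,
}

network_interface { 'GigabitEthernet0/1':
  enable      => true,
  description => 'Uplink to web tier',
}
```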
As for scale, the Puppet tool now scales to more than 100,000 devices under management, which is fairly large for any enterprise, and it also now includes role-based access control so different levels of developers and operators can have differentiated access to Puppet functions and data.
That leaves application delivery and the automation of Puppet models and tasks to be made consistent with the other continuous integration/continuous delivery (CI/CD) practices that are evolving out there in SoftwareLand. The fact that the container is fast becoming a unit of software packaging and deployment is helping bridge the silos of Dev and Ops, but then again, we said that ten years ago about hypervisors and virtual machines (with different language), and it never quite happened because the VM was never a standard, but rather was proprietary to a given hypervisor. This time around, the Docker runtime is the hypervisor and the Docker container is the VM, and no one is trying to do anything differently because we have all seen this movie before.
Last September, Puppet acquired Distelli, a maker of continuous delivery software tools based in Seattle, which shares much the same software culture as Portland, Oregon, where Puppet grew up. The company’s product, called Pipelines, is being outfitted with its own container registry, known as Project Europa, which is distinct from Quay from Red Hat (by virtue of its CoreOS acquisition) or Docker Hub from Docker. This is a kind of meta-registry for containers that can span many public clouds and on-premises infrastructure. Importantly, Pipelines is being turned back on Puppet itself, and can now be used to automate the delivery of Puppet models and tasks as well as the deployment of production applications in the enterprise.
With this, the automaters are being automated. And that, of course, is always the last bit and the hardest part.
This Pipelines functionality will be embedded within Puppet Enterprise as it relates to Puppet models and tasks, which will give DevOps people a way to play with Pipelines and see how it works. The full-on Pipelines tool used for production application deployment is priced separately.