Computing used to be far away.
It was accessed via remote command terminals, through time-sliced services, and it was a pretty miserable experience. During the personal computing revolution, computing once again became local. It would fit under your desk, or in a small dedicated “computer room”. You could touch it. It was, once more, a happy and contented time for computer users. The computer was personal again. There was a clue in the name.
However, as complexity grew and as networks improved, computing was effectively taken away again and placed in cold dark rooms once more far, far away from the users. It looked a little bit like the old times. Today, not everyone remains impressed with this development. If you look at today’s solutions as a spectrum, from say a “pizza box” in your own data center at one end, to a remote HTTP GET/POST API on a public cloud SaaS offering at the other, there’s an obvious “missing middle”.
People want their computers back. They miss the middle ground. However, how do you make your remote rented computer feel as if you “own” it, while also, importantly, still enabling you to scale? It isn’t easy, and we have been trying to solve this problem for a very long time.
To look a little closer at this “missing middle” part of the computing spectrum, The Next Platform sat down with Zachary Smith, co-founder and CEO of Packet. Packet’s mission is bold: “Build a Better Internet”, they say. What they are effectively doing is making a play in the very middle of the computing spectrum. Boiling off the marketing layer, Packet aims at the most basic level to provide “hardware as a service”. They essentially supply computers that are physically far away but that you can still “touch”. They do this by providing a low cost, scalable orchestration framework that you can reach into, and in extreme cases even update portions of the firmware on your “single tenant” server host. They are attempting to be an elastic cloud infrastructure without what they term “The Virtualization Tax”.
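To make “reaching into” such an orchestration framework a little more concrete, here is a minimal sketch of a client for a hypothetical bare-metal provisioning API. To be clear, the endpoint path, field names and token handling below are our own illustrative inventions, not Packet’s actual API.

```python
# Sketch of a client for a *hypothetical* bare-metal "hardware as a
# service" provisioning API. Endpoint and fields are illustrative only.
import json
import urllib.request


def provision_server(base_url: str, token: str, plan: str,
                     facility: str, os_image: str) -> dict:
    """Request a single-tenant machine from a provisioning endpoint."""
    payload = json.dumps({
        "plan": plan,                  # hardware configuration name
        "facility": facility,          # data center location code
        "operating_system": os_image,  # image to install on the bare metal
    }).encode()
    req = urllib.request.Request(
        f"{base_url}/devices",         # hypothetical endpoint path
        data=payload,
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)         # provider's description of the device
```

The point of the sketch is the shape of the workflow: a single authenticated call stands in for racking, cabling and imaging a physical box.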
Zac is an industry veteran with 17 years in the co-lo and hosting business; his previous company, Voxel, a successful NY-based cloud hosting outfit, was sold to Internap in 2011. In 2014, his team entered the highly competitive world of managed services with what they hoped would be an offering with a unique wrinkle. Their vision was to build a bespoke, bare metal, single tenant hosting and provisioning business.
Packet appears intent on selling any configuration their customers need, driven by the demand from developers to be “close to their metal”. Not only that, they also want to sell these configurations to a handful of highly technical customers and allow them to change any aspect of the configuration, firmware included.
Zac interestingly labeled his customer cohort as “millennial developers”: folks who have never set foot inside a physical data center, and likely never will, and who are finding that “one size” doesn’t necessarily fit their ever more demanding workloads. He understands who he is selling to, and he has heard loud and clear that his customers want to be able to “tinker”.
Packet clearly isn’t trying to fit everyone, and with 15,000 users on their books representing only about 1,000 companies, they are clearly significantly smaller than most. Their “signature tenants” currently include CoreOS, Docker and Hashicorp, plus others who are obviously deep into the technology development space. Interestingly, Platform9 also use their kit, and in turn resell OpenStack, which shows what a fascinating world of layers upon layers we live in. Packet are seed funded and supported by the likes of SoftBank Corp and Dell Technologies Capital, and are showing a healthy 5x revenue increase year on year. They clearly aren’t currently consuming any sizable amount of the hyperscale business pie, but they do appear to have a bit of a niche carved out.
Or do they?
Penguin have been pushing their POD line for a number of years. This is a true “HPC as a service”, or should we say HPCaaS? Linode have also been in the business of providing hosted compute for years. From a distance, there really isn’t much of an air gap between any of the offerings. Each is targeting developers who feel they need to tinker with equipment. The real question is, why?
Are some of our many and varied industry opinions incorrect? The continuum of services is indeed large; consider, say, a physical on-prem server and AWS Lambda as two extreme examples, each at a different end of the spectrum. The question of selecting compute fit for purpose is, as we have said, becoming harder, but the market will decide. It always does.
Providing bare metal is also fraught with potential dangers. Security, reliability and provisioning are made even harder in multi-tenant systems. Allowing firmware level access to the bare metal also invites bad actors misbehaving, either intentionally or unintentionally, on a shared fabric. Packet are mostly aware of this and do have methods in place to provide “server hygiene” by controlling firmware and image integrity between provisioning cycles. At 60,000 claimed provisioning cycles per month, this is already a non-trivial operation for them. The question of scaling this “server cleansing” operation to a huge customer base is still essentially unproven.
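Packet haven’t published the details of their “server hygiene” mechanism, but the core integrity-check idea is easy to sketch: hash the firmware or OS image on a host between tenants and compare against a known-good digest, re-flashing on any mismatch. The function names and the re-flash policy below are our own assumptions.

```python
# Minimal sketch of image-integrity checking between provisioning
# cycles. How a real provider reads firmware off a host is out of
# scope here; we just model the compare-against-known-good step.
import hashlib


def image_digest(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream a firmware/OS image from disk and return its SHA-256 hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()


def needs_reflash(path: str, known_good_digest: str) -> bool:
    """True if the image has drifted from the known-good digest
    (e.g. a previous tenant modified the firmware)."""
    return image_digest(path) != known_good_digest
```

Doing this 60,000 times a month is where the operational cost lives: the hashing itself is cheap, but pulling images off every controller on every box between tenants is not.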
Packet have also focused on one unique area where large customers potentially bring an interesting and challenging set of use cases. This is their support of customer-owned ASNs and BGP announcements (the minimum announcement is a /24), so they are clearly also targeting traditionally large scale deployments who need globally aware control of the entire network stack, without the friction of virtual private networks.
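For readers who haven’t run their own announcements before, originating a customer-owned /24 from your own ASN looks roughly like the following BIRD configuration sketch. The ASNs and prefixes here are documentation-range placeholders, and a real deployment would add proper import/export filters plus whatever session parameters the provider agrees to.

```
# bird.conf (sketch): originate our own /24 towards the provider's edge.
router id 192.0.2.1;

protocol static {
    route 203.0.113.0/24 reject;   # originate the prefix locally
}

protocol bgp upstream {
    local as 64500;                      # our (documentation-range) ASN
    neighbor 198.51.100.1 as 64501;      # provider edge router
    import none;                         # take no routes back, for brevity
    export filter {
        if net = 203.0.113.0/24 then accept;  # announce only our /24
        reject;
    };
}
```

The point Packet are making is that this control plane is yours: no overlay network or VPN sits between your announcement and the internet.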
Unlike some of the other huge players in this space, especially Microsoft, who provide RDMA instances with low latency interconnects, Packet don’t yet have a specific “HPC” story. When pushed, they mentioned that they do (like 80% of the server market) use Mellanox hybrid silicon in their boxes, and they were confident that “swapping the backplane” was all that was needed to allow IB connectivity. Anyone who has battled with large shared IB fabrics knows that subnet managers and the subtle nuances of IB verbs are where the real challenges lie. Couple that with “firmware” changes to the host adapters in flight and you have the potential for extreme levels of “excitement”.
The Packet team are also happy to run the “CapEx” part of your business, with a “bring your own pizza boxes” to the party approach. On the continuum of services, this is more like a halfway house for recovering on-prem data center builders accustomed to a build, buy, run model, versus those on the path to a full blown public cloud effort. For many with “cloud hangovers”, this may be a nice safe environment to stay in for a little while. However, they are up against folks like Brian Kucic of R Systems, who have been in this “HPC as a service” business for a good while longer. The R Systems motto of “we run systems, so you don’t have to” certainly now resonates with many in the HPC business.
Price drives adoption, and Packet have taken the sharpener aggressively to their pencils. $/GB and $/speed-and-feed across the board appear competitive, mostly driven by underlying commodity, non-proprietary technology choices. This is a good thing.
We also asked Packet about their “hybrid” strategy. Packet replied: “Hey, you can run ESXi or OpenStack right on our kit, and many of our customers do”. With OpenStack’s recent updates targeting bespoke hardware more than ever, this is a solid approach. Packet seemingly don’t want to build any walls (other than a few custom ACLs) between your infrastructure and theirs. The marketing pitch is that you can bring anything you like to the party. Packet’s party treats don’t currently feature any self-serve bespoke accelerated silicon, but they do have a nice selection of fresh new AMD Epyc. When we asked what was in store for 2018, Packet’s immediate response was: “We want someone to be able to have their carefully orchestrated AI box with custom bells and whistles in our environment and for it to not ‘suck’ as an experience”. We agree, that would be rather nice.
There are also some considerable parallels to be drawn between the likes of Packet, Linode, R Systems and Penguin and the up-and-coming academic efforts in this space. The Massachusetts Open Cloud, for example, is creating a self-sustaining at-scale public cloud based on the Open Cloud eXchange model. Obviously there is significant predicted demand and desire for this “Fiddling as a Service” concept. It resonates most strongly with CS folks, who care deeply about the register settings.
The two other “large elephants in the room” now also include AWS Bare Metal with direct hardware access, and a recent entry by Chinese internet giant Alibaba with their bare metal offering they call “Super Computing Cluster”. These are two huge investments in the “direct to hardware” business that are going to be exceedingly challenging to overcome, if only from a pure size and scale perspective. SCC from Alibaba, for example, can present you with a 96 core, 0.5TB DRAM, 8-way NVIDIA P100 or V100 box from a drop-down selector. Impressive. These are still virtualized to a degree, and not really “bare metal” per se. However, more and more providers are supplying “lightweight hypervisors” to isolate you from the metal somewhat, while still allowing for a highly performing solution. So the ultimate question at the end of all this is: why?
If you’re an over-stressed IT organization trying to deliver a time sensitive project, is it really the best idea to allow your developers to “tinker”? Maybe. Like all things, it depends. There are clearly more adaptable “plug and play” components readily available for most jobs today that certainly don’t involve the dreaded words “firmware update”. However, sometimes maybe you do need to “touch the silicon”, even if that silicon happens to be thousands of miles away. And if you care deeply about the physical control aspects, worry endlessly about “Virtualization Taxes”, or work in a secure or regulated vertical, you aren’t ever going to trust anyone else’s silicon, no matter how much they tell you it is secure.
This is clearly an interesting market to watch closely as it develops and gains speed, even if it seems like this story has been playing out for nearly a decade (which it almost has).