How Edge Is Different From Cloud – And Not

As the dominant supplier of commercial-grade open source infrastructure software, Red Hat sets the pace, and it is no surprise that IBM was willing to shell out an incredible $34 billion to acquire the company. Nor is it a surprise that Red Hat has its eyes on the edge, that amorphous and potentially substantial collection of distributed computing systems that everyone is figuring out how to chase.

To get a sense of what Red Hat thinks about the edge, we sat down with Joe Fernandes, vice president and general manager of core cloud platforms at what amounts to the future of IBM’s software business. Fernandes has been running Red Hat’s cloud business for nearly a decade, starting with CloudForms and moving through the evolution of OpenShift from a proprietary (but open source) platform to one that has become the main Kubernetes distribution among enterprises, meaning those who can’t or won’t roll their own open source software products.

Timothy Prickett Morgan: Is the edge different, or is it just a variation on the cloud theme?

Joe Fernandes: For Red Hat, the edge is really an extension of our core strategy, which is open hybrid cloud and which is around providing a consistent operating environment for applications that extends from the datacenter across multiple public clouds and now out at the edge. Linux is definitely the foundation of that, and Linux for us is of course Red Hat Enterprise Linux, which we see running in all footprints.

It is not just about trying to get into the core datacenter. It’s about trying to deal with the growing opportunity at the edge, and I think it’s not just important for Red Hat. Look at what Amazon is doing with Outposts, what Microsoft is doing with Azure Stack, and what Google is doing with Anthos, trying to put out cloud appliances for on-premises use. This hybrid cloud is as strategic for any of them as it is for any of us.

TPM: What is your projection for how much compute is on the edge and how much is in the datacenter? If you added up all of the clock cycles, how is it going to balance out?

Joe Fernandes: It is very workload driven. Generally, the advice we always give to clients is that you should always centralize what you can because at the core is where you have the most capacity in terms of infrastructure, the most capacity in terms of your SREs and your ops teams, and so forth. As you start distributing out to the edge, then you are in constrained environments and you are also not going to have humans out there managing things. So centralize what you can and distribute what you must, right.

That being said, specific workloads do need to be distributed. They need to be closer to the sources of data that they are operating upon. We see alignment of the trends around AI and machine learning with the trends around edge, and that’s where we see some of the biggest demand. That makes sense because people want to process data close to where it is being generated, and they can’t incur either the cost or the latency of sending that data back to their datacenter or even the public cloud regions.

And it is not specific to one vertical. It’s certainly important for service providers and 5G deployments, but it’s also important for auto companies doing autonomous vehicles, where those vehicles are essentially data generating machines on wheels that need to make quick decisions locally.

TPM: As far as I can tell, cars are just portable entertainment units. The only profit anybody gets from a car is all the extra entertainment stuff we add. The rest of the price covers commissions for dealers and the bill of materials for the parts in the car.

Joe Fernandes: At last year’s Red Hat Summit, we had both BMW and Volkswagen talking about their autonomous vehicle programs, and this year we received an award from Ford Motor Company, who also has major initiatives around autonomous driving as well as electrification. They’ll be speaking at this year’s Red Hat Summit. Another edge vertical is retail, allowing companies to make decisions in stores – to the extent that they still have physical locations.

TPM: I hadn’t given much thought to the Amazon store, which has something ridiculous like 1,700 cameras: you walk in, you grab stuff, you walk out, and it watches everything you do and takes your money electronically. This is looking pretty attractive this week is my guess. And I thought it was kind of bizarre two months ago, not being shopping as I know and enjoy it. And I know we’re not going to have a pandemic for the rest of our lives, but this could be the way we do things in the future. My guess is that people are going to be less inclined to do all kinds of things that seemed very normal only one or two months ago.

Joe Fernandes: Exactly. The other interesting vertical for edge is financial services, which has branch offices and remote offices. The oil and gas industry is interested in edge deployments close to where they are doing exploration and drilling, and the US Department of Defense is also thinking about remote battlefield and control of ships and planes and tanks.

The thing that those environments have in common is Linux. People aren’t running these edge platforms on Windows Server, and they are not using mainframes or Unix systems. It is obviously all Linux, and that puts a premium on performance and security, areas where Red Hat has obviously made its mark with RHEL. People are interested in moving forward on open systems anyway, and moving to containers and Kubernetes, and Linux is the foundation of all this.

TPM: Are containers a given for edge at this point? I think they are, except where bare metal is required.

Joe Fernandes: I don’t think that containers are a prerequisite. But certainly, just like the rest of the Linux deployments, it is going in the direction of containers. The reason is portability, having that same environment to package and deploy and manage at the edge as you do in the datacenter and in the cloud. Containers can run on bare metal, directly on Linux; you don’t need to have a virtualization layer in between.

TPM: Well, when I say bare metal, I mean not even a container. It’s Linux. That’s it.

Joe Fernandes: I think that the distinction between bare metal Linux and bare metal Linux containers comes down to how the workloads are packaged, whether as container images or as something like RPMs or Debian packages, and whether you need orchestrated containers. Right. And again, that’s very workload specific. We certainly see folks asking us about environments that are really small, where you might not do orchestration because you’re not running more than a single container or a small number of containers. In that case, it’s just Linux on metal.

TPM: OK, but you didn’t answer my question yet, and that is really my fault, not yours. So, to circle back: How much compute is at the edge and how much is on premises or in the cloud? Do you think it will be 50/50? What’s your guess?

Joe Fernandes: I don’t think it’ll be 50/50 for some time. Something in the range of 10 percent to 20 percent in the next couple of years is possible, and I would put it at the low end, 10 percent or less, because there is just a ton of applications running in core datacenters and a ton running out in the public cloud. People are still making that shift to cloud.

But again, it’ll be very industry specific. I think the adoption of edge compute using analytics and AI/ML is still just now taking off. For the auto makers doing autonomous vehicles, there is no other choice. It is a datacenter on wheels that needs to make life and death decisions on where to turn and when to brake, and in that market, the aggregate edge compute will become the majority at these companies pretty darn quick. You will see edge compute adoption go to 50 percent or more in some very specific areas, but if you took the entire population of IT, it’s probably still going to be in the single digits.

TPM: Does edge require a different implementation of Linux, say a cut-down version? Do you need a JEOS-type thing like we used to have in the early days of server virtualization? Do you need a special, easier, more distributed version of OpenShift for Kubernetes? What’s different?

Joe Fernandes: With Linux, the valuable thing is the hardware compatibility that RHEL provides. But we certainly see demand for Linux on different footprints. So, for example, RHEL on Arm devices or RHEL with GPU enablement.

When it comes to OpenShift, obviously Kubernetes is a distributed system, where the cluster is the computer, while Linux is focused on individual servers. What we are seeing is demand for smaller clusters, with OpenShift enabled on three-node clusters, which is sort of the minimum to have a highly available control plane because etcd, which is core to Kubernetes, requires three nodes for quorum. But in that situation, the control plane and the applications run on the same three machines, whereas in a larger setup you would have a three-node OpenShift control plane and then at least two separate machines running your actual containers so that you have HA for the apps. Obviously those application clusters can grow to tens or even hundreds of nodes. But at the edge, the premium is on size and power, so three nodes might be as much space as you’re going to get in the rack out at the edge.
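The three-node minimum Fernandes describes falls out of the majority-quorum rule used by Raft-based stores like etcd. A rough sketch of the arithmetic (illustrative only, not Red Hat or etcd code):

```python
# Why an HA Kubernetes control plane needs three etcd members:
# a Raft cluster commits writes only with a majority of floor(n/2) + 1 votes.

def quorum(members: int) -> int:
    """Votes needed for the cluster to commit writes."""
    return members // 2 + 1

def tolerated_failures(members: int) -> int:
    """Members that can fail while the cluster stays available."""
    return members - quorum(members)

for n in (1, 2, 3, 5):
    print(f"{n} members: quorum={quorum(n)}, tolerates {tolerated_failures(n)} failure(s)")
```

Note that a two-member cluster tolerates zero failures, which is why three is the smallest count that actually buys high availability.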

TPM: Either that or you might end up putting your control plane on a bunch of embedded microcontroller-type systems and compacting that part down.

Joe Fernandes: Actually, we see a kind of progression. First there are standard clusters made as small as you can get them, so maybe a control plane with one or two worker nodes. The next step we’ve moved into is the control plane and app nodes on the same three machines. And then you get into what I’d call distributed nodes, where you might have a control plane shared across five or ten or twenty edge locations that are running applications and talking back to that shared control plane. There you have to worry about connectivity to the control plane.

TPM: If you lose the control plane or your connectivity to it, all it should mean is that you can’t change the configuration of the compute cluster at the edge.

Joe Fernandes: Not exactly, because Kubernetes is a declarative system, so when a node drops off, it thinks it needs to start up those containers on another node or start a new node. In a case where you might have intermittent connectivity, we need to make it more tolerant so it doesn’t actually start that process unless the node fails to reconnect for some amount of time. And then the next step beyond that is clusters that have two nodes or a single node, and at that point the control plane, if it exists, is not HA, so you’re providing high availability some other way.
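The tolerance Fernandes describes amounts to a grace window before the control plane reacts to a silent node. A minimal sketch of that idea, with illustrative names and a made-up threshold rather than the actual Kubernetes API (which exposes similar behavior through node eviction timeouts):

```python
# Hypothetical reconciliation check for an edge cluster with flaky links:
# don't reschedule a disconnected node's workloads until it has been
# unreachable for longer than a grace period.

import time

DISCONNECT_GRACE_SECONDS = 300.0  # assumed tolerance window, tune per site

def should_evict(last_heartbeat: float, now: float,
                 grace: float = DISCONNECT_GRACE_SECONDS) -> bool:
    """Start moving workloads only once the node is silent past the grace window."""
    return (now - last_heartbeat) > grace

now = time.time()
print(should_evict(now - 60, now))   # brief blip: tolerate it
print(should_evict(now - 600, now))  # prolonged outage: reschedule elsewhere
```

The design point is simply that a declarative controller should distinguish a transient network blip from a real failure before it spends scarce edge capacity spinning up replacements.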

TPM: You can do virtual machines on a slightly beefier server and have software resilience, but you still have the potential for a hardware resilience issue.

Joe Fernandes: Maybe their resiliency is between edge locations.

TPM: What happens with OpenStack at this point, if anything? AT&T obviously has been widely deploying OpenStack at the edge, with tens of thousands of baby datacenters planned, all linked by and controlled by OpenStack. Is this going to be something like: use OpenShift where you can, use OpenStack where you must?

Joe Fernandes: We certainly see Red Hat OpenStack deployed at the edge. There’s an architecture that we put out called the distributed compute node architecture, which customers are adopting. It is relevant where customers have virtualized application workloads and also want an open solution, and so I think you will continue to see Red Hat OpenStack at the edge, and you will continue to see vSphere at the edge, too.

For example, in telco, OpenStack has a big footprint where companies have been creating virtualized network functions, or VNFs, for a number of years, and that has driven a lot of our business for OpenStack in telco because a lot of the companies we work with, like Verizon and others, wanted an open platform to deploy VNFs.

TPM: These telcos are not going to suddenly just decide, to hell with it, and containerize all this and get rid of VMs and server virtualization?

Joe Fernandes: It’s not going to be an either/or, but we now see a new wave of containerized network functions, or CNFs, particularly around 5G deployments. So the telcos are coming around to containers, but like every other vertical, they don’t all switch overnight. Just because Kubernetes has been out for five years now doesn’t mean the VMs are gone.

TPM: Is the overhead for containers a lot less than VMs? It must be, and that must be a huge motivator.

Joe Fernandes: Remember that the overhead of a VM includes the operating system that runs inside the guest. With a container, you are not virtualizing the hardware, you are virtualizing just the process. You can make a container as small as the process it runs, whereas a VM can only be made as small as its operating system.

TPM: We wouldn’t have done all this VM stuff if we could have just figured out containers to start with.

Joe Fernandes: You know, Red Hat Summit is coming up in a few weeks and we will be providing an update on KubeVirt, which allows Kubernetes to manage standard virtual machines along with containers. In the past year or more, we have been talking about it strictly in terms of what we are doing in the community to enable it. But it has not been something that we can sell and support. This is the year it’s ready for primetime, and that presents an opportunity to have a converged management plane. You could have Kubernetes directly on bare metal, managing both container workloads and VM workloads, and also manage the transition as more of those workloads move from VMs to containers. You won’t have to switch environments or have that additional layer and so forth.

TPM: And I fully expect people to do that. I’ve got nothing against OpenStack. Five years ago, when we started The Next Platform, it was not obvious whether the future control plane and management and compute metaphor would be Mesos or OpenStack or Kubernetes. And for a while there, Mesos looked like it was certainly better than OpenStack because of some of its mixed workload capabilities and the fact that it could run Kubernetes better than OpenStack could. But if you can get KubeVirt to work and it gives Kubernetes essentially the same functionality that you get from OpenStack in terms of managing VMs, then I think we’re done. It is emotional for me to just put a nail in the coffin like that.

Joe Fernandes: The question is: Is it going to put a nail not just in OpenStack, but in VMware, too?

TPM: VMware is an impressive legacy environment in the enterprise, and it generates more than $8 billion in sales for Dell. There is a lot of inertia with legacy environments – I mean, there are still System z mainframes out there doing a lot of useful work and providing value to IT organizations and their businesses. I have seen so many legacy environments in my life, but this may be the last big one I see this decade.

Joe Fernandes: You have covered vSphere 7.0 and “Project Pacific” and look at the contrast in strategy. We’re taking Kubernetes and trying to apply it to standard VM workloads as a cloud native environment. What VMware has done is take Kubernetes and wrap it back around the vSphere stack to keep people on the old environment that they’ve been on for the last decade.
