Rethinking The Edge In A Multicloud World

For the past decade or so, unless enterprises wanted to build more of their own datacenters, the cloud providers were the only game in town to tackle new workloads at massive scale. That capacity was available under a pay-as-you-go consumption model, which was great, until the bill arrived.

In many cases, cloud is more expensive than doing it yourself and carrying the capital investment on your balance sheet. The difference between what it costs a cloud to build the infrastructure and what customers pay to rent it is profit, but the difference between trying to build it yourself and using a credit card to borrow cloud capacity is called agility. Balancing these two is the main task of IT organizations today.

Dell Technologies, Hewlett Packard Enterprise, and other datacenter hardware OEMs saw the changes and over the past few years have methodically built out platforms that offer their portfolios as a service and established partnerships with the hyperscalers to increasingly make their software available in the public clouds.

With a boost from advanced automation capabilities, the as-a-service platforms that Dell offers with Apex and HPE with GreenLake are on par with public clouds in terms of capability, ease of use, scalability, and flexible consumption models, according to John Roese, global chief technology officer for products and operations at Dell. Organizations can now weigh which workloads can go to the public cloud and which can stay on premises, which matters when dealing with data sovereignty and regulatory issues.

It essentially ends the debate over public versus private infrastructure and changes the definition of the fast-growing edge, Roese told The Next Platform during an interview at last week’s Dell Technologies World show in Las Vegas. For a long time, the edge was seen as the third leg of the IT infrastructure stool, a separate domain from on-premises datacenters and public clouds.

It’s an outdated view, particularly given that the share of data processed outside of the datacenter, now around 10 percent, is expected to reach 75 percent by 2025, according to CEO Michael Dell. That acceleration will be helped by the growth of faster 5G networks.

Instead, organizations need to shift their thinking to match their rapidly decentralizing IT environments. To Roese, there are two models that define the edge going forward. One is the cloud-extension model, where the edge sits at the end of a data and application pipeline that begins in the public cloud. The problem is that most enterprises use more than one public cloud, which forces them to build a different edge for each of those environments.

A better model is what Dell calls edge-first, he said.

“We say, what if we flipped that around?” Roese said. “What if we treat the edge not as the end of the pipeline but the beginning of the pipeline. If all the data originates in my factory, I should land that on a platform under my control, in my private datacenter or in the edge itself. We can use that platform as the control point that would decide whether that data should flow to public cloud A or public cloud B or to handle things like reliability or to insert new technologies or to choose which software-defined edge services should live there.”
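As a rough illustration of that edge-first idea, the sketch below is a minimal Python example with hypothetical endpoint names and a made-up routing policy: data lands on the edge platform first, and a control point there decides whether each record flows to public cloud A, public cloud B, or stays on premises.

```python
import json
import urllib.request

# Hypothetical cloud ingestion endpoints; in practice these would be
# provider-specific services (object storage, event hubs, and so on).
CLOUD_ENDPOINTS = {
    "cloud_a": "https://ingest.cloud-a.example.com/records",
    "cloud_b": "https://ingest.cloud-b.example.com/records",
}

def land_locally(record: dict, path: str = "/var/edge/landing.jsonl") -> None:
    """Persist the record on the edge platform before anything leaves the site."""
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")

def route(record: dict) -> str | None:
    """Illustrative policy: sovereignty-tagged data never leaves the edge,
    telemetry goes to cloud A, everything else goes to cloud B."""
    if record.get("sovereign"):
        return None
    if record.get("kind") == "telemetry":
        return "cloud_a"
    return "cloud_b"

def forward(record: dict, target: str) -> None:
    """Push a copy of the record to the chosen public cloud."""
    req = urllib.request.Request(
        CLOUD_ENDPOINTS[target],
        data=json.dumps(record).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req, timeout=5)

def handle(record: dict) -> None:
    land_locally(record)       # the edge is the start of the pipeline
    target = route(record)     # the control point chooses the destination
    if target is not None:
        forward(record, target)
```

The point of the sketch is not the plumbing but where the decision lives: the policy runs on infrastructure the enterprise controls, and the public clouds are just possible destinations.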

The infrastructure will continue to become more distributed, with the rise of MEC (Multi-access Edge Computing, or the telco edge) and the growing popularity of colocation facilities from the likes of Equinix, Switch, and Digital Realty, where enterprises can run their private infrastructure without having to own the datacenter. The edge becomes part of the on-premises IT, he said, adding that “edge is just the future of on-prem. A lot of the on-prem infrastructure will get recast as edge. Some of those edges will be quite large. They will look like datacenters. They will live in colocation.”

“Future architecture will have two parts: infrastructure that you don’t control yourself and just use and infrastructure that you do control,” Roese said. “It’s about control. It turns out that when we move into the edge environment, we discover very quickly edge isn’t really about where the devices and the infrastructure physically live, it’s the type of domain they live in. … The [edge-first] model is harder to do, but long-term, where is the actual control point for the enterprise? Is it the public cloud? It can’t be because you have to use multiples of them, so that’s not an absolute control point. But if that control point is a point between where the retail environment hits the digital road and everything flows through that control point, that’s a fantastic place to introduce controls like security and to decide how to do load balancing, how to introduce resiliency.”

The public cloud, which at one point was the destination for many organizations looking for a more elastic environment and dynamic consumption model, is essentially an operating model in which the control plane can’t be housed once multiple clouds are in play. AWS can’t control what’s in Azure or Google Cloud. The control points for everything from operations to security instead have to be located on premises, whether that’s the datacenter or the edge.

“Neither of those clouds are going to decide to use the other one, but your edge environment could absolutely sit there and monitor and understand that your Azure instance is slowing down so maybe you should redirect your actions to the other POS backend that is sitting in an alternate cloud,” he said. “Things like edge and modern datacenters in the multicloud context are the logical points to introduce those control points. Anything that is going to allow you to be in control of the multicloud world requires you to pick the right place to do it. It’s not just about edge or datacenters, it’s not just about on-prem or off-prem. It’s about this: in the multicloud world, there inevitably will be things that you own and operate that are unique to you and they’re generally your retail, your manufacturing, your datacenters, your edges. The question is, what is their purpose? Are they there just as an off-ramp for a bunch of random cloud services and public clouds or are they actually the core of your digital strategy, the place where you can introduce control and you should treat those clouds as just consumable services? You should be able to make that choice and the control plane should be under your control, which means it’s more of a private control plane for the multicloud than a public control plane.”
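The kind of cross-cloud monitoring Roese describes can be pictured with a short sketch. The Python below is illustrative only, with hypothetical POS backend URLs and an arbitrary latency threshold: an edge-resident control point probes the primary cloud and redirects traffic to an alternate cloud if the primary is slow or unreachable.

```python
import time
import urllib.request

# Hypothetical POS backends deployed in two different public clouds.
BACKENDS = {
    "azure": "https://pos.azure-region.example.com/health",
    "gcp": "https://pos.gcp-region.example.com/health",
}
LATENCY_BUDGET_S = 0.5   # arbitrary threshold for "slowing down"

def probe(url: str) -> float | None:
    """Return response time in seconds, or None if the backend is unreachable."""
    start = time.monotonic()
    try:
        urllib.request.urlopen(url, timeout=2)
    except OSError:
        return None
    return time.monotonic() - start

def pick_backend(preferred: str = "azure") -> str:
    """Prefer the primary cloud, but fail over if it is slow or down."""
    latency = probe(BACKENDS[preferred])
    if latency is not None and latency <= LATENCY_BUDGET_S:
        return preferred
    # Redirect traffic to whichever alternate cloud answers fastest.
    candidates = {
        name: probe(url) for name, url in BACKENDS.items() if name != preferred
    }
    live = {name: lat for name, lat in candidates.items() if lat is not None}
    return min(live, key=live.get) if live else preferred
```

Neither cloud can make this decision for the other, which is exactly why the logic has to sit somewhere the enterprise controls.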

It will take time for IT professionals to begin to view their infrastructure in terms of where the control lies, said Roese, who came to Dell via the $60 billion-plus EMC deal. The industry over the past five years has been conditioned to believe that the infrastructure assets in the physical world should be subordinate to the public cloud. That doesn’t hold in a multicloud environment.

“The only thing that’s common when you’re talking about a multicloud environment are the IT systems in your store or in your retail environment or in your factory. Those are things that you are ultimately in control of, and those are the places where the data actually creates value, where transactions actually happen. The backend systems in the public cloud are fantastically useful for building applications and processing data, but they are actually subordinate to the real world,” he said. “This is a huge shift. You would never hear any public cloud provider talking about it but we’re talking about it.”

This is also a shift for Dell. The company has worked for the past few years to grow its Apex platform, including unveiling a full security stack of services this week. Apex, in combination with automation tools and colocation facilities, enables enterprises to do with their private infrastructure – whether on premises or in colocation facilities – what they can do in public clouds.

“All those tools just progressively make it much easier for customers to stop thinking about the reason to do something in a public vs. a private environment,” the CTO said. “It’s because all of the infrastructure operation challenges have all gone away. The only thing that matters is, which applications and services make the most sense to live in a public environment based on the kinds of tools that are available in that public environment. If you want to build the at-scale training infrastructure for AI models, honestly, put them in a public cloud. It’s ephemeral, it works and the tool chain’s there. If you want to do inferencing – the actual real-time processing of that AI model – doing that in a public cloud is silly if all the data is happening out in your factory. You now can make intelligent choices. If inferencing should be done at the edge, do it at the edge because there’s not a tradeoff. You don’t have to own a datacenter to have an edge. You don’t even have to own the infrastructure if you don’t want to. You can do that with Apex but still be in control. It shifts the discussion back to, let’s find the right infrastructure to run this application.”
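To make the training-versus-inferencing split concrete, here is a minimal Python sketch, assuming a hypothetical defect-detection model that was trained at scale in a public cloud and then exported to ONNX and copied to the edge site. The real-time scoring path never crosses the WAN; only the training job lives in the public cloud.

```python
import numpy as np
import onnxruntime as ort  # assumes the onnxruntime package is installed

# Hypothetical model file: trained in a public cloud, exported to ONNX,
# then distributed down to the factory's edge infrastructure.
session = ort.InferenceSession("/var/edge/models/defect_detector.onnx")
input_name = session.get_inputs()[0].name

def infer(sensor_frame: np.ndarray) -> np.ndarray:
    """Score a single frame of factory data locally, with no cloud round trip."""
    return session.run(None, {input_name: sensor_frame.astype(np.float32)})[0]
```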
