The edge has caught the imagination of IT vendors, who envision a place well outside the confines of the central datacenter but not quite in the cloud where the vast amounts of data generated by billions of devices, systems and sensors can be quickly captured, stored, processed and analyzed in as close to real time as possible. The internet of things (IoT) is certainly a driver of edge computing, as are new workloads that leverage such fast-emerging technologies as artificial intelligence (AI) and machine learning, augmented and virtual reality, autonomous functions and data analytics. These workloads bring particular demands at the edge: huge amounts of processing, high bandwidth and low latency.
It’s part of an accordion-like back-and-forth trend in the industry, with computing going from centrally located (think classic IBM mainframes) to distributed (client/server) and back to centralized (in the cloud). With the edge, computing is to a degree going back to being distributed, and as we at The Next Platform have been saying for months now, that is where many of the hardware and software vendors have turned their attention. OEMs from Hewlett Packard Enterprise and IBM to Dell EMC, Cisco and others are furiously working out what the architecture of the edge will look like, as software makers and component vendors reconfigure their offerings to run far outside the datacenter and cloud.
It’s still very much a work in progress, and it will take a while before the picture of the edge architecture gets less blurry. Keerti Melkote, founder of Aruba Networks and now president of Aruba within HPE, recently spoke with us about his vision of how the edge will evolve, with applications in large part dictating what the platforms will look like. Others have spoken of micro-datacenters and specially designed hardware, and of the need for such emerging technologies as Gen-Z for higher throughput and lower latency, the upcoming 5G networks for faster speeds and more bandwidth, and storage-class memory (SCM).
All that being said, according to John Roese, global CTO at Dell EMC and chairman of the Cloud Foundry Foundation, there are some basic truths about the edge, with one being that there are different kinds of edges that have different needs, and that the edge will essentially be a continuum of an enterprise’s computing environment that spans traditional datacenters and the cloud. Given that, it will need to leverage the same technologies. The edge isn’t a separate entity on its own but part of a whole, and needs to be treated that way, Roese told The Next Platform in a wide-ranging interview.
“One of the things we’re trying very hard not to do as an industry – at least Dell is trying not to do – is to over specialize the edge hardware,” Roese says, noting that OEMs will need to create hardened and specialized form factors in some cases, such as for radio access networks (RANs) or oil rigs. “One of the reasons to go to something like a modular datacenter or a micro-datacenter or a single rack in a constrained, protected environment is not just that it’s easy to deploy and that it’s hardened. It’s that the things inside of it are the same boxes that are sitting in your datacenter. The reason why that’s important is that the innovation cycle in the datacenter is going really fast, so if you have an edge technology that suddenly forks from the mainline and it is extremely proprietary, your ability to absorb the next memory architectures, the next accelerator architectures, the next processing architectures, to be able to replace these technologies, to benefit from the cost reduction curves that are coming, is entirely abandoned. Basically you’re out of that cycle.”
If a vendor is building out the edge correctly, they will need to “make sure that the innovation cycle and the adoption of new technology at the edge is of a similar speed and velocity and trajectory as what’s happening in the datacenter and the way to do that is if you’re trying to harden something at the edge, don’t focus on building a proprietary individual … unit. Create an environment that protects it from the real world and allows it to have the same componentry.”
That includes not only the same hardware components, from processors and storage to memory, but also software capabilities around management, such as the Integrated Dell Remote Access Controller (iDRAC), which offers automated alerts and remote management of systems to reduce the need for IT administrators to physically attend to the servers, an important capability in any edge environment.
Dell EMC has its own growing portfolio of edge-focused offerings, including the ruggedized PowerEdge XR2 for space-constrained and harsh environments, the PowerEdge R740 and R740xd servers for such workloads as mobile network-functions virtualization (NFV) infrastructure, and modular micro-datacenters.
There are keys to understanding the edge, Roese says. Whatever a person’s view of the edge, it has to have two characteristics: it’s where real-time processing (within roughly 10 milliseconds) can happen to augment the various connected entities, and it’s the interface to the rest of the cloud, so it can do pre-processing, data reduction, model creation and other tasks. The CTO also sees various iterations of the edge. RANs at cell sites will have distributed compute and 5G as part of their architecture, but the limited space and power in these environments will limit them to specialized purposes, he says, given that this is the edge closest to the mobile devices.
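Roese’s two characteristics – local real-time reaction and pre-processing/data reduction before handing data off to the cloud – can be sketched in a few lines. This is purely illustrative; the function names, thresholds and window size below are our own assumptions, not anything from Dell EMC’s stack:

```python
# Hypothetical sketch of the two edge roles Roese describes:
# (1) react to a reading in real time, (2) reduce raw telemetry
# into a smaller summary before forwarding it to the cloud.
from statistics import mean

def react_locally(reading, threshold=90.0):
    """Real-time path: act on a single sensor reading immediately."""
    return "throttle" if reading > threshold else "ok"

def reduce_for_cloud(readings, window=4):
    """Pre-processing path: collapse raw samples into windowed
    averages, shrinking what must be backhauled to the datacenter."""
    return [round(mean(readings[i:i + window]), 2)
            for i in range(0, len(readings), window)]

raw = [71.0, 73.5, 95.2, 74.1, 70.0, 69.8, 71.3, 70.5]
print([react_locally(r) for r in raw])  # one real-time decision per sample
print(reduce_for_cloud(raw))            # 8 raw samples become 2 summary values
```

The point of the split is the one Roese makes: the decision loop stays at the edge to meet the latency budget, while only the reduced data travels upstream.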
Cellular operators also are building out modular datacenters or remaking their central offices.
“They’re at the other end of the backhaul but they’re still real-time and those environments are much bigger edge environments,” Roese says. “They’re able to put kilowatts of power [or] maybe even more out there. They’re putting racks of systems out there, so you will have even in modern cellular environments some of the compute directly in the radio access network and that’s really, really real-time and mostly used for the very most specialized services.”
In another layer in the cellular edge, modular datacenters are being put in the physical plants for easy deployment, a lot of power and good scalability, which Roese says will be a huge opportunity at the edge.
There also is an industry cropping up around real estate companies buying up physical sites close to where the edge exists – like space in shopping malls or buildings in a city – where companies can deploy edge datacenters to bring compute closer to users and their devices. The last edge involves putting enterprise infrastructure in sites like factories where there are high levels of automation and demand for network scalability to drive intelligent systems.
“By placing edge compute in that factory, they can do the two things we just described: They can start delivering real-time analytics back and forth through that factory to tune its behavior as a local phenomenon and they can also use it to act as a sink for all the data the factories are creating, which in many cases is huge amounts of telemetry data coming through that environment,” Roese says. “Where it ends up is absolutely in a centralized environment to be processed and analyzed.”
That said, the real-time behavior is being pushed back out to the factory in an edge compute model and the actual technology in the factory looks similar to that in the cellular or real estate edges, he says.
“It is generally modern, flexible compute,” Roese says. “It usually has accelerators for AI and [machine learning] processing. It has a reasonable amount of persistence and storage but it’s not meant to be the permanent home of a lot of data. But it needs to be able to have enough storage to be able to source and sink and process data and for the most part it is agnostic to the applications running on it. It’s designed to basically run containerized [code] or VMs or other types of application models on that environment because we’re not building an edge to be specialized to a particular task. We’re building edge to be a new layer of the cloud topology, which allows us to say, ‘Hey, if there’s something that needs real-time or generates a lot of data and has to organize it and reduce it, then maybe we containerize that code or put it in a VM and push it out to that edge environment and run it there because it’s not necessarily the application itself that’s interesting, it’s where you’re running it.’ That gives you the real-time and edge behavior. There are lots of areas [but] basically they all actually look the same, they’re just different sizes.”
Because of that, the components in the systems need to be the same as those in the datacenters. They need to be able to run the same applications, interact with each other and adopt the same innovation schedule and speed.
“When you’re a customer thinking about this, you’re not thinking of the edge as a separate topology,” he says. “You’re thinking of it as a layer in your overall cloud distribution model. At the edge, it’s not the code that makes it special, it’s where you run it. If I have a repository of containerized code and I decide some of these are real-time functions and I want to push them to the edge and other ones aren’t real-time and I’m going to run them in the core, it might be the same program but I’m choosing to run it in the right place. If the infrastructure underneath is completely radically different – doesn’t have the same capabilities and can’t benefit from the same innovation cycle – then all my software decisions suddenly become constrained. ‘Oh, I can’t run at the edge because the edge can’t run a container, or I can’t use an accelerator.’ The reality is we don’t want that in the model. What we want is this distributed topology that goes from the public datacenters to the private datacenters to the edge topologies, all the way up to the device, and the ability to choose where we put our data and where we run our code anywhere across that topology, treating it as a consistent architecture that just has different behaviors in terms of localization, real-time behavior and functionality. But the atomic units of compute and acceleration, of memory and persistence, are consistently implemented across it.”