Applications Will Drive Infrastructure At The Edge

Given its checkered history with acquisitions – like the deal to buy the ill-fated Autonomy for its data analytics software – there was a bit of apprehension in 2015 when the pre-breakup Hewlett-Packard announced it was buying wireless networking vendor Aruba Networks for $3 billion. Aruba executives and customers – who call themselves Airheads – worried that Aruba would be swallowed up by the much larger company, now Hewlett Packard Enterprise and shorn of its PC, printer, and services businesses, and that Aruba’s products and corporate culture would disappear in the process.

That never occurred. Over the past four years, Aruba has remained a relatively independent company within Hewlett Packard Enterprise, with its culture intact and its product portfolio not only kept together but expanded. HPE has invested in the company, and Aruba has seen its revenues grow from about $1 billion at the time of the acquisition to $3 billion now. Aruba also has become HPE’s wired and wireless networking company, has expanded into growing areas like software-defined WAN, and has become the lead business unit in HPE’s efforts in the fast-growing edge computing space.

With the rise of the cloud, artificial intelligence (AI) in its several guises, and the impending 5G wireless networks, the edge in its myriad forms is the hot new thing in the tech industry, which is working on ways to bring more compute and storage capabilities closer to where data and applications are being generated and used. Spending on infrastructure out at the edge – which can mean everything from factory floors to oil rigs in the middle of the ocean to gas pipelines running through remote areas of the world – will likely dwarf that being spent on traditional datacenters.

Companies in all parts of the tech world are building out their portfolios of products aimed at the edge. HPE is no exception. Under president and chief executive officer Antonio Neri, HPE has committed to investing $4 billion through 2022 to build out what Neri calls the Intelligent Edge, and leading the charge will be Aruba. Keerti Melkote, Aruba founder and now president of Aruba within HPE, is also president of HPE’s Intelligent Edge business.

The Next Platform caught up with Melkote at Aruba’s recent Atmosphere 2019 user conference in Las Vegas to speak with him about how the edge will evolve and what Aruba’s role will be in it.

Jeffrey Burt: The first question I have for you is, when someone asks you what the edge is, what do you tell them?

Keerti Melkote: Basically, the simplest way to think about it is that it’s not the cloud. It is anything but the cloud. The way I think about it is [it’s] where the action is. It’s where people and things connect to the network and where real-time experiences become important. To be able to deliver on that real-time experience – whatever that experience is, whether it’s ordering burgers in a drive-thru, creating a cash-less store for a customer, creating a great fan experience in a stadium, or enabling students to work better – all of those pieces to me are moments that are monetizable at the edge. I think of the edge as where the action is, where real things happen, not just where datacenters sit and consume this data.

JB: What’s driving this and where is the enterprise opportunity? McDonald’s in March bought tech company Dynamic Yield to improve the customer experience at the point of sale (POS) in its digital drive-thru lanes.

KM: I think McDonald’s is a very interesting deal, bringing a technology company into a fast-food platform. The reason they’re doing it is to transform the consumption experience for their customers. In order to deliver on that vision, as technology becomes more deeply embedded into the whole thing, they have to deliver that technology at the edge, where their customers meet them. Increasingly it’s, how can I monetize those moments in a reliable way? If you rely on every bit going to some deep cloud somewhere and coming back, there are latency issues.

Keerti Melkote, Aruba founder and president of both Aruba and HPE’s Intelligent Edge business

There are potentially reliability issues – whether it’s going to predictably make it or not – and you can’t run your business in that way. You need that real-time interaction to be predictable, reliable, [and] low latency, so the experience is pleasing. So decision making is going to have to happen closer to where the action is, which is why I see the edge, from a compute standpoint, becoming richer and richer over time. The network will always be at the edge because you need to connect. If you’re streaming a movie from the cloud, you still need an edge connection. But it’s not about the connection. The connection is a given, which is why Aruba is at the tip of the spear for HPE – you need something to connect. Rather, the compute moving down to the edge is the key interesting element here.

JB: How do you see that evolving? When you talk to vendors now, there’s discussion about hyperconverged, for example. This is about trying to create a dense, high-performance platform that’s rugged. So how do you see the infrastructure rolling out?

KM: The ruggedization definitely is important from a hardware specs standpoint, and you will see the hardware evolve that way, whether it’s networking hardware, server hardware, or whatever. You’ll have more robust environmental hardening on those devices. The interesting part to me is, what does the application ecosystem at the edge look like? That’s what drives the infrastructure: the application and data ecosystem. To me, if you’re a traditional provider of applications and you want to just deliver that at the edge, a good example could be a point-of-sale server. When you take a credit card transaction, you don’t want to depend on the network because you want the credit card transaction to go through reliably, so you might have a POS server sitting at the edge as opposed to sitting deep in the cloud. This is pretty traditional. People do this already. If you go to a big-box retail store, they might have a point-of-sale system in the store. That to me is a traditional application, which we’ve always consumed, that is amenable to the traditional application stack and architecture – virtualization, hyperconverged, all the things we did for the datacenter can translate for that kind of application.
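
Melkote’s point-of-sale example is essentially a store-and-forward design: commit the transaction locally so the sale never waits on the WAN, then sync upstream when the link allows. Here is a minimal sketch of that pattern – the endpoint, schema, and class names are hypothetical illustrations, not anything Aruba or HPE ships:

```python
import json
import sqlite3
import time
import urllib.request

# Hypothetical cloud endpoint; stands in for whatever backend a retailer runs.
CLOUD_URL = "https://pos.example.com/api/transactions"

class EdgePOS:
    """Store-and-forward POS: commit locally first, sync upstream when possible."""

    def __init__(self, db_path: str = "pos.db"):
        self.db = sqlite3.connect(db_path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS txns "
            "(id INTEGER PRIMARY KEY, body TEXT, synced INTEGER DEFAULT 0)"
        )

    def record(self, txn: dict) -> None:
        # The sale is durable as soon as this local commit returns;
        # no WAN round trip sits on the critical path.
        self.db.execute("INSERT INTO txns (body) VALUES (?)", (json.dumps(txn),))
        self.db.commit()

    def sync(self) -> None:
        # Best-effort upload; anything unsynced is retried on the next pass.
        rows = self.db.execute("SELECT id, body FROM txns WHERE synced = 0").fetchall()
        for rowid, body in rows:
            req = urllib.request.Request(
                CLOUD_URL, data=body.encode(), method="POST",
                headers={"Content-Type": "application/json"},
            )
            try:
                urllib.request.urlopen(req, timeout=2)
            except OSError:
                break  # link is down or slow; stop and retry on the next pass
            self.db.execute("UPDATE txns SET synced = 1 WHERE id = ?", (rowid,))
            self.db.commit()

pos = EdgePOS()
pos.record({"sku": "burger-combo", "amount_cents": 899, "ts": time.time()})
pos.sync()  # safe to call on a timer; a failed upload just defers the sync
```

The design choice mirrors his reliability argument: the local commit is the transaction of record, and the cloud sees the data eventually rather than synchronously.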

But then you take a different application like facial recognition, and you put video cameras up there because motion detection, face recognition, these things have become important. That is a non-traditional application. It’s what I consider an IoT application, where a traditional compute stack won’t work. You need GPUs, you need neural networks, you need algorithms, [and] you need to be able to do all this at scale. That ecosystem is not going to be hyperconverged or VMs or any of those traditional application stacks. It’s going to be deep neural networks sitting potentially inside the camera. But if it is not inside the camera, it might be one step removed from the camera – a dense set of GPUs that might be running these algorithms. For that, I would say we are in the prototyping stage in the industry, where there are many different options coming out. My own bet is that there is going to be a certain amount of onboard compute in the appliance itself, like the camera or the motion detector. I think the semiconductor industry is advancing the state of the art to be able to do some compute right at that point.

That would require an onboard compute ecosystem – an embedded software stack and neural network algorithms living on the device – which can then potentially do inference at the edge, in the box. But the deep learning is going to happen inside the network and, again, there is going to be a relationship between that embedded device, a server, and the network. That application stack, I would say, is in the development phase, with AI machine learning at the top of it in the cloud, local rules matching and processing happening closer to the edge, and inference happening in the device itself.

There’s a lot possible. Take a disaggregated edge, such as a factory assembly line where you have conveyor belts and cameras and all kinds of things moving around. In that case, you might have an embedded server that’s automating the entire plant. The decision making is happening right there. It used to be done by PLCs [programmable logic controllers] in the past, and that could be replaced by a server with an application. I think the application landscape is going to change quite a bit for the edge, driven by AI and big data.

JB: It sounds like it can be varied. Some applications are going to do OK with just onboard capabilities, others are going to need a local server, and some will run in the cloud. It’s going to be a mix, a very diverse sort of environment.

KM: Think about everything being networked for a second. There is the device itself – the camera or the thermostat or whatever – with its own compute. Think of everything that’s connecting as a small computer with some compute ability and an IP address. Close to it on the LAN, there’s going to be another server that it can actually communicate with, especially when you have a collection of things – let’s say three video cameras. They’re sensing [a subject] and they need to be able to compare notes. That happens one level above. And then the deep learning could happen in the cloud, where I ship the datasets out to the cloud. That’s the model that I see emerging. It’s like a three-stage architecture: onboard, local, cloud – or onboard, edge, and cloud.
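
That onboard/local/cloud split can be sketched in a few lines. Everything below is illustrative – the function names, thresholds, and stubbed model are assumptions, not any vendor’s API: the camera runs a lightweight onboard model, an edge server fuses detections from several cameras on the LAN, and only aggregated results ship to the cloud for deep-learning training.

```python
from dataclasses import dataclass
from statistics import mean
from typing import List

@dataclass
class Detection:
    camera_id: str
    label: str         # e.g. "person"
    confidence: float  # onboard model's score, 0.0 to 1.0

def onboard_infer(camera_id: str, frame: bytes) -> Detection:
    # Stage 1 (onboard): a small quantized model runs inside the camera itself.
    # Stubbed here; in practice this would be an embedded inference runtime.
    return Detection(camera_id, "person", 0.82)

def local_fuse(detections: List[Detection]) -> bool:
    # Stage 2 (local): the cameras "compare notes" one level above, on the LAN.
    # Accept only if multiple cameras agree with decent average confidence.
    agreeing = [d for d in detections if d.label == "person"]
    return len(agreeing) >= 2 and mean(d.confidence for d in agreeing) > 0.7

def ship_to_cloud(detections: List[Detection]) -> None:
    # Stage 3 (cloud): batch the data out for retraining the deep model offline.
    print(f"queued {len(detections)} samples for cloud training")

# Three cameras observe the same scene; fusion happens one level above them.
frames = {"cam-1": b"...", "cam-2": b"...", "cam-3": b"..."}
detections = [onboard_infer(cid, frame) for cid, frame in frames.items()]
if local_fuse(detections):
    ship_to_cloud(detections)  # only aggregated, useful data leaves the site
```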

JB: So much of what you need to do this is still in the development phase, right?

KM: It is in early stages. From a technology industry perspective, I think we are still very much in the experimentation stage, an ROI-proving kind of a stage. We are using the tools that exist today to solve these problems, whether that’s CPUs, GPUs, ASICs, FPGAs, whatever is available. But I wouldn’t say there is a consistent stack that has emerged that says this is how it’s going to be, which is exciting. Honestly, in technology, it’s a great time to be in the business, innovating and aspiring to build that kind of a stack.

JB: As these edge environments become more application-driven, things are going to have to change. Some technologies we have now we can use; others will have to evolve.

KM: Yes, and I think the notions of privacy, latency, bandwidth – all of these constraints on real-time interactions – are driving the need for more and more edge processing. And that is going to drive a very rich software ecosystem at the edge, which is, I think, the most interesting part of this.

JB: Given the development that is going on, doesn’t it seem that we’re on the right track?

KM: I think the investments are happening at a very feverish pace at this point. As always in the early phases of a market, you see a lot of hype and a lot of noise, so we are in that phase right now. The next three to five years is where I expect some of the dust to settle in terms of which frameworks, which architectures, and which vendors are going to eventually win out, and what will become standards-based.

JB: Are there particular technologies that you think are going to be critical at the edge? Gen-Z is often mentioned. So is storage-class memory. What do you think will be important?

KM: I think those technologies are more datacenter-oriented. For the edge, I think it has more to do with AI, deep neural networks, computer vision, and IoT. That is the sector that I think is going to be more important than raw interfaces.

JB: What about the idea of micro-datacenters populating the edge?

KM: I was where you are – micro-datacenters – two years ago. The simplistic viewpoint is, ‘Let me just take my application and run it at the edge,’ which means I can take an Intel CPU and a micro-datacenter server, and I think there’s a market for this. The point-of-sale server is a good prototypical example of that market. We’ve grown there and we participate meaningfully in that particular space, but the real growth opportunity I see is the deep processing of analog data. That is the big opportunity.

JB: Given the money that HPE is investing in edge computing and Aruba’s position at the forefront of HPE’s edge efforts, how do you see Aruba’s role at the edge evolving?

KM: We are today first and foremost a connectivity and security platform for our customers, so we still have to play a role in that particular space, and the primary competitors are going to be Cisco and Huawei. I think that is job one for Aruba. But the exciting, differentiated trend for us – why we are a part of HPE – is the compute. HPE is the dominant technology-centric compute player in the market, and we know high-performance compute really well, obviously, with SGI coming into the fold and our own Apollo platforms and so on. We understand all the datacenter technologies you talked about, whether that’s Gen-Z or storage-class memory, and there are multiple other things going on – photonics, etc. So I think we have a lot of core intellectual property that we have built up in the compute space that we can leverage as compute comes out to the edge. How do you take the network world, the security world, and this new world emerging from embedded compute and bring them together? One of the reasons why Aruba is the tip of the spear is because embedded computing is going to be very big. If you take a networking box – whether it’s a switch, a router, or an access point – all of these are embedded computers, so we have a ton of embedded software experience that we can bring to bear on these new types of devices. And it’s not just embedded software, it’s the manageability from the cloud and bringing the whole thing together in an easy-to-consume kind of manner. It’s not just the box itself.

JB: The manageability is going to be key when you consider a highly distributed environment with so many different and varied sources. It’s different, it’s unpredictable.

KM: Manageability is the key. I think otherwise it will become a pet project and never scale out. It will be a pilot rather than a scale-out kind of solution.

JB: What else will be important for the edge?

KM: The biggest hurdle I see is security, because we are now going to be transacting business-critical, revenue-critical experiences at the edge, and ensuring it is intrinsically, foundationally secure is going to be important from the moment bits leave a device and show up on the network, through all the processing that happens, and so on. You can’t bolt on security later. This is one of those things that has to be built into the platform. The idea of both proactive and detective security [will be important]. Proactive security to me is authentication, encryption, providing a silicon root of trust. These are all things that I can implement to build security into the platform. But at the same time, we are running software programs, and if there’s a bad software program built by a bad actor, it could do the wrong things. So something that is able to detect the anomalies – that a device is not behaving the way it should behave – is going to be important. The proactive stance and the detective stance are both going to be important.
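
To make the ‘detective’ stance concrete, here is a toy sketch of baseline-and-deviation monitoring – the telemetry, window size, and threshold are invented for illustration, not an Aruba feature. The agent learns a device’s normal traffic rate and flags samples that deviate sharply, without letting suspect samples poison the baseline:

```python
from collections import deque
from statistics import mean, stdev

class BehaviorMonitor:
    """Flags a device whose telemetry drifts far from its learned baseline."""

    def __init__(self, window: int = 100, z_threshold: float = 4.0):
        self.history = deque(maxlen=window)  # rolling baseline of normal behavior
        self.z_threshold = z_threshold

    def observe(self, bytes_per_sec: float) -> bool:
        """Return True if this sample looks anomalous against the baseline."""
        if len(self.history) >= 10:  # need some baseline before judging
            mu, sigma = mean(self.history), stdev(self.history)
            if sigma > 0 and abs(bytes_per_sec - mu) / sigma > self.z_threshold:
                return True  # e.g. a compromised camera suddenly exfiltrating data
        # Only normal-looking samples update the baseline.
        self.history.append(bytes_per_sec)
        return False

monitor = BehaviorMonitor()
for sample in [120, 118, 125, 119, 121, 117, 122, 120, 118, 123, 9500]:
    if monitor.observe(sample):
        print(f"anomaly: {sample} B/s deviates from baseline; quarantine the device")
```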
