Only The Agile And Adaptable Survive

Red Hat is not just the top Linux software vendor and the driving force behind IBM’s hybrid cloud ambitions. It is by far the most agile part of Big Blue.

That means a lot of different things for Red Hat, including keeping a close eye on the various compute engines and accelerators – from CPUs (particularly those based on the Arm architecture) to GPUs to FPGAs to DPUs – as well as the continued disaggregation of infrastructure inside the datacenter and the growing importance of the edge in enterprise IT.

Kris Murphy, senior principal software engineer at Red Hat, recently spoke with The Next Platform about what she is seeing and the company’s plans going forward. That involves addressing many of the changes now while keeping an eye on others. The most recent high-profile example was the introduction at the KubeCon show in late October of Red Hat Device Edge, an enterprise-grade, vendor-supported distribution of MicroShift, an open-source effort led by Red Hat to build a lightweight offering that combines Kubernetes, OpenShift, and Red Hat Enterprise Linux (RHEL).

The goal is to deliver a Linux-based platform for enterprise edge deployments.

In addition, Red Hat this week said its Red Hat Insights Advisor is available in RHEL at the edge, giving organizations the same analytics capabilities at the edge that they now have in the datacenter to ensure the health and operations of their infrastructure.

“The edge is a big focus area for Red Hat because there’s so much opportunity and we also really believe that we can help our customers in these spaces,” Murphy says. “There are massive amounts of data. It is not efficient if you have 500 camera streams on a factory floor and in your offices to send all that data to a cloud to be analyzed and only really care about a little teeny slice of it. Ideally, you can process that data closer to where you are collecting it so that you only have to save the data that’s really interesting and that you can get business value out of it.”

The amount of data being generated at the edge is a key driver of edge computing – being able to collect and analyze the data closer to where it’s created is much more efficient than shipping it all to the cloud – but it’s not the only one, Murphy says. More AI work is being done out there and GPUs and other hardware have gotten small enough to support it. 5G also will bring connectivity to places where there has been none.
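That filtering pattern is easy to picture in code. Below is a minimal sketch, in Python, in the spirit of Murphy’s camera-stream example: score every frame locally and forward only the interesting slice upstream. The capture_frames(), score_frame(), and send_upstream() helpers are hypothetical placeholders, not part of any Red Hat product.

```python
# Hypothetical sketch of edge-side filtering: analyze frames where they are
# captured and ship only the interesting ones to the cloud or datacenter.

INTEREST_THRESHOLD = 0.8  # assumed tuning knob for what counts as "interesting"

def capture_frames(stream_id):
    """Hypothetical generator yielding frames from one camera stream."""
    raise NotImplementedError

def score_frame(frame):
    """Hypothetical local model (e.g. a small on-device detector) returning 0..1."""
    raise NotImplementedError

def send_upstream(stream_id, frame, score):
    """Hypothetical call that ships a frame to the central datacenter or cloud."""
    raise NotImplementedError

def filter_stream(stream_id):
    kept = dropped = 0
    for frame in capture_frames(stream_id):
        score = score_frame(frame)                  # inference runs on the edge device
        if score >= INTEREST_THRESHOLD:
            send_upstream(stream_id, frame, score)  # only this slice crosses the WAN
            kept += 1
        else:
            dropped += 1                            # the bulk of the data never leaves the site
    return kept, dropped
```

With 500 streams, only the frames that clear the threshold are stored or analyzed centrally, which is the efficiency gain Murphy is describing.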

Traditionally, Red Hat has addressed the server and gateway segment of the edge. Now the company is pushing further out to where the devices and sensors are, which overlaps with its work around the low-power Arm architecture and, just as important, Arm’s SystemReady compliance standards. Red Hat has participated in the SR tier of the SystemReady program, which is aimed at servers and workstations; the focus is expanding to include the ES tier (for embedded and SmartNIC SoCs) and the IR tier (for Linux and BSD on embedded Arm chips).

“This is more of the embedded and IoT space,” she says. “There are some different challenges and things that we need to enable and … different features and things we need in our orchestration-slash-OpenShift software to really be able to address this space. But this is where the edge market is going. It’s not just a server in a closet. It’s really an edge device that has AI built into it, that can process things right on the edge.”

And it’s all tied into enterprises’ IT operations in the cloud and on-premises, so Red Hat needs to make it possible for organizations to manage it all from a central point, including pushing updates to Kubernetes applications at the edge, where there are few or no IT specialists.
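One way to picture that central control point is a script that pushes an application update out to each edge cluster over the Kubernetes API. The sketch below uses the official Kubernetes Python client; the kubeconfig context names, namespace, deployment, and image are illustrative assumptions, not a Red Hat tool.

```python
# Hypothetical sketch: roll a new application image out to several edge clusters
# from a central management host, with no operator present at the edge sites.
from kubernetes import client, config

EDGE_CONTEXTS = ["factory-a", "factory-b"]           # assumed kubeconfig contexts, one per site
NEW_IMAGE = "registry.example.com/vision-app:1.4.2"  # hypothetical application image

def roll_out(context_name: str) -> None:
    config.load_kube_config(context=context_name)    # credentials held centrally
    apps = client.AppsV1Api()
    patch = {"spec": {"template": {"spec": {"containers": [
        {"name": "vision-app", "image": NEW_IMAGE}   # container name assumed
    ]}}}}
    apps.patch_namespaced_deployment(name="vision-app", namespace="edge", body=patch)
    print(f"{context_name}: updated vision-app to {NEW_IMAGE}")

if __name__ == "__main__":
    for ctx in EDGE_CONTEXTS:
        roll_out(ctx)
```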

The edge also touches the other emerging technologies Red Hat is addressing, including Arm and other architectures. Red Hat finally embraced Arm fully in 2017, and the architecture is now the vendor’s focus in terms of alternatives to x86. Arm addresses the sustainability and power efficiency demands of organizations, some of which could save millions of dollars by consuming even a little less energy, and it is being adopted by system makers and by hyperscalers like AWS with its Graviton chips.

Red Hat in May announced the general availability of OpenShift on Arm systems and is running a tech preview of OpenShift support for mixed clusters, with some nodes running on x86 chips and others on Arm, a capability that Kubernetes itself already offers.
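Mixed clusters work because Kubernetes already labels every node with its CPU architecture under kubernetes.io/arch, and a workload that must run on one architecture can say so with a nodeSelector. A minimal sketch with the Kubernetes Python client (the image name is an assumption):

```python
# Inspect a mixed cluster and pin a pod to the Arm nodes using the standard
# kubernetes.io/arch label. Assumes a reachable kubeconfig.
from collections import Counter
from kubernetes import client, config

config.load_kube_config()
nodes = client.CoreV1Api().list_node()

arch_counts = Counter(
    node.metadata.labels.get("kubernetes.io/arch", "unknown") for node in nodes.items
)
print(dict(arch_counts))   # e.g. {'amd64': 3, 'arm64': 2} in a mixed x86/Arm cluster

# A pod that should land only on the Arm nodes targets the same label:
arm_only_spec = client.V1PodSpec(
    node_selector={"kubernetes.io/arch": "arm64"},
    containers=[client.V1Container(name="app", image="registry.example.com/app:latest")],
)
```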

Red Hat also is watching the development of RISC-V, which is playing a key role in research the company is doing around CPU cores in FPGAs and which rivals like Canonical already support with the Ubuntu Linux OS.

“Red Hat is keeping a close eye on as far as what we’re doing with RISC-V,” Murphy says. “Fedora is being kept up to date with all the upstream things and we’re trying to figure out when the right timing is that we will actually think about taking our OpenShift to RISC-V. But right now, with so much to do with Arm, that’s where we’re focusing for alternative architectures.”

Then there is the rise of specialty silicon, from GPUs, data processing units (DPUs), and Intel’s infrastructure processing units (IPUs) to other accelerators for network offloading, video encoding and decoding, and cryptocurrency. The software needs to catch up to all of this, she says.

Red Hat is a founding member of The Linux Foundation’s Open Programmable Infrastructure (OPI) Project, which launched in June to create standards and a software ecosystem for emerging architectures that leverage not only GPUs but also DPUs and IPUs. Red Hat has certified RHEL to run on Nvidia’s BlueField DPUs and has early tech previews of work it is doing with OpenShift and BlueField-2, including an OVN/OVS offload use case. The company also will work with other vendors in the space.

The evolution of infrastructure hardware also goes beyond specialty silicon to include such emerging technologies as the CXL protocol, which will play a major role in composable infrastructure, where systems become pools of hardware resources that are pulled together to run specific software and then returned to the pool once the work is done.

“People are learning that to do this well, the software is a little bit more complicated than people were originally thinking,” Murphy says, noting proprietary software stacks offered by the likes of Liqid and GigaIO. “That just doesn’t scale well if you have one proprietary stack. Red Hat sees a view where we think Kubernetes – or an extension of Kubernetes, but probably Kubernetes itself – is the right place to really be doing this composing. [With] containers, you really have the ability to grow and shrink dynamically, much more different than with a VM or other virtualization technologies. It’s the right fit long-term. That’s where we’re focusing and looking at.”
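The grow-and-shrink point can be seen in miniature with the Kubernetes API itself: resizing a containerized workload is a single patch against a Deployment’s scale subresource, and the scheduler places the result wherever pooled capacity exists. The deployment name and namespace below are illustrative assumptions, and this is only a sketch of the behavior Murphy points to, not Red Hat’s composability implementation.

```python
# Hypothetical sketch: dynamically grow and shrink a containerized workload.
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

def resize(replicas: int) -> None:
    # Changing the replica count claims more (or fewer) resources from the pool;
    # the scheduler decides where the new pods actually run.
    apps.patch_namespaced_deployment_scale(
        name="analytics-worker", namespace="compose-demo",
        body={"spec": {"replicas": replicas}},
    )

resize(8)   # grow when work arrives
resize(1)   # shrink back once the job drains
```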

The industry is converging around technologies such as CXL, which offers a common way to attach resources like accelerators and memory and to create infrastructure pools. CXL is not yet mainstream, but the CXL 3.0 specification was released in August, adding new features.

“There’s still a little bit of a waiting game to get there,” she says. “It’s like, how much can we develop now with current technology vs. how much do we really have to wait until those new kinds of hardware exist? But we’re definitely looking at it and trying things out with what’s out there now. It’s allowing us to think about where things run in a different way and decompose everything. … We’re exploring some of these software concepts with what we have now.”

In the end, enterprises don’t care what the underlying technology is. They want to be able to run their software fast, securely, and efficiently to get as much business value out of it as possible, Murphy says. The other stuff is the worry of the vendors. And that’s what Red Hat and others are trying to address.

What that means down the road is difficult to gauge. Red Hat has a handle on the next six months to two years, but after that the tea leaves are difficult to read. How will companies like Meta and Nvidia evolve their visions of the metaverse or omniverse? How will the adoption of augmented and virtual reality (VR) play out?

“I know I don’t want to wear a VR headset all day while I work,” she says. “That’s not realistic to me with where things are at, but that’s a place businesses are going to go. But could technology evolve in a way that makes it more usable for everyday life? Absolutely. Are there use cases that will evolve, things like digital twins, being able to show factory floors and walk through them in virtual reality? That’s obvious. That will probably happen. But what it ends up looking like in the end is a little bit [unclear]. Are we all going to interact with each other in a virtual world in five years? Those are some of the unknowns.”
