Building Software Bridges To Ensure Workload Portability

In modern IT – where the rapidly evolving environment stretches from the datacenter to the cloud and edge, with a range of hyperscale cloud providers and myriad clusters to choose from – portability is increasingly important when creating and running workloads.

But not just any kind of portability. Applications also need to have their performance optimized as they move around.

The IT industry has been talking about application portability for as long as there have been two different types of computers. There are some modern twists on the idea, according to Ryan Cook, principal emerging technologies developer at Red Hat. Enterprises still love the clusters and container platforms that run their workloads, and they protect them at all costs, Cook tells The Next Platform. However, it doesn’t need to be that way.

“With proper tooling, that really doesn’t have to be the thing anymore,” he says. “Whether you’re doing Podman at the edge or if you’re even on Kubernetes, there’s some exciting stuff that’s in the pipeline that changes that. And it changes it in a good way because Red Hat’s been telling the hybrid cloud story for a couple of years now. But there’s the on-prem story and then there’s almost the idea of vendor shopping. You can move your workload successfully between these different platforms and utilize their best capabilities. For example, if you need GPU processing capabilities and the best place is Google, you should be able to move your workload there.”

Enterprises need to be able to easily move their workloads wherever needed, regardless of the architecture, platform, or technology. Red Hat has a plan for that – two, actually – and in an interview with The Next Platform, and a couple of days later at the IBM-owned company’s Red Hat Next! event in Texas, Cook talked about both.

The first is the fast-developing KCP Project, which launched last year (it’s on version 0.8 right now) and offers a multi-tenant Kubernetes control plane that enables organizations to run workloads on many clusters. According to a description published earlier this year, KCP “provides a generic Custom Resource Definition (CRD) API server that is divided into multiple logical clusters that enable multitenancy of cluster-scoped resources such as CRDs and Namespaces.”
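To ground that description: a CRD is itself a cluster-scoped resource in stock Kubernetes, which is why multitenancy is hard without something like KCP’s logical clusters. The manifest below is a standard `apiextensions.k8s.io/v1` definition (the `Widget` resource is hypothetical, used only for illustration) – in a single shared cluster, every tenant would see it.

```yaml
# A standard CustomResourceDefinition. In one Kubernetes cluster this
# object is cluster-scoped, so every tenant sees it; KCP's logical
# clusters let each tenant hold its own copy of CRDs like this in isolation.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com        # hypothetical resource for illustration
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: widgets
    singular: widget
    kind: Widget
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                size:
                  type: integer
```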

Each logical cluster is isolated from the other clusters and can host different teams and workloads. Cook says there are essentially two sides to the KCP Project. On one, it gives users a Kubernetes-like feel when accessing Red Hat-based resources, letting them look at Red Hat Cloud items, launch a cloud instance, easily interact with APIs, and get status updates on their ROSA (Red Hat OpenShift Service on AWS) cluster.

Portability is the focus of the second part, the one spanning multiple clusters.

“Users will have a kubeconfig file that will contain multiple clusters and they have no idea that it’s multiple clusters,” he says. “They just launch their workloads into this Kubernetes config file, and their workload is going to be on a cluster or it’s going to be spread amongst multiple clusters. This one is like a moonshot, but a win for the whole project in general is the fact that [Red Hat is] looking to integrate storage and networking in that.”
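For reference, this is roughly what a conventional multi-cluster kubeconfig looks like (standard `v1 Config` format; the server endpoints here are made up). Ordinarily the user has to pick a context, and with it a specific cluster – the KCP approach Cook describes hides that fan-out behind a single endpoint.

```yaml
# A standard multi-cluster kubeconfig (kubectl's v1 Config format).
# Normally the user selects a context and thus a specific cluster;
# the KCP model described above hides this fan-out from the user.
apiVersion: v1
kind: Config
clusters:
  - name: on-prem                  # hypothetical endpoints for illustration
    cluster:
      server: https://api.onprem.example.com:6443
  - name: rosa-us-east-1
    cluster:
      server: https://api.rosa-example.openshiftapps.com:6443
contexts:
  - name: on-prem
    context:
      cluster: on-prem
      user: developer
current-context: on-prem
users:
  - name: developer
    user:
      token: REDACTED
```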

For IT administrators, that means they can take an application running in a datacenter on Ceph RBD or OpenShift Data Foundation (ODF) and move it – persistent data included – to AWS, Microsoft Azure, Google Cloud, or another cloud environment, or to a ROSA cluster. KCP is meant to smooth that path. “It’s going to remove the difficulties out of it. It’s going to take the networking and storage [along] for you,” he says.
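The storage half of that problem surfaces in the PersistentVolumeClaim. Here is a minimal sketch, assuming an ODF install with its usual Ceph RBD class name: the claim itself is portable Kubernetes YAML, but the `storageClassName` it points at exists only on that one platform.

```yaml
# A standard PersistentVolumeClaim. The storageClassName is the portability
# sticking point: this ODF/Ceph RBD class exists only where ODF is installed,
# and the same claim on AWS would have to name an EBS-backed class instead.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
  storageClassName: ocs-storagecluster-ceph-rbd   # ODF's RBD class on-prem
```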

Bringing along the networking and storage is a key differentiator from Red Hat’s Advanced Cluster Management (RHACM, pronounced rack ‘em) for Kubernetes, which enables users to move workloads between clusters. KCP removes the issue of the storage backend from the calculation when deciding to move workloads between cloud architectures or between the cloud and on-premises environments.

“You may want to go from AWS to your on-prem ODF or you might want to do an all-of-the-cloud story,” Cook says. “You want to have a GCP and Azure and intermingling those back-end storage classes gets to be difficult. If we can agnostically solve that, it really helps to remove the complexity and scariness from it. It’s really about that portability because if X cloud provider could allow us a bursting-type performance and we can move our workload to that cloud to get us a better database performance for a couple of days, it really could win us a lot of functionality and you can have the best place to run your stuff, as long as we nail the storage capability of movement regardless of the back end, because the back end is the difficult thing today.”
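To make the storage-class point concrete, here is a sketch of “the same” block storage defined on two clouds (the class names are hypothetical; the provisioners are the actual CSI driver names). A workload pinned to one class cannot simply land on the other platform unchanged, which is the back-end difficulty Cook is describing.

```yaml
# Two StorageClasses for equivalent block storage on different back ends.
# The provisioner field is cloud-specific, so moving a workload means
# remapping every class it depends on.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-block-aws
provisioner: ebs.csi.aws.com            # AWS EBS CSI driver
parameters:
  type: gp3
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-block-gcp
provisioner: pd.csi.storage.gke.io      # GCE Persistent Disk CSI driver
parameters:
  type: pd-ssd
```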

Another difference from RHACM is that KCP handles workload dispersal across clusters. A user might have 15 pods for a web application, and if a cluster goes down, KCP will rebalance the workload over the other clusters. Cook describes it as the “next iteration” of RHACM.

Red Hat this year described the KCP Project as “under heavy development, which means that every day it is evolving and things are changing rapidly.” Cook says there has been a lot of change just between the 0.3 and 0.8 versions.

The second effort is the FetchIt Project, which Red Hat wrote about last month, noting that GitOps tools like ArgoCD and RHACM are primarily focused on Kubernetes deployments. However, there are lightweight environments – many at the edge – where Kubernetes isn’t required. That’s where FetchIt steps in: a lightweight GitOps tool for deploying and managing containers through Podman, using the same files deployed in Kubernetes clusters.
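That “same files” claim leans on Podman’s existing ability to run Kubernetes YAML directly via `podman play kube`. A minimal pod manifest like the one below deploys unchanged with `kubectl apply -f` on a cluster or `podman play kube` on a single edge host, and this is the kind of file FetchIt would track in git.

```yaml
# A minimal Kubernetes Pod manifest. The same file works on a cluster
# (kubectl apply -f pod.yaml) or a lone edge box (podman play kube pod.yaml).
apiVersion: v1
kind: Pod
metadata:
  name: edge-web
  labels:
    app: edge-web
spec:
  containers:
    - name: web
      image: docker.io/library/nginx:1.23
      ports:
        - containerPort: 80
```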

“What it really does is it [adds] capabilities to Podman,” Cook says. “Right now with Podman, you either have to hand it off to Ansible or you have to have an administrator run it. There’s just not a really seamless way of dealing with Podman.”

To run FetchIt, users need a config file, Podman, and the appropriate Podman socket – a user socket or a root socket – running. Red Hat recommends two ways to run it: as a system service or directly within a container, adding that in either case, FetchIt itself will be running in a container.
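As a hedged sketch of what the in-container option might look like: the image location and config paths below are assumptions, not FetchIt’s documented layout, but the essential move – mounting the host’s rootless Podman socket into the container so the tool can manage sibling containers – is the part the socket requirement above points at.

```yaml
# A sketch of running FetchIt itself via `podman play kube`. The image name
# and mount paths are assumed for illustration; the key detail is giving the
# container access to the host's Podman user socket.
apiVersion: v1
kind: Pod
metadata:
  name: fetchit
spec:
  containers:
    - name: fetchit
      image: quay.io/fetchit/fetchit:latest       # assumed image location
      volumeMounts:
        - name: podman-socket
          mountPath: /run/podman/podman.sock
        - name: fetchit-config
          mountPath: /opt/mount                   # assumed config mount point
  volumes:
    - name: podman-socket
      hostPath:
        path: /run/user/1000/podman/podman.sock   # rootless user socket
    - name: fetchit-config
      hostPath:
        path: /home/core/.fetchit                 # assumed config directory
```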

The challenge is creating a technology that can address the highly disparate environments found at the rapidly expanding edge.

“There’s still always going to be complex differences between an edge at a Walmart and an edge at my distribution center,” he says. “Sometimes that one-size-fits-all doesn’t really work and trying to figure out how we can be that middle ground to solve both of those locations with the same tech, allowing us to build one thing and it supports everybody, is one of the challenges we’re facing.”

The two projects are at different stages of maturity. The KCP Project is the more established, with an existing community around it and a handful of demos available to members. New integration points are coming early this winter and then around next year’s Red Hat Summit, Cook says.

“First, they’re going to try to strengthen the Red Hat Hybrid Cloud story, providing a way for a user to interact with those resources,” he says. “Then around Summit timeframe, we would hope to see that management of workloads across many clusters.”

The FetchIt Project is developing within Red Hat’s Emerging Technologies group. There are three or so engineers working on it and the company is “just trying to see if there is a story for productizing it, what that looks like, does it fit any customers’ needs today,” he says. “We’re just trying to see that next step with it.”
