A Private Rackspace Still Embodies The Public Cloud

One cloud was never going to be enough, no matter how much Amazon Web Services wants it to be otherwise. It is an increasingly multicloud world, and enterprises want to know that they can run their applications and services on any of the major public clouds as well as on virtualized or containerized private infrastructure that mimics those clouds. No one wants vendor lock-in, but no one wants to give up the ability to choose the best service available for a particular need, either. The rise of Docker containers and the Kubernetes orchestrator to manage them is making the move to the cloud – and across clouds – a bit easier.

Since being bought by investment firm Apollo Global Management in 2016 for $4.3 billion and going private, managed hosting and cloud provider Rackspace has removed the word “Hosting” from its name and has also been aggressive in becoming a company whose products can run not only on its own cloud infrastructure but on other public clouds, such as AWS. Infrastructure continues to be important – the company still works with the Open Compute Project (OCP) and more recently expanded its bare metal capabilities – but Rackspace is putting a lot of effort and money into growing its software and services stack and ensuring it can run on multiple cloud platforms.

That push can be seen in three acquisitions the company has made over the past three years. In May 2017, Rackspace bought TriCore Solutions for managing enterprise applications from the likes of Oracle and SAP, and in September of that year acquired Datapipe, which provided managed services for public and private clouds as well as managed hosting environments. Last month, Rackspace bought RelationEdge for its capabilities in managing software-as-a-service (SaaS) applications like Salesforce.

All of the acquisitions are enabling Rackspace to expand its reach with larger enterprises and to move up the software stack, according to Joel Friedman, chief technology officer at Rackspace who came to the company through the Datapipe deal. The acquisitions also help the company in its work to grow its capabilities across multiple cloud platforms.

“Datapipe was an early adopter in embracing the cloud and saying, ‘We’re not going to be confined to only being an infrastructure provider. We’re a service provider,’” Friedman tells The Next Platform. “It’s about solving the customer’s problems, becoming relevant and an extension of their IT teams with best practices and architecture and incident response and security and application management and all of that. Now, if we’re not managing the architecture, that doesn’t diminish our capabilities. Rackspace has adopted that same capability. It has pivoted from exclusively its own OpenStack public cloud and is now embracing third-party public clouds as well. We have that same type of overlay capability where if you’re an AWS shop – and that could be the entire organization or it could be just an individual business unit that has developers with affinities to one particular cloud – it doesn’t do me any good to attempt to convince you away from using a particular platform. That is kind of moving against the market. It’s fighting gravity. Instead, how do I offer services on top of that platform? How do I make sure you’re using that platform to the best of its capability? How do we ensure that it’s designed for resiliency, that it has automation, that it has infrastructure as code – all of these other kinds of advanced, challenging things to implement? There’s a limited pool of talent within these organizations, and the ability to go to one managed service provider that can cut across all these capabilities, that’s where it’s compelling.”
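The “infrastructure as code” practice Friedman mentions boils down to one idea: infrastructure is described as declarative, versionable data, and an idempotent process reconciles what actually exists toward that description. The sketch below illustrates the principle in miniature; real tooling such as Terraform or CloudFormation works the same way, and all names here are illustrative rather than any vendor’s API.

```python
# Minimal illustration of infrastructure as code: the desired state is a
# plain, versionable data structure, and reconcile() computes the actions
# needed to move the actual state toward it. Running it again after the
# actions are applied would yield an empty plan (idempotence).

desired = {
    "servers": {"web": {"count": 2}, "db": {"count": 1}},
}

def reconcile(actual: dict, desired: dict) -> dict:
    """Return the create/destroy actions needed to reach `desired`."""
    actions = []
    for name, spec in desired["servers"].items():
        have = actual.get("servers", {}).get(name, {}).get("count", 0)
        want = spec["count"]
        if want > have:
            actions.append(("create", name, want - have))
        elif want < have:
            actions.append(("destroy", name, have - want))
    return {"actions": actions}

# One web server already exists, so the plan adds one web and one db.
plan = reconcile({"servers": {"web": {"count": 1}}}, desired)
```

Because the plan is computed rather than hand-typed, the same description can be applied repeatedly across environments, which is what makes the practice valuable to the multicloud teams Friedman describes.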

Businesses are going to continue to migrate to the cloud, but at their own speed, he says. Most have a mixture of modern and legacy technologies, from larger enterprises still trying to consolidate systems and applications gained through acquisitions to startups with smaller budgets that allow them to shift only parts of their operations. Rackspace’s goal is to offer individual products throughout its portfolio that customers can leverage on their own but that also can be brought together to work across multiple cloud platforms as those customers grow their cloud efforts.

For example, the company last month unveiled its new Kubernetes-as-a-service offering, which can be used not only on the Rackspace cloud but on other platforms as well. The service enables businesses to leverage containers for application development and delivery without having to bring the Kubernetes orchestration technology in-house. Rackspace in 2015 unveiled Carina, a service enabling customers to create managed clusters for running containers in the cloud. That service also gave Rackspace experience with containers and operating at scale, says Scott Crenshaw, executive vice president and general manager of private clouds at Rackspace.

The new Kubernetes-as-a-service is an enterprise-grade offering that has been in beta for six months and now is generally available on the Rackspace private cloud. It will expand soon to major public clouds and to bare metal servers in the cloud, Crenshaw says.
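Rackspace has not published the service’s interface here, but the appeal of any managed Kubernetes offering is that the customer supplies only a declarative workload description and leaves cluster operation to the provider. As a hedged sketch of what such a workload looks like, the snippet below builds a minimal Kubernetes Deployment manifest (the name and image are illustrative); since Kubernetes accepts JSON as well as YAML, the output could be piped straight to `kubectl apply -f -` against any conformant cluster, which is what makes the same description portable across clouds.

```python
import json

def make_deployment(name: str, image: str, replicas: int = 3) -> dict:
    """Build a minimal Kubernetes Deployment manifest as a dict."""
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": {"app": name}},
            "template": {
                "metadata": {"labels": {"app": name}},
                "spec": {"containers": [{"name": name, "image": image}]},
            },
        },
    }

# Illustrative workload: two replicas of a stock nginx image.
manifest = make_deployment("web", "nginx:1.25", replicas=2)
print(json.dumps(manifest, indent=2))
```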

In April, Rackspace rolled out its bare metal-as-a-service product, which is aimed at high-performance computing (HPC) workloads. It includes NVM-Express SSDs for I/O optimization and GPUs to accelerate application performance and address parallel computation workloads. It’s an expansion of Rackspace’s capabilities in this area over its OnMetal cloud service, which we talked about two years ago.

“I’m not saying we’re fully pivoting, but we have a new bare metal release that … doesn’t tie it to our OpenStack public cloud, that’s just pure REST API-driven around our existing footprint,” Friedman says. “That is launching next quarter. This is different from the prior release. The previous one was all OpenStack-driven, so everything was around software platforms. If we had a software-based firewall or another software-based load balancer with OnMetal, this particular release enables us to do a full hardware solution – so hardware firewalls, hardware load balancers. For those particular high-performance platforms or customers that want these full throughput capabilities on-demand, pay-as-you-go, we’ll be able to offer those as well.”

The same week, the company launched Rackspace Private Cloud Everywhere, powered by VMware, a private cloud-as-a-service designed to enable enterprises to bring cloud servers and storage into their on-premises datacenters or colocation facilities.

The CTO says Rackspace has been able to make these strategic shifts in large part because it is now a private company. Echoing comments by Dell Technologies founder and chief executive officer Michael Dell, who spent $24 billion in a high-profile move in 2013 to take his eponymous company private, Friedman says going private through the Apollo acquisition gave the company the space and time to make the dramatic shift in strategy at a time when Rackspace was struggling to compete with larger cloud providers like AWS and Microsoft Azure.

“We made a particular dollar figure investment internally that was pretty large about updating some of our systems, whether that be moving from internal phone systems to using AWS Connect, whether that be enhancing our portals or dashboards that support multicloud user experiences,” Friedman says. “All those different types of things require large investments. When you’re beholden to the Street, they want to see very particular ROI. Private investment does as well, but things are a lot looser if they’re looking at long-term strategic gains.”

Two years on, he says he doesn’t see major gaps in the company’s portfolio, but says the goal now is to make enhancements to the individual products as well as continue tying them together. In the meantime, enterprises will continue to adopt multicloud strategies, containerized services, and infrastructure as code. Longer-term predictions for the cloud are more difficult to make. It’s still unclear if computing will become a true utility.

“Until we get to a time when we see containerized platforms that bring their own PaaS with them, as in a container that can move, then it’s not truly portable between clouds while still getting that value, like a portable function-as-a-service,” Friedman says. “That’s kind of my bet, that eventually we’ll see functions-as-a-service that are portable for containerized use and then, perhaps, the differentiation between the underlying cloud players will be less relevant. [With PaaS as a containerized service], it doesn’t matter. Then you can bring your own PaaS, your own everything, and then truly it is just raw compute. That’s why I’m a little bit skeptical when you have organizations that talk about containers that can bounce between clouds. You can do it, if you design for it upfront, but what are the tradeoffs and is the technology ready to really make those tradeoffs?”
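One way to read Friedman’s “portable function-as-a-service” idea is to keep the business logic cloud-neutral and wrap it in thin per-platform shims, so the container that carries the function can run on any provider. The sketch below illustrates that separation; the shim names are illustrative, not any vendor’s actual API, though the `(event, context)` signature mirrors the AWS Lambda convention for Python handlers.

```python
def handle(event: dict) -> dict:
    """Cloud-agnostic business logic: operates only on plain dicts,
    with no dependency on any provider's SDK or runtime."""
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"hello, {name}"}

def aws_lambda_shim(event, context):
    # Entry point in the AWS Lambda handler style, delegating to the
    # portable core; the cloud-specific surface stays this thin.
    return handle(event)

def plain_http_shim(query_params: dict) -> dict:
    # Generic HTTP/container entry point, e.g. for a PaaS packaged
    # inside the same container image.
    return handle(dict(query_params))
```

The tradeoff Friedman flags is real: the portability comes from designing the core this way up front, and anything that leans on a provider-specific service (queues, identity, storage triggers) breaks the neat separation.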
