Hybrid Cloud Should Benefit You, Not Bezos

We have always been convinced, and remain so, that there is no way that the largest organizations in the world will move their computing to one of the big cloud builders. And ten years ago, when Amazon Web Services was still relatively small and yet growing fast enough to scare the heck out of those who sell IT infrastructure or make its components, the current chief executive officer of Amazon and the former head of its cloud division was fond of saying that “in the fullness of time” all workloads would move to the cloud.

One of the earliest references we can find for this statement is here, and we remember being at the November 2016 re:Invent conference – a press conference after the keynote, to be precise – and sitting right in front of Jassy in the front row and saying that, while that was an interesting statement, there was no way in hell this was going to happen. (We may have used more colorful language than that.) But that has been the party line from AWS since that time, until Jassy came on the call with Wall Street last week to go over the company’s overall financial results for the fourth quarter.

We quoted Jassy in full about the benefits of elasticity and how the AWS business was doing in our coverage of the year end AWS results, and we noticed a shift in attitude as well as a statistic that we do not believe to be true here in 2023. So we will cite that part again:

“I think it’s also useful to remember that 90 percent to 95 percent of the global IT spend remains on-premises,” said Jassy. “And if you believe that – that equation is going to shift and flip, I don’t think on-premises will ever go away – but I really do believe in the next ten to fifteen years that most of it will be in the cloud if we continue to have the best customer experience.”

So now Jassy believes that on premises IT infrastructure will not go away. Which seems more reasonable given data sovereignty issues, latency issues, cost issues, and just the desire by companies to control their own fates. Ya know, like the hyperscalers and cloud builders do. It’s funny how those who want you to give up your infrastructure and your code are the ones who never will. Do as I say, not as I do, we guess.

We don’t think cloud has peaked, and we definitely think that cloud has tremendous – dare we use this word? – utility. But we wonder about that cloud versus on premises percentage of datacenter compute, storage, networking, and software.

As we have said before, we think there are three different models that are evolving and we will see where the chips fall:

  • There are the infrastructure services and add-on software services from the major cloud builders like Amazon Web Services, Microsoft Azure, Google Cloud, IBM Cloud, Alibaba, and Tencent, as well as many smaller clouds and hosting providers who are getting more and more cloud-like, especially in their adoption of cloud-like subscription pricing.
  • Then there are co-location facilities, which host bought, leased, or utility-priced IT gear on behalf of organizations, allowing them to get out of having to build, maintain, and depreciate datacenters while keeping a lot more degrees of freedom – and which, importantly, have high-speed links into the cloud builders. Interestingly, more than a few cloud builders also use these companies, with Equinix, QTS, Digital Realty, CyrusOne, and GDS (in China) being the big ones.
  • There are on premises datacenters owned and operated by organizations, using equipment they buy, lease, or subscribe to under utility pricing.
  • AWS Outposts, private Azure Stack deployments, and Google Anthos are really an extension of the cloud builders down into co-los and on premises datacenters, and technically are a fourth deployment and pricing method for IT infrastructure. It is not clear if this is really being used except in corner cases. It is like a small version of AWS GovCloud, where AWS built a super-secure and isolated datacenter specifically for three-letter Federal government agencies in the United States. Either GovCloud is the first Outpost, or an Outpost is a very small, personal GovCloud.

The situation is very far from “cloud versus on premises.” It is more complicated than that. But just for fun, let us try to reckon how much of the global IT budget is actually being spent on cloud. We will have to mix and match some datasets.

According to Gartner, there was around $209 billion in IT spending for datacenter systems – servers, storage, switching, and operating systems for them – in 2022. That figure includes spending by hyperscalers (who are really SaaS vendors in a sense) and cloud builders for the gear in their own datacenters. And Synergy Research says that in 2022, hyperscalers and cloud builders spent $97 billion on datacenter hardware. This is a cost of production for the hyperscalers and clouds. So the rest of the IT market – enterprises of all sizes, governments, educational institutions, research centers, telecommunications providers, and such – only spent around $112 billion on IT gear. So it looks like the hyperscalers and clouds represent around 46.4 percent of datacenter systems spending, which sounds about right.

On top of this, according to Gartner, there is another $790 billion in enterprise software spending in 2022. So basic IT spending outside of clouds – and not including myriad tech support, systems integration, application management, hosting, and cloud services – is $902 billion. (If you want to be fair, you would add in the amortized cost of having maybe 10 million programmers on the payroll at these non-cloud and non-hyperscaler companies. It is hard to reckon that, but it might be around $1 trillion. Some of these applications run in the cloud and some on premises and some in co-los.)

Now, again, according to Synergy Research, companies spent $195 billion on IaaS and PaaS services in 2022, and another $229 billion for managed private cloud, enterprise SaaS, and content delivery networks. We think managed private cloud and content delivery networks are a relatively small part of that. Call it 70 percent for enterprise SaaS, or $160 billion.

So, the universe of total IT spending on datacenter hardware and enterprise software (SaaS or not) is a cool $999 billion according to Gartner, and the portion that organizations are spending on cloud capacity (in the broadest sense) is $195 billion plus $160 billion, or $355 billion. When we do that math, the cloud penetration is 35.5 percent. Not 5 percent or 10 percent.
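The back-of-the-envelope reckoning above fits in a few lines of Python. The dollar figures are the Gartner and Synergy Research estimates cited above; the 70 percent SaaS share of the managed/SaaS/CDN bucket is our own assumption, as noted.

```python
# Back-of-the-envelope cloud penetration reckoning, 2022 figures
# (billions of US dollars, from the Gartner and Synergy Research
# estimates cited in the text).
datacenter_systems = 209    # Gartner: servers, storage, switching, OSes
hyperscaler_hardware = 97   # Synergy: hyperscaler/cloud datacenter gear
enterprise_software = 790   # Gartner: enterprise software spending

iaas_paas = 195             # Synergy: IaaS and PaaS services
managed_saas_cdn = 229      # Synergy: managed private cloud, SaaS, CDN
saas_share = 0.70           # our assumption: SaaS slice of that bucket

# What everyone outside the hyperscalers and clouds spent on gear
non_cloud_hardware = datacenter_systems - hyperscaler_hardware   # 112

# Hyperscaler/cloud share of datacenter systems spending
cloud_hw_share = hyperscaler_hardware / datacenter_systems       # ~46.4%

# Total spending universe and the cloud slice of it
total_it = datacenter_systems + enterprise_software              # 999
enterprise_saas = round(saas_share * managed_saas_cdn)           # 160
cloud_spend = iaas_paas + enterprise_saas                        # 355
penetration = cloud_spend / total_it                             # ~35.5%

print(f"Cloud hardware share: {cloud_hw_share:.1%}")
print(f"Cloud spend: ${cloud_spend} billion of ${total_it} billion")
print(f"Cloud penetration: {penetration:.1%}")
```

Run it and you get the 46.4 percent hardware share and the 35.5 percent overall penetration quoted above – nowhere near the 5 percent to 10 percent figure.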

There is a portion of the remaining $644 billion up for grabs. But certainly not all of it, not based on the thinking we see out there among the IT customer base, which is plenty annoyed about the surprisingly high cost of cloud once you are into it, elasticity or not. There is a value to elasticity, but it is not an end unto itself. As we said in our comments earlier this week, the first $100 billion for AWS, which should happen in two years or so, is going to be a lot easier than the second $100 billion. And hence you see AWS moving up the stack selling software – its own software as well as that of competitors, for which it is getting a commission to run on its cloud.

Which brings us all the way to our point. How do you make it so that your IT organization wins and you don’t just end up in another sticky platform you can’t easily get off of when the discounts get thinner and thinner and the costs go up and up?

The answer is simple: Control your own platforms and your own code, control your own fate. You have to be like IT organizations of days gone by and more like hyperscalers and cloud builders themselves. It is expensive, but not as expensive as losing whatever competitive edges your own smart people will come up with over the decades. You cannot abdicate the value chain while AWS is trying to move up it. It is bad enough that you have to compete with AWS as it is.

Here is the idea. Way back when, as cloud was starting to take off, we used to joke that the last server in the corporate datacenter would be the LDAP or Active Directory server, the nexus through which all kinds of IaaS, PaaS, and SaaS services would be cross-connected. (It was funny to envision this giant four-socket X86 box sitting in the center of a raised-tile datacenter with a zillion wires connecting it to a massive router.)

Our thinking has evolved, as it must. If we were running IT operations somewhere today, we would absolutely use cloud services, mainly to run test/dev or to put new ideas (like how to train AI models and how to integrate AI inference into applications) through their paces. But once we figured out what we were doing, we would never deploy those applications on a “public” cloud. No way. We would, however, deploy utility-priced servers and storage in co-location facilities adjacent to the clouds, just in case we needed extra compute capacity or fast access to cloud software stacks. We would also keep as much storage as possible in these co-location sites and the bare minimum of storage in the cloud. You can put data in a cloud for free, but they take your kidney if you want to move it.
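To put a hypothetical number on that kidney: here is a minimal sketch of the egress math, assuming an illustrative list price of $0.09 per GB transferred out to the internet. That rate is our assumption for the example, not any particular provider's quote – real pricing is tiered and varies by provider, region, and destination.

```python
# Illustrative only: data egress cost at an assumed flat rate of
# $0.09 per GB moved out of a public cloud (real rates are tiered).
EGRESS_RATE_PER_GB = 0.09  # assumption for this sketch

def egress_cost(terabytes: float, rate_per_gb: float = EGRESS_RATE_PER_GB) -> float:
    """Dollars to move the given number of terabytes out of the cloud."""
    return terabytes * 1024 * rate_per_gb  # 1 TB = 1,024 GB

# Repatriating a 500 TB dataset once, at the assumed rate:
print(f"${egress_cost(500):,.0f}")  # prints $46,080
```

Storing the data was cheap or free on the way in; moving half a petabyte back out costs tens of thousands of dollars at this assumed rate, which is why we would keep the bulk of the storage in the co-lo in the first place.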

Also: You need multiple co-los, cross-connected, for high availability. And maybe you need to keep your storage and that LDAP/Active Directory server in a secure datacenter of your own, just in case the bit hits the fan. Replicate to your own facility if you want to be very safe. Consider it an online backup that can be used in a pinch and that is air-gapped against ransomware and hackers.

This is a kind of hybrid cloud that makes sense to us. One that uses substrates that can run across all of the clouds, on premises, and in co-los. Things like Red Hat OpenShift and HashiCorp HashiStack, or heaven help us even the full VMware stack with its Kubernetes layer on top, are expensive. Sure. But so is making Jeff Bezos the third richest man in the world for the next decade or two, which is only happening because of the profit margins of AWS.


4 Comments

  1. Thanks for the article, no comments on cloud market size. I disagree with pretty much every piece of advice you give about hybrid infrastructure deployment, but the message at the heart of your article does ring true: Don’t lose your core business advantage, your competitive edge, to anyone.

    Now, let’s dive deeper into your suggestions. First it should be noted that only a very large enterprise could stomach investments like you are proposing (hyperscaler, multiple co-los, and owned facilities) so anyone outside the largest ~500 companies in the world should stop reading.
    Next, we are led to believe this enterprise is very cost conscious based on these sneaky cloud bills; yet the alternative proposal is a “kitchen sink” approach where we get the worst cost profile of all worlds. We reduce our purchasing power in the cloud AND with OEM vendors, we have to maintain skill sets for all deployments (VMware/OpenShift does not exclude one from understanding AWS VPC/EC2/IAM), and we purchase very expensive licensing for OpenShift/HashiCorp/VMware (which, when deployed on AWS, means we are paying for two hypervisors). But you’re right about those cloud bills, because if you run compute in the hyperscaler and your storage in a co-lo… your data transfer charges WILL be insane. Thank goodness we have not lost our competitive advantage, which we haven’t found yet since everything mentioned so far has been purchased.
    Regarding dev/test in the cloud and production in your co-lo: no serious company would test its software on different infrastructure than what will be used in production; this is literally the oldest reason in the book for production issues.
    I’m tempted to not even address the LDAP/AD suggestion since everyone has already moved to hosted providers (Okta, Azure AD, etc.). But I’d love to hear more about this scenario where your hyperscalers are down, your co-lo is down, but everything is fine because LDAP/AD is running off a server under your desk.

    Finally, back to your main point. Your business, whatever drives value for your customers/stakeholders, is what matters. If the setup above drives objective value for your business, then you should do it. However, if you find that maybe, just maybe, your customers don’t care how much you pay IBM and instead care more about the website being down all the time, then you might want to stop making IT decisions based on non-business drivers. [Opinions my own, not my employer’s]

    • Everyone outside the largest 500 companies should stop reading? Wow! You wouldn’t happen to be a cloud evangelist or work for one of the hyperscalers would you?

      The region I work in has little presence from global power players such as the top 500, yet a lot of companies and even SMB players are constantly being bitten by massive cloud bill shock here. So much so that cloud repatriation is rapidly growing here. The strategy of “Cloud First” for most organisations is just about gone, replaced with “Cloud Smart” strategies. One of the big banks here is actually in the final stages of moving away from one of the big three cloud providers due to cost and security reasons. What makes this interesting is that this bank was the provider’s big customer success story for the region, so other customers will no doubt have noticed and asked why.
      XaaS offerings from hardware vendors which use cloud-like consumption economics are also a big driving factor for cloud repatriation, as customers are finding this opex model cheaper while also getting more, or even total, environmental control. Cloud providers lack end-to-end transparency.

      I’m also interested in your comment around how “everyone” has moved to hosted IDaaS providers for authentication and I am very curious to know what industry verticals you have worked in.
      There are loads of application providers in a lot of industry sectors (e.g. health) which just won’t support their applications being used on servers in the cloud or even using IDaaS providers as the authoritative authentication source for their apps. If they won’t support that kind of model, it has to be on prem AD or some other LDAP provider model for them.

      Don’t get me wrong, I am not anti cloud, but prefer to be what I call cloud pragmatic. It’s another tool in the box – not utopia. According to Gartner predictions from years ago, the cloud versus on prem war was for the most part meant to be over by now. It’s very far from over, and it shouldn’t be a war. Cloud has brought great benefits to everyone and shaken up the industry, but it’s not the only solution.

      As you say opinions my own – derived from a long technical career.

  2. In this line of thinking, Sally Ward-Foxton reported today (Friday) on STMicroelectronics using Synopsys’s AI/ML DSO EDA tool (chip layout optimization) on Microsoft Azure cloud, exploiting elasticity as needed, without risk of impacting their on-premises projects. Kind of a hybrid cloud approach that benefits them, I guess. Meanwhile, for Neoverse V1 software development, one may be stuck with AWS Graviton3 (seeing how Apple M1/M2 may be non-standard, Qualcomm/Nuvia Oryon may be mired in legalese, and A64FX may be in limited supply). We need a broader range of Neoverse workstations and laptops at affordable price points (available for purchase)!

  3. Note: AI/ML EDA for chips was also covered by Jeff Burt, last August, in The Next Platform: “Using AI Chips To Design Better AI Chips”.
