Search engine giant Google has invented so much sophisticated and scalable infrastructure for gathering, processing, and storing information that you cannot blame it for wanting you to just consume what it has created as an abstracted platform service, just like the programmers at Google do.
But that is not how the world works, at least not yet.
Even as the company’s top brass talked at the Next 2016 conference in San Francisco about how its Cloud Platform was different from key competitors Amazon Web Services, Microsoft Azure, IBM SoftLayer, and Rackspace Hosting, the discussion always drifted towards services and away from virtualized infrastructure where companies can do their own thing. This may raise the hackles of those who, like Google, think they can bring differentiation to their IT organizations at the infrastructure layer and all the way up the stack to applications. But Google is committed to offering infrastructure, platform, and software services across the board, and to its credit, it has set about exposing the services it uses internally so that others can buy them with metered pricing.
If you want to go through – or have to go through – the evolutionary stages that Google itself went through to create a massively scaled, largely automated set of infrastructure that makes it relatively easy to deploy applications, you can do it.
The biggest message coming out of Next 2016 is that Google is absolutely committed to the public cloud, and that it intends to be a contender against AWS and Azure, which have a similar scale in terms of the raw infrastructure they deploy but which derive different levels of revenue from their cloud capacity and services. Last December, the company tapped Diane Greene, one of the co-founders of VMware, the server virtualization juggernaut that has built a tidy $6 billion business making X86 servers in the corporate datacenters of the world more efficient, to run the company’s enterprise business. That business includes Google Cloud Platform and a slew of applications, and interestingly, the role also puts her in charge of datacenters and their gear, with Urs Hölzle, senior vice president of technical infrastructure, reporting to Greene.
“We are serious about this business,” said Greene during her keynote address, reminding everyone that Google’s parent company, now called Alphabet, invested $9.9 billion in capital equipment in 2015, with the vast majority of it going to its datacenters and the gear inside of them.
How much of this is dedicated to the Cloud Platform public cloud is not divulged, but it has to be a fairly small portion of it. If that spending scales with revenues, then Alphabet’s overall revenues of $74.5 billion last year would account for the bulk of that infrastructure spending, with Cloud Platform probably generating (we estimate) somewhere around $900 million in revenues in 2015, though more than doubling annually, unlike Google overall. Google’s cloud infrastructure expenses are probably pretty small, maybe on the order of several hundred million dollars, perhaps as high as $1 billion, but are set to explode.
As part of this week’s campaign to tell the world how serious Google is about the public cloud, the company announced that it had expanded its facilities in Oregon and added a new datacenter in Tokyo, and that by the end of 2017 it would add another ten facilities – a mix of its own datacenters and those hosted in co-location facilities. Google opened a datacenter region in South Carolina last year, and has three other facilities in Council Bluffs, Iowa, St. Ghislain, Belgium, and Changhua, Taiwan. Each region has three or four zones. This capacity is accessible through 77 different network integration points across the world, which also hook into the other Google datacenters that are not running Cloud Platform workloads, and which hook users into B4, the Google wide area network that links its datacenters together.
But Google has come to realize, as have Amazon, Microsoft, and IBM, that regional laws and business practices often require datacenters to be located closer to the businesses that use them, and companies are also sensitive to latencies, wanting rented capacity closer to their users. Hence the investment in more datacenters over the coming two years.
Google did not say where it would be building these datacenters, but it stands to reason that there will be more geographic distribution than we have seen in the past. It is reasonable to expect Google to add capacity at existing facilities that currently only run its own workloads, as well as to add co-lo capacity where appropriate or politically expedient. Google operates 14 datacenter regions of its own today. Microsoft operates 22 Azure regions and will be adding five more this year. AWS has 12 regions with a total of 33 availability zones and is adding five more regions with a total of eleven more zones this year. All zones are not created equal within cloud providers or across them, of course, but it is safe to assume that a facility has many tens of thousands of servers.
Having capacity distributed around the globe is going to be important if Cloud Platform is to reach the level of AWS, which has over 1 million customers and which hit $7.88 billion in revenues. But Google has some technology tricks up its sleeve that it thinks, in the long run, will make its cloud more appealing to enterprise customers and startups alike. It is also willing to compete on price, which Google absolutely can do based on the vastness of its infrastructure and the volumes at which it buys servers, storage, and switching.
Hölzle said in his presentation that the custom machine type virtual machines that Cloud Platform launched last year allow the average customer to save about 19 percent on their compute bills versus having to buy predefined instance types. Also, because Cloud Platform has per-minute pricing rather than hourly pricing for compute infrastructure, customers can see significantly more savings compared to other cloud providers.
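To make the billing difference concrete, here is a minimal sketch of the arithmetic, assuming a hypothetical instance priced at $0.10 per hour (the rate is made up; the rounding behavior is the point, and the 10-minute minimum mirrors Google’s per-minute billing scheme at the time):

```python
import math

HOURLY_RATE = 0.10  # hypothetical rate in dollars per hour, for illustration only

def hourly_billed_cost(minutes_used):
    """Hourly billing rounds usage up to the next full hour."""
    hours_billed = math.ceil(minutes_used / 60)
    return hours_billed * HOURLY_RATE

def per_minute_billed_cost(minutes_used, minimum_minutes=10):
    """Per-minute billing charges only the minutes used,
    subject to a minimum charge (here, 10 minutes)."""
    billable_minutes = max(minutes_used, minimum_minutes)
    return billable_minutes * (HOURLY_RATE / 60)

# A batch job that runs for 70 minutes: hourly billing charges two
# full hours, while per-minute billing charges only the 70 minutes.
print(hourly_billed_cost(70))
print(per_minute_billed_cost(70))
```

For short-lived or oddly-sized workloads, the gap between the two schemes compounds quickly, which is why per-minute granularity matters more than the headline hourly rate suggests.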
“But cost is not the only thing,” Hölzle added. “When you are thinking about picking a cloud provider for the next decade, innovation and the quality of the underlying infrastructure is just as important. In fact, if you are picking that cloud provider for a decade, innovation might be the most important part. So it is time to look at what is next. Over the next five years, I think that we will see more change in computing and in cloud than we have seen in the last decade or two – literally an explosion in innovation.”
Hölzle said he is basing that idea on a well-known effect in biological systems, where once a basic system is in place it enables evolution to accelerate. (He did not name this phenomenon nor did he mention anything about punctuated equilibrium or mass extinction.) “We are entering a similarly explosive period in cloud computing innovation right now, and as more software gets woven into our daily lives and into your companies, you all face the same challenges that Google has faced for many years – mainly to create more functional, easy to use applications, operate them at scale, and operate them at scale cheaply, efficiently, and securely.”
The answer is to automate everything, which Google has largely done, and that is why it is banging the “No Ops” mantra today as the hype around DevOps – making developers responsible for not just creating, but deploying and maintaining applications in production – continues to grow. Hölzle said that developers spend way too much time administering their setups, and he predicted it would be easy to run “even ambitious applications” at scale. (Google knows a thing or two about this, as do Microsoft and Amazon, of course.) With App Engine, Dataflow, Kubernetes, and other services on the Google cloud, he said Google was well on the way to accomplishing this goal for its cloud customers, as it has long since done internally for its own software developers.
What Google is really after – and what it is selling to customers as their future because this is the experience inside of Google already – is something Hölzle is calling the “serverless architecture,” which means making the infrastructure absolutely invisible. In the past two decades, we have moved from physical machines in a co-location facility to virtual machines in a cloud datacenter, from a purchase order to an API call, which he said was a very profound change, but he noted that the basic building blocks of the virtual datacenter have not changed yet.
“It is still extraordinarily complex to orchestrate these building blocks together at scale. Developers have to think about things other than making their applications great – do we have enough servers, are they all patched, did we prepurchase too much capacity. In the virtualized datacenter world, this never gets easier. And actually it is a bit crazy that today’s cloud is based on all of the physical limitations that were created twenty years ago. It is like the virtual datacenter today still has a manual choke.”
Google was there ten years ago, but then it created software containers for Linux, the Borg container scheduler, and a slew of other technologies that are the underpinnings of its infrastructure stack today, which insulate developers from worrying about frameworks and capacity planning and let them focus on storing data and chewing on it for insight.
Which brings Google, and perhaps its cloud customers, full circle to the platform cloud that it always wanted to sell to enterprises in the first place. That was a hard sell in 2008, but it might be a whole lot easier in 2016, we think. And if you want raw virtual machines and containers, well, Google can expose those to you as well if you want to do it the hard way.