Here is how we know computing could eventually be a peer to energy, transportation, sustenance, and healthcare as a basic infrastructure need – and will be a bigger part of our lives in the future, if the hyperscalers and cloud builders have their way: the front-loading of enormous capital expenses.
Before we dive into the Google numbers for the final quarter of 2025, we wanted to level set with some comparative data, and spent a while combing the Internet with the assistance of the Google search engine and its Gemini adjunct – token token token token token. . . – to come up with some numbers. These numbers are for illustrative purposes, of course, and there is a tremendous amount of wiggle room in what you might get depending on what you count and what you don’t count in each sector’s revenue and capital expenses. And on how you ask the question and the mood of the AI stack.
We thought, before even submitting the prompts, that the energy sector was capital intensive, and it certainly is. Depending on the timing and the development of new energy sources, these companies can spend a large portion of their revenue on capex. But on average, throughout the energy production and distribution business, it looks like the capex intensity is something like 25 percent of revenues in the United States:
All of the numbers in this table are in bold red italics because heaven only knows how Gemini took data out of Google searches and made its estimates. We are suspicious of such round numbers, but we are really just trying to use Gemini as a thought experiment and to make a point. We built a table with the help of Google and Gemini, but we don’t know if it can be trusted down to the details. We think the data has the right shape to it, but this is just confirming our pre-existing bias that some infrastructure sectors are more capital intensive than others.
Welcome to the modern AI world, where the machines act like they know a lot, and depending on how you ask you can get different answers. And we are not sure how many Apples to Oracles comparisons are being made here. (Yes, that was a joke. Sort of.)
Anyway, to continue with this experiment. Both the healthcare and farm/food sectors are very people-intensive and also carry a lot of capital expenses, but their capex intensity is a lot lower than that of the transportation sector (more machines, fewer people) and the energy sector (a lot more machines still and fewer people by comparison).
But the capex spending by the Big Five hyperscalers and cloud builders – Amazon, Google, Meta Platforms, Microsoft, and Oracle – as a share of revenue makes energy look capex stingy by comparison. The fact that the hyperscalers and cloud builders are spending around half of their revenue on capex is indisputable. But as we said, we prefer to work with hard numbers that we have vetted, and building a table like the one above and verifying the underlying data and assumptions would probably take a day the old-fashioned way.
With that experiment done, let’s dig into Google’s final quarter of 2025 – the real numbers, the ground truth – and see what 2026 is going to look like for revenue and capex.
The thing to remember about Google is that it has been embedding AI functions into its core search and ads businesses for more than a decade, and it has been making its own TPU accelerators to support these functions for most of that time. The reason is precisely that it was too expensive, way back in the 2010s when AI was relatively young, to add voice translation to search using CPU inference or GPU inference. If even a portion of people had used voice search a few times a day, Google’s datacenters would have melted, and hence the Tensor Processing Unit, now in its seventh generation with the “Ironwood” devices, was born.
The Gemini model was trained on TPUs and most of the inference that Google performs through Gemini APIs is done on its vast TPU fleet. In Q3 2025, Google was processing tokens (for inference, we presume) at a rate of 7 billion tokens per minute; in Q4, that rate jumped by 43 percent to more than 10 billion tokens per minute. If you do the math, Google processed 917.3 trillion tokens in Q3 and 1,310.4 trillion tokens in Q4 for its “first party” applications. This new data revealed by Google chief executive officer Sundar Pichai does not include tokens processed for GenAI training, and it doesn’t look like it includes Google’s internal services use of Gemini because it does not map to the data Google’s Mark Lohmeyer showed off at the AI Hardware Summit last September. By our math at that time, we thought Google was processing around 1,460 trillion total tokens in August alone.
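For those who want to check our math, the per-minute rates scale up to the quarterly totals above if you assume a 91-day (13-week) quarter – a minimal sketch, with that quarter length being our assumption:

```python
# Back-of-the-envelope check of the quarterly token totals cited above,
# assuming a 91-day (13-week) quarter to convert per-minute rates.

MINUTES_PER_QUARTER = 91 * 24 * 60  # 131,040 minutes in a 13-week quarter

def quarterly_tokens(tokens_per_minute: float) -> float:
    """Scale a per-minute token rate up to a full quarter."""
    return tokens_per_minute * MINUTES_PER_QUARTER

q3_total = quarterly_tokens(7e9)    # 7 billion tokens/minute in Q3 2025
q4_total = quarterly_tokens(10e9)   # 10 billion tokens/minute in Q4 2025

print(f"Q3 2025: {q3_total / 1e12:,.1f} trillion tokens")   # → 917.3 trillion
print(f"Q4 2025: {q4_total / 1e12:,.1f} trillion tokens")   # → 1,310.4 trillion
print(f"Q4/Q3 growth: {q4_total / q3_total - 1:.0%}")       # → 43%
```

The totals land exactly on the figures in the text, which suggests Google (and we) are using a standardized 13-week quarter rather than calendar days.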
The point is, Google’s processing needs are growing fast because of actual use of GenAI. And usage of the Gemini 3 models – which are arguably the best in the world now for certain things, with Anthropic’s Claude variants being best at certain other things – is growing rapidly.
To cover that processing demand and address a revenue backlog that stands at $240 billion, Google has to buy an incredible amount of iron in 2026, and Anat Ashkenazi, chief financial officer at Google, laid out the plan.
“The investments we have been making in AI are already translating into strong performance across the business, as you have seen in our financial results,” Ashkenazi said on the call with Wall Street analysts. (We will get to those numbers in a second.) “Our successful execution, coupled with strong performance, reinforces our conviction to make the investments required to further capitalize on the AI opportunity. For the full year 2026, we expect capex to be in the range of $175 billion to $185 billion, with investments ramping over the course of the year. We are investing in AI compute capacity to support frontier model development by Google DeepMind, ongoing efforts to improve the user experience and drive higher advertiser ROI in Google services, and significant cloud customer demand as well as strategic investments in Other Bets.”
Ashkenazi also explained that the timing of capex spend and what Google gets for it would depend on component supplies and pricing, and the timing of payments is what causes the variability between $175 billion and $185 billion. Call it $180 billion at the midpoint, and that is nearly double the $91.45 billion that Google spent on capex in 2025, which was 1.74X that spent in 2024, which was 1.63X that spent in 2023, and so on back through the past decade.
If you look at Google’s revenue backlog growth against capex spending, you can see immediately that something has got to give:
The wonder is that Google has not spent more, frankly, given the spread of the two. But you can rest assured that the company will not spend a dime on infrastructure until it knows it can get it into a datacenter and knows someone is ready to rent it almost the second it is fired up.
That spread between backlog and capex is getting wider, and that may be an effect of longer deals for cloud capacity being on the books. (It would be interesting if Google disclosed such data.) No matter what, it is clear to us that for Google to meet its future processing commitments to itself, the model builders like Anthropic and OpenAI, and to its hundreds of thousands of enterprise AI customers, it is not only going to have to double capex, but it is going to have to get another 1.5X performance boost from software improvements and other efficiency gains. This is a tall order after already getting a 1.8X boost from software tweaks to the Gemini models in 2025.
The good news is that functional GenAI from the Gemini 3 model is feeding back into Google products and services, and it is driving revenues and usage, which in turn is paying for the increased capacity. Google has kept the AI horse in front of the application cart, which it has been able to do thanks to its vast and wildly profitable search and ads businesses. YouTube streaming is a solid business in its own right, too.
Only the big can afford to get further embiggened. . . .
We only care about those businesses inasmuch as they give Google the data on which to train models and the money to afford to be a cloud provider and a model builder as well as what is arguably the largest user of GenAI on the planet.
The dotted red line is meant to convey that we were estimating the operating losses for the Google Cloud business.
In the quarter, Google booked $113.83 billion in sales, up 18 percent year on year, with net income of $34.46 billion, up 29.8 percent. Despite spending $27.85 billion on capex, the company exited the quarter with $95.66 billion in cash and equivalents in the bank, which is about half the cash it wants to spend on capex in 2026.
Google Cloud, the company’s cloud service as the name suggests, had $17.66 billion in sales, up 47.8 percent, with an operating income of $5.31 billion, up by a factor of 2.54X compared to the year ago period. That operating income rate was 30.1 percent of Google Cloud revenues, which is the highest profitability level Google has ever seen in its cloud business and nearly double the rate of only a year ago.
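You can back out the year-ago figures from the growth rates Google disclosed – a rough sketch, with the back-calculation (rather than Google's reported year-ago numbers) being our own:

```python
# Google Cloud Q4 2025 profitability, and the year-ago quarter
# back-calculated from the growth rates cited above (figures in billions).

cloud_revenue = 17.66     # Q4 2025 Google Cloud revenue
cloud_op_income = 5.31    # Q4 2025 Google Cloud operating income

margin = cloud_op_income / cloud_revenue
print(f"Q4 2025 operating margin: {margin:.1%}")   # → 30.1%

# Year-ago quarter, implied by revenue up 47.8% and op income up 2.54X
yoy_revenue = cloud_revenue / 1.478     # ~$11.95 billion
yoy_op_income = cloud_op_income / 2.54  # ~$2.09 billion
yoy_margin = yoy_op_income / yoy_revenue
print(f"Q4 2024 operating margin: {yoy_margin:.1%}")   # → 17.5%
```

That implied year-ago margin of around 17.5 percent is what makes the 30.1 percent figure so striking: operating leverage is kicking in hard as the cloud business scales.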
While it has been tough for Microsoft and Amazon Web Services to drive huge revenues directly with GenAI, Meta Platforms certainly has shown a knack for it and so has Google. Even if Google’s customers need time to figure it out, Google’s own businesses have long since known how to use AI. We wonder how much internal backlog for AI hardware and services there is for Google’s own businesses, which have their own infrastructure and which do not officially use the Google Cloud.
As we have pointed out many times, all Google would have to do is a bookkeeping trick, calling all infrastructure Google Cloud and having its search, ads, and video businesses pay it for services and it would be the largest cloud in the world. But that would just be a stunt – even if it would be very funny.
Google seems intent on having its cloud grow in its own way and separate from the mothership, Alphabet, and those other quasi-independent businesses. The word on the street is somewhere between 30 percent and 50 percent top line growth for Google Cloud in 2026, and we wonder if it can’t be more.