Nvidia’s $2 Billion Investment In CoreWeave Is A Drop In A $250 Billion Bucket

With the hyperscalers and the cloud builders all working on their own CPU and AI XPU designs, it is no wonder that Nvidia has been championing the neoclouds that can’t afford to try to be everything to everyone – this is the very definition of enterprise computing – and that, frankly, are having trouble coming up with the trillions of dollars to cover the 150 gigawatts to more than 200 gigawatts of datacenter capacity that is estimated to be on the books between 2025 and 2030 for AI workloads.

And when we say trillions, we mean a lot of them. Depending on who you ask and what you count, a gigawatt of datacenter capacity costs somewhere between $45 billion and $60 billion, and if you do the min-max math on that, this works out to roughly somewhere between $7 trillion and $12 trillion in capital expenditures over the six year timeframe. Call it somewhere between $1 trillion and $2 trillion a year just for those doing AI model making and serving up inference for workloads.
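The min-max math above can be sketched in a few lines. The per-gigawatt cost range and the capacity estimates are the article's figures; everything else is straightforward multiplication.

```python
# Back-of-the-envelope capex math from the figures above.
# Assumptions: 150 GW to 200 GW of AI datacenter capacity built
# between 2025 and 2030, at $45B to $60B per gigawatt.

COST_PER_GW = (45e9, 60e9)   # dollars per gigawatt, low and high estimates
CAPACITY_GW = (150, 200)     # gigawatts of buildout, low and high estimates
YEARS = 6                    # 2025 through 2030

low = CAPACITY_GW[0] * COST_PER_GW[0]    # 150 GW at $45B each
high = CAPACITY_GW[1] * COST_PER_GW[1]   # 200 GW at $60B each

print(f"Total capex: ${low / 1e12:.2f}T to ${high / 1e12:.2f}T")
print(f"Per year:    ${low / YEARS / 1e12:.2f}T to ${high / YEARS / 1e12:.2f}T")
```

The low end lands at $6.75 trillion, which rounds up to the "roughly $7 trillion" in the text; the high end is an even $12 trillion.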

Against this, a $2 billion investment by Nvidia in CoreWeave is literally pocket change, even if it is a lifeline today for CoreWeave, which is going to have to come up with plenty of capital to meet its goal of having 5 gigawatts of capacity installed by 2030. If you do the same math on this, you are talking about CoreWeave and its partners having to come up with somewhere between $225 billion and $300 billion to meet that 5 gigawatts of AI capacity goal.
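Applying the same per-gigawatt range to CoreWeave's stated goal gives the figures quoted above:

```python
# The per-gigawatt cost range from above applied to CoreWeave's goal.
# Assumption: the $45B to $60B per gigawatt estimate holds for
# CoreWeave's buildout through 2030.

COST_PER_GW = (45e9, 60e9)  # dollars per gigawatt, low and high
TARGET_GW = 5               # CoreWeave's 2030 installed capacity goal

low = TARGET_GW * COST_PER_GW[0]
high = TARGET_GW * COST_PER_GW[1]

print(f"CoreWeave capex needed: ${low / 1e9:.0f}B to ${high / 1e9:.0f}B")
```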

Bite that one off a bit, and try to chew it. Those estimates straddle the annual budget for the Chinese military, which was around $250 billion officially in 2024. (Although the US Department of Defense says that China spent more like $330 billion to $450 billion.) Anyway, CoreWeave and its partners have to come up with some big numbers for capital expenses. Against these numbers, the many-times expanded capacity deal with model builder OpenAI, which stands at around $22.4 billion in capacity rentals out in 2029 and 2030, is a tiny fraction of the cost of building that 5 gigawatts of capacity. And even when you add in capacity deals with Google and Microsoft, which we think were also struck to fulfill commitments these two clouds made to support OpenAI, it is all still a drop in the bucket compared to the roughly $1 trillion in infrastructure capacity that OpenAI has committed to build or to rent from now through 2032.

Much has been made of the circular nature of the deals between Nvidia and CoreWeave, and of those between OpenAI and Anthropic and their backers Microsoft, Google, and Amazon Web Services well before the neoclouds got their footing. Well, these are absolutely and unequivocally round-tripping deals, where a vendor invests in a model builder or a cloud builder (in the case of Nvidia and CoreWeave) and the recipient in turn uses that investment to buy gear or services from that investor. There is nothing illegal about this, but we think the US Securities and Exchange Commission might want to require that companies disclose the revenue from such partnerships, which have been going on for decades, so you can separate business with strings attached from business with no strings attached.

The thing is, Nvidia is rich – very rich, very fast – the richest nouveau riche the world has seen in many decades. But it is not rich enough to buy its next five years of business, which in our model is $1.66 trillion from fiscal 2026 through fiscal 2030 (which is roughly analogous to calendar 2025 through 2029). Our model also shows Nvidia should bring at least $750 billion of that to the bottom line, even assuming intense competition from AMD and custom AI XPU devices. Against that, the $2 billion given to CoreWeave in exchange for 22.94 million newly issued Class A shares (which dilutes the holdings of all shareholders) is also pocket change. Nvidia now has about a 13 percent stake in CoreWeave, up from a 1.2 percent stake three years ago and a 7 percent stake at CoreWeave’s initial public offering in March 2025.

Last fall, Nvidia did a deal with CoreWeave: a $6.3 billion expansion of an existing master services agreement between the two companies. In this case, the GPU maker is guaranteeing it will buy up any cloud capacity that CoreWeave does not sell between now and 2032. (This is a little bit like Larry Ellison pledging his fortune so his son David can buy Warner Brothers.) Nvidia co-founder and chief executive officer Jensen Huang knows he will need systems to design future chips and train new AI models, and as we showed back in September, this $6.3 billion is equivalent to renting around 9,400 Nvidia GPUs each year between now and 2032. It’s really not that much money for Nvidia. And it is being offset by the value of Nvidia’s increasing stake in CoreWeave.
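You can reverse-engineer the rental rate implied by that 9,400 GPU figure. The assumptions here are ours: the $6.3 billion is spread evenly over roughly seven years (late 2025 through 2032), and the resulting per-GPU annual rate is derived from the article's numbers, not quoted by either company.

```python
# Reverse-engineering the "around 9,400 GPUs each year" equivalence.
# Assumptions: $6.3B spread evenly over ~7 years (2026 through 2032);
# the implied per-GPU rental rate is our derivation, not a quoted price.

COMMITMENT = 6.3e9      # total MSA backstop, dollars
YEARS = 7               # approximate term of the guarantee
GPUS_PER_YEAR = 9_400   # the article's equivalence

annual_spend = COMMITMENT / YEARS
implied_rate = annual_spend / GPUS_PER_YEAR  # dollars per GPU per year

print(f"Annual spend:        ${annual_spend / 1e9:.2f}B")
print(f"Implied rental rate: ${implied_rate:,.0f} per GPU-year")
```

That works out to an implied rate in the mid $90,000s per GPU per year, which is in the neighborhood of what high-end accelerator capacity rents for on an annualized basis.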

At the time that Nvidia announced this upgraded MSA with CoreWeave, its stake in the neocloud was worth $4.1 billion, and after today’s $2 billion investment, even after a 46.4 percent decline in CoreWeave’s stock since its peak on June 20, 2025, Nvidia’s 13 percent stake is worth around $4.72 billion. As CoreWeave deploys capacity and gets customers, Nvidia will very likely make back more than its bait in the rise of CoreWeave’s stock, and since it is the company’s second largest shareholder, Nvidia knows everything that is going on at CoreWeave. Nvidia gets all the benefits of ownership without having to spend the somewhere around $60 billion to $70 billion it would take to acquire CoreWeave, which has a market capitalization just shy of $49 billion as we go to press.

In theory, every time Nvidia invests in CoreWeave, its stake in CoreWeave grows because of the news of the investment. (The more you invest, the more you make, apparently. . . . ) And over time, if the market plays out as both Nvidia and CoreWeave expect, not only will Nvidia sell CoreWeave billions in GPU-accelerated systems, it will see its investment in CoreWeave grow as it puts pressure on the big clouds that are trying to do their own AI XPU things. But in the past six months, CoreWeave’s stock went down, not up, because everyone is concerned about an AI bubble and the need to come up with enormous amounts of cash to build out infrastructure that customers have not committed to use as yet.

One way of looking at this is that the Nvidia investments in CoreWeave are a kind of channel management, like in the old days of IT distribution, where companies like IBM and Hewlett Packard Enterprise would occasionally “stuff the channel” to make numbers a little better in a quarter. In the end, this always caught up with those who did it, because there was an inevitable bad quarter and the stock of the OEM supplier would end up taking a beating. Back then, companies did not build to order, so they had to build ahead of demand and get product out of their factories to book it as revenue. Nvidia is doing its part to line up a little money so CoreWeave can get others to have the confidence to finance its massive AI neocloud aspirations.

This is merely priming the CoreWeave pump, to stick with the water metaphors. Which is a lot less distasteful than that mouthful of gas you get when you are siphoning out of a car with an old piece of rubber hose. (And that is a metaphor for the tail end of the history of an architecture, perhaps.)

What Nvidia knows is that as long as there is a relative shortage of HBM memory, it can allocate GPUs as it sees fit. And it is smart to help neoclouds and clouds make money by renting out capacity for a lot more than it would cost enterprises to acquire systems with the same number of GPUs themselves in a market flooded with compute and memory. Aiming at the clouds only makes Nvidia more money so long as demand is greater than supply and so long as it is making investments in the neoclouds. And should the polarity reverse on HBM supplies, then the projected revenue streams from the clouds and neoclouds will look rosier than reality could turn out to be, and ditto if the rise of AI XPUs undercuts the pricing on Nvidia GPUs.

This is the real problem those running Nvidia, those buying Nvidia’s gear, or those investing in Nvidia stock need to worry about. Same with CoreWeave, which will have no choice but to host custom accelerators if that’s what customers like OpenAI demand.