
“If you build it, they will come,” as we all learned from watching Field of Dreams three and a half decades ago. But sometimes, the numbers are so big that you need a good backstop in case the pitches are a little wild or have too much heat. And that is why a deal just announced between Nvidia and neocloud CoreWeave has Jensen Huang playing catcher.
There is a lot of roundtripping between partners when it comes to AI processing, particularly between the big cloud builders, which have plenty of infrastructure and money and a need for AI models, and the big AI model builders, which have models that operate at scale and therefore with better intelligence, but which lack both infrastructure and money. So the clouds give the model builders cash, take stakes in them, and get access to models to build into their software stacks, and that cash comes right back to the clouds over the course of years as the model builders rent capacity to train their models. This raises cloud revenues as well, which raises stock valuations, since it looks like both AI and the cloud are crushing it.
This is why Microsoft enthusiastically invested $13 billion in OpenAI in 2022 and 2023, and why Amazon invested $4 billion in Anthropic so that Anthropic would turn around and port its Claude AI models to the homegrown AWS Trainium XPUs and pay for capacity to train those models and run inference workloads.
While Google was an early investor in Anthropic (putting in $300 million in late 2022), the company has stepped up with an additional $3 billion in investments to take a 10 percent stake in Anthropic (that stake’s valuation is as of January 2025), which is using the proceeds to – guess what? – train models on Google Cloud’s GPUs and TPUs. An exact value was not given for the deal inked earlier this year between OpenAI and Google, but OpenAI is probably committing to using billions of dollars in GPU and TPU capacity in exchange for giving Google rights to resell its models on the Google Cloud.
Interestingly, Google’s Gemini model is completely self-supporting. Google trains Gemini only on TPUs and it runs inference against it in its applications and across its API to the outside world only on TPUs.
Oracle is rumored to have just inked a deal north of $300 billion to provide the machinery that is part of OpenAI’s massive $500 billion Stargate project, which sent Oracle’s valuation soaring last week and briefly made Larry Ellison, Oracle’s co-founder with a 40 percent stake in that IT conglomerate and new cloud player, the richest man in the world.
CoreWeave, one of the neocloud upstarts that used to be a cryptocurrency miner, secured $15.9 billion in future GPU rentals from OpenAI back in March of this year, and is investing like crazy to get that capacity online. CoreWeave’s biggest customer last year was Microsoft, and we think most of that capacity was actually OpenAI workloads that were offloaded from Microsoft’s Azure cloud because Azure ran out of GPU capacity. Given the state of relations between Microsoft and OpenAI, and given OpenAI’s Stargate effort to become independent of Azure, you can see why Microsoft would turn around and rent capacity from CoreWeave rather than try to build out another datacenter for OpenAI to use.
The deal between Nvidia and CoreWeave is a new twist on all of this, and it was announced in an 8-K filing with the US Securities and Exchange Commission today because the deal has a potentially material impact – and certainly a positive one – for CoreWeave between now and 2032, when the agreement expires.
CoreWeave and Nvidia inked a master services agreement back in April 2023, which is when the company raised $221 million in Series B funding. That Series B was extended with another $200 million raise the following month, followed by a $642 million secondary sale in December 2023, then a $1.1 billion Series C round in May 2024 and a $650 million secondary sale in October 2024. Nvidia has kicked in money during this time, and is said to have a 7 percent stake in CoreWeave. Nvidia also backstopped CoreWeave’s IPO with a $250 million order for shares at $40 a pop as they came out on Wall Street. So this is not the first time Nvidia has been there for CoreWeave, which we think Nvidia is cultivating as a counterbalance to AWS, Microsoft, and Google.
CoreWeave has a market capitalization of just under $58.8 billion as we go to press, which makes Nvidia’s stake worth about $4.1 billion. Nvidia has made an 18.6X return on its initial CoreWeave investment, which is pretty good for a little more than two years. So, Nvidia can afford to be generous with CoreWeave and it really doesn’t hurt its books at all.
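Those figures are easy to sanity-check. Here is a quick sketch using only the numbers quoted above; the implied initial outlay is derived from the 18.6X multiple and is our inference, not a disclosed figure:

```python
# Sanity-check Nvidia's paper gain on its CoreWeave stake,
# using the figures quoted in the text.
MARKET_CAP = 58.8e9    # CoreWeave market capitalization, dollars
STAKE = 0.07           # Nvidia's reported 7 percent stake
RETURN_MULTIPLE = 18.6 # reported return on the initial investment

stake_value = MARKET_CAP * STAKE                 # roughly $4.1 billion
implied_outlay = stake_value / RETURN_MULTIPLE   # roughly $221 million put in
paper_gain = stake_value - implied_outlay        # roughly $3.9 billion

print(f"Stake value:  ${stake_value / 1e9:.2f} billion")
print(f"Implied cost: ${implied_outlay / 1e6:.0f} million")
print(f"Paper gain:   ${paper_gain / 1e9:.2f} billion")
```

That roughly $3.9 billion paper gain is also what makes the backstop discussed below so painless for Nvidia: the stock lift alone covers well over half of the guarantee.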
We reached out to CoreWeave press relations and investor relations to get the terms of the original MSA with Nvidia, but so far we have not heard back. The amended MSA now says that through 2032, Nvidia will guarantee $6.3 billion worth of GPU compute capacity spending if CoreWeave cannot find customers for it. That works out to an average of $900 million a year, and if you rented that capacity as GB200 NVL72 instances at $10.50 per “Blackwell” GPU per hour, it would cover roughly 9,785 GPUs running around the clock for a full year.
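The capacity math can be worked out with a few lines of code; note that the $10.50 per GPU-hour rate and the roughly seven-year term are assumptions used for this estimate, not disclosed deal terms:

```python
# Rough sizing of Nvidia's $6.3 billion backstop at CoreWeave.
# Assumptions: the commitment is spread evenly over about 7 years
# (through 2032), and GB200 NVL72 capacity rents at $10.50 per
# "Blackwell" GPU per hour, running around the clock.
TOTAL_COMMITMENT = 6.3e9   # dollars
YEARS = 7
RATE_PER_GPU_HOUR = 10.50  # dollars per GPU per hour
HOURS_PER_YEAR = 24 * 365  # 8,760 hours, full-time rental

annual_spend = TOTAL_COMMITMENT / YEARS
gpus_full_time = annual_spend / (RATE_PER_GPU_HOUR * HOURS_PER_YEAR)

print(f"Annual spend: ${annual_spend / 1e6:,.0f} million")
print(f"Full-time GPUs covered: {gpus_full_time:,.0f}")
```

A lower utilization rate or a discounted long-term rate would change the GPU count considerably, which is why this can only be a ballpark figure.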
Nvidia is one of the largest model builders and tuners on the planet, and it is also using AI to help with its chip design. Renting that many GPUs for a year probably dents its budget, but given the profits Nvidia is reaping from selling GPU clusters to everyone, this is not going to break the bank. It does, however, mean that Nvidia can use spare capacity at CoreWeave at a set price and not have to put more iron at its Santa Clara headquarters, in its Equinix facility in San Jose, or in AWS, where it is building the “Ceiba” supercomputer with that cloud builder for its own private use.
And more than half of that full capacity Nvidia might have to take at CoreWeave is already covered by the lift in its CoreWeave stock. In fact, we don’t think Nvidia will have to pay for much of that capacity at all, unless the wheels come completely off the GenAI boom.
Perhaps this “Field of GPUs” is the opening act of a Star Trek NextGen++ “Field of Failed Dreams” nightmare episode.
Like a microphone in front of a speaker, the constant 10x growth in GPU cluster size necessitated by WorldAI’s quest for the HAL 9000 consumes ever more of MegaTech’s infrastructure loop-back capital… endangering the financial markets of planet Cudah as a side effect.
Growing power requirements lead to a bidding War4Watts that eventually has all power cut from all but a few $MegaBillionaire compounds as it’s rerouted to the data centers that eventually pave over all of its “Northern Virginia”…
….
Our intrepid NCC-1701-D happens upon the dead husk of the society that once teemed across the various continents of Cudah (Ampere, Hopper, Blackwell, Rubin & Feynman).
Within a fairly short time, Data’s field work on Cudah helps him piece together the design of the quest for a synthetic neural network trillions of times more powerful than his own positronic brain… and he deduces that the quest was doomed to ultimately fail due not to hardware issues, but to its reliance on an AI software technology proven to be a dead end a century before: LLMs.
I agree that LLMs are a dead end. It will be interesting to see if Song-chun Zhu gets better results with his approach.