What’s the difference between Meta Platforms and OpenAI? The big one – and perhaps the most important one in the long run – is that when Meta Platforms does a deal with neocloud CoreWeave, it actually has a revenue stream from advertising on its various Web properties that it can pump back into AI investments, while OpenAI is still burning money much faster than it is making it.
And given OpenAI’s ambitions and aspirations to advance the state of the art in AI models, the spread between what it makes and what it needs will only get worse with time.
OpenAI is becoming more and more dependent on CoreWeave even as it works with Crusoe to build its Stargate I datacenter campus in Abilene, Texas, and stuff it with GPU-laden systems from Oracle based on the same designs that are deployed in the Oracle Cloud Infrastructure service.
Crusoe announced today that the first phase of the Stargate I datacenter campus is up and running. We have asked how many GPUs were fired up in Phase 1 but have not received an answer as yet.
The eight buildings of Stargate I are said to weigh in at a combined 1.2 gigawatts of power draw and to hold as many as 400,000 GPUs. We think it is best, at this point, to clarify that the number people are talking about for Stargate I counts GPU sockets, and to start counting the GPU chiplets within those sockets, as Nvidia itself will do with the “Vera-Rubin” hybrid CPU-GPU systems it will ship in the second half of next year – systems for which we believe OpenAI will be the first customer.
These systems, based on the 88-core “Vera” VC100 Arm server processor and the two-chiplet “Rubin” GR200 and GR300 GPU accelerators, will be known as the Nvidia VR200 NVL144 and VR300 NVL144 systems. The numbers in the system name tell you the type of Rubin GPU – the R200 is optimized for both training and inference, while the R300 is really aimed at inference since it has severely constrained FP64 performance – and the number after the NVL tells you how many GPU chiplets are in a shared memory rackscale compute node.
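The socket-versus-chiplet accounting is simple enough to sketch. Here is a minimal Python illustration, assuming two GPU chiplets per socket (the dual-die packaging used in the Blackwell and Rubin parts discussed above):

```python
# Socket versus chiplet counting, as described in the text.
# Assumption: two GPU chiplets per socket, as with Blackwell B200
# and the forthcoming Rubin packages.
CHIPLETS_PER_SOCKET = 2

def chiplet_count(sockets: int) -> int:
    """Convert a GPU socket count to a chiplet count."""
    return sockets * CHIPLETS_PER_SOCKET

# A 72-socket rackscale node is 144 chiplets -- hence "NVL144".
print(chiplet_count(72))        # 144
# Stargate I at 400,000 sockets would be 800,000 chiplets by the new counting.
print(chiplet_count(400_000))   # 800000
```

This is why the same physical rack can honestly be described as either an NVL72 or an NVL144, depending on which unit you count.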
The first two datacenters on the Stargate I campus are also said to have more than 200 megawatts of power between them, but it is not clear how this power is allocated. We dislike the asymmetry of different amounts of power being allocated to the eight buildings, but that is the reality of power allocations from electric companies or locally generated power, and of the ever-increasing power draw of GPU-based systems as time goes by. So it would be a mistake to divide 1,200 megawatts by eight and get 150 megawatts per datacenter across the campus. For all we know, the first datacenter is rated at only 100 megawatts and the second datacenter that Crusoe is building as part of the campus is rated at 120 megawatts, the two together comprising Phase 1.
What we do know is that this first datacenter is running rackscale systems based on the 72-core “Grace” Arm CPU and the “Blackwell” B200 GPU accelerator from Nvidia, and that Oracle will eventually fill it with its compute and networking and, we think, with third party storage for parts of the AI workflow. By the way, these systems should be called the GB200 NVL144 (by Nvidia chief executive officer Jensen Huang’s own admission) but are called the GB200 NVL72 because Nvidia wasn’t sure if it was going to count GPU sockets or GPU chiplets when it debuted the first Blackwell machines back in March 2024.
Anyway, with a GB200 NVL72 SuperPOD weighing in at just over 1 megawatt for an eight-rack configuration (that is, an average power draw of 132 kilowatts per rack), somewhere around 100 megawatts gets you somewhere around 100 SuperPODs, which is 800 racks, or 115,200 GPU chiplets. At 11.52 exaflops of aggregate FP4 compute per SuperPOD without sparsity support turned on, that works out to on the order of 1.15 zettaflops across the datacenter.
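A quick back-of-the-envelope check of that rack math, using only the figures cited above (132 kilowatts per GB200 NVL72 rack, eight racks per SuperPOD, 144 chiplets per rack, and 11.52 FP4 exaflops per SuperPOD) – the 100 megawatt budget is our illustrative assumption for a first-phase datacenter:

```python
# Back-of-the-envelope datacenter sizing from the figures in the text.
KW_PER_RACK = 132            # average GB200 NVL72 rack power draw
RACKS_PER_POD = 8            # racks per SuperPOD (~1 MW each)
CHIPLETS_PER_RACK = 144      # 72 GPU sockets x 2 chiplets
EFLOPS_FP4_PER_POD = 11.52   # FP4 exaflops per SuperPOD, no sparsity

budget_kw = 100_000          # assume a 100 megawatt datacenter

pods = budget_kw // (KW_PER_RACK * RACKS_PER_POD)  # whole SuperPODs
racks = pods * RACKS_PER_POD
chiplets = racks * CHIPLETS_PER_RACK
eflops = pods * EFLOPS_FP4_PER_POD

print(pods, racks, chiplets)  # 94 752 108288
print(round(eflops))          # ~1083 exaflops, call it 1.1 zettaflops
```

Note that integer division rounds down to whole SuperPODs, which is why the result lands a bit under the round numbers quoted in the text.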
We strongly suspect that the second datacenter at the Stargate I campus will use GB300 NVL72 systems based on “Blackwell Ultra” GPUs, which run a bit hotter but have 50 percent more FP4 oomph. They will burn a bit more power, too – we are guessing 10 percent to 20 percent more, fully loaded, even with power smoothing capacitors in the racks.
Late last week, among many announcements – including a staggered $100 billion investment by Nvidia to help pay for the $500 billion Stargate project – OpenAI expanded its datacenter capacity agreement with neocloud upstart CoreWeave, a former cryptocurrency miner turned AI datacenter provider that had some big deals with Microsoft (reasonably assumed to be for capacity used by OpenAI to train its models). OpenAI has been dealing directly with CoreWeave this year, announcing an agreement worth $11.9 billion for compute capacity back in March 2025, and the two tacked on another $4 billion of capacity in May 2025. Last week, OpenAI commissioned CoreWeave for an additional $6.5 billion in capacity, bringing the total contract value, running out to May 2031, to $22.4 billion. The CoreWeave contracts include GPU capacity as well as the use of the datacenters wrapped around them and the power that feeds them.
Hot on the heels of that deal with OpenAI, CoreWeave has inked a $14.2 billion contract with Meta Platforms for datacenter and GPU processing capacity that runs out to December 2031. This may sound like a lot of money until you consider that Meta Platforms co-founder and chief executive officer Mark Zuckerberg has said that the social networker and AI model builder plans to spend around $72 billion in 2025 on infrastructure for its AI work as well as the upkeep of its various platforms (Facebook, Instagram, WhatsApp, Messenger, and Threads).
That capex spending rate over seven years – and it may be higher, it may be lower – would amount to $504 billion, of which the $14.2 billion CoreWeave contract represents about 2.8 percent. And if current trends persist and you shave a little bit off the growth, Meta Platforms might have revenue of around $440 billion in 2031 alone and might have generated $2.1 trillion in revenues over those seven years.
There is, of course, no reason to believe that Meta Platforms can grow sales at 15 percent per year for the next seven years, or that it will do so profitably, or that it will have enough money to make such capex investments. But there is also not a reason to believe, given the AI bubble that we are in, that it won’t try and might succeed.
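For the curious, that projection is easy to reproduce. Here is a minimal sketch, assuming a roughly $165 billion revenue base for 2024 (the base-year figure is our assumption, not from the text) compounded at the 15 percent annual growth rate discussed above for the seven years 2025 through 2031:

```python
# Reproduce the revenue projection: 15 percent annual growth over
# seven years. The $165 billion 2024 base is an assumed starting point.
base_2024 = 165.0  # $ billions, assumption
growth = 1.15      # 15 percent per year

revenues = [base_2024 * growth ** year for year in range(1, 8)]

print(round(revenues[-1]))   # 2031 revenue, ~ $440 billion
print(round(sum(revenues)))  # seven-year total, ~ $2.1 trillion
```

Whether any of those compounded dollars materialize is, of course, the trillion-dollar question the article raises.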
Whatever Zuckerberg’s ambitions are, it is clear that Meta Platforms can fund them and that OpenAI and its partners (CoreWeave and Crusoe for datacenters, Oracle for datacenter iron) are increasingly relying on debt financing to get it done.
How much do you think Oracle can mortgage the application software and database businesses for to raise money? The company already has $91.3 billion of debt in the form of notes and other borrowings, and it has $14.1 billion in operating lease obligations. (It just raised $18 billion in debt last week.) And what kind of margin will Oracle get building all of this AI iron for OpenAI and what margins will Crusoe and CoreWeave get from OpenAI?


Amazon borrowed money for many years and now dominates the online retail market. If OpenAI can hold out for a similar length of time before going into monetisation mode, they may have similar success.
I see millions of students getting used to using AI for learning along with a similar percentage of professionals, scientists and successful individuals. Distinct from high-visibility corporate AI projects aimed at cost savings or boosting productivity, people using AI as a tool in their daily lives constitutes a near invisible demand currently being satisfied by free or nearly free services. This demand may become surprisingly inelastic after some time.
Said another way, Netflix did not need corporate customers to make a lot of money.
As has been pointed out in various articles in The Register, the actual costs of running LLMs are far in excess of what the general population is willing to spend, and without those millions of subscribers, the remaining few would not be able to afford the higher prices that would result. The costs Amazon incurred setting up and developing AWS (its major profit driver) pale in comparison to OpenAI and its gigantic losses and costs.
Eric –
Netflix does not need corporate customers to make money, but I would not say they make a LOT of money. Yes, ~$10B/yr of income on ~$40B of revenue is a lot, but not compared to the investment these AI projects are making. I suspect that AI for the consumer will struggle the same way streaming is now. With half a dozen players in the market and consumers frustrated with ensh*tification, the providers need to increase prices, but consumers are leaving, or jumping to other providers. People will pay for it, but how much will they pay? Will they use a lower performing AI if it’s cheaper?
We also may soon see if consumer spend on streaming is elastic if there is a real recession. AI might run right into significant consumer belt-tightening in the medium term.
Just about everything I learn about the AI ecosystem makes me think it’s a really cool idea that isn’t entirely hype, but it seems to require about ten times as much investment as the real added value would justify. Or said differently, it seems to provide about one tenth of the value its proponents keep promising.