
OpenAI Datacenters Follow The Money To Abu Dhabi

All things being equal, you probably would not build a 5 gigawatt datacenter on Earth in the desert near the Tropic of Cancer. But that is where a lot of salt domes built up under the ground, trapping that black gold we call oil, which in a modern economy is converted into bazillions of dollars.

This is precisely the amount of money that the next wave of AI will require, which is why we find OpenAI courting the United Arab Emirates and installing the second iteration of its Stargate project in a truly massive datacenter in Abu Dhabi.

As we reported last week when the Saudi Arabian sovereign wealth fund, called the Public Investment Fund, announced the formation of the Humain AI effort – something that will compete with OpenAI in developing models and commercializing them – and a plan to develop 500 megawatts of datacenter capacity in the kingdom, the Gulf States together have around $4 trillion to invest through their sovereign wealth funds, which sounds like a lot until you realize they cannot liquidate those assets all at once to invest in AI projects. For perspective, it is helpful to remember that the big four hyperscalers and cloud builders in the United States – Amazon, Microsoft, Google, and Meta Platforms – are going to spend around $325 billion on infrastructure capital expenses in 2025 alone, and based on current AI spending trends, probably something on the order of $1.6 trillion to $1.7 trillion over the next five years (2025 through 2029, inclusive). This is real money, coming from the cash they generate from their respective software, advertising, search, and cloud computing businesses.
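
As a rough cross-check on that five-year figure, here is the arithmetic under the simplifying assumption that 2025's spending run rate merely holds flat; any growth pushes the total toward the top of the range or beyond:

```python
# Cross-check on the five-year capex estimate, assuming 2025's roughly
# $325 billion run rate simply holds flat through 2029.
annual_capex_billion = 325
years = 5
print(annual_capex_billion * years)   # 1,625 -> within the $1.6T to $1.7T range
```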

So, yes, Saudi Arabia and the UAE are spending big bucks, and are spending like hyperscalers, but they are not spending more as far as we can tell.

The facility in Abu Dhabi will measure about 10 square miles, a little less than half the size of Manhattan (at 22.8 square miles) and more than fifty times the size of Vatican City. If you look at just the datacenter capacity of Loudoun and Prince William counties in northern Virginia, there is around 11.4 square miles of facilities (about 130 million square feet of datacenter floor). Add some space between the buildings for roads and parking and you might estimate the area used for datacenters at maybe 15 square miles.
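
For those who like to check the ratios, here is the back-of-envelope math; the square-mile figures for the campus, Manhattan, and the Ashburn area are the ones cited above, and the roughly 0.19 square mile figure for Vatican City is our own assumption:

```python
# Area comparisons for the UAE-US AI Campus, using the figures cited above
# plus an assumed ~0.19 square miles for Vatican City.
abu_dhabi_campus_sq_mi = 10.0
manhattan_sq_mi = 22.8
vatican_city_sq_mi = 0.19
ashburn_region_sq_mi = 15.0   # Loudoun + Prince William footprint with roads and parking

print(round(abu_dhabi_campus_sq_mi / manhattan_sq_mi, 2))       # ~0.44 Manhattans
print(round(abu_dhabi_campus_sq_mi / vatican_city_sq_mi))       # ~53 Vatican Cities
print(round(abu_dhabi_campus_sq_mi / ashburn_region_sq_mi, 2))  # ~0.67 Ashburns
```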

Call it a unit of 0.67 Ashburns. The US Department of Commerce is calling the overall facility in Abu Dhabi the UAE-US AI Campus, and it looks like Stargate UAE is just one of many systems that a number of companies will install in the 5 gigawatt facility. (It needs a better name.)

Group 42 Holding, the AI investment vehicle set up by the UAE several years ago to turn oil money into AI money, is steering the project for the site, and it will be operated in partnership with several different US companies. We presume Oracle will be one of those companies, and there is a chance that Microsoft, which has ties to G42 already, might be interested. (Google and Amazon Web Services tend to do their own thing, but if they are made a good offer to occupy some of the Abu Dhabi facility, they might not turn it down.) American companies will offer services and manage the infrastructure inside of the facility, and there are diversion guarantees to ensure that compute engine technologies whose distribution is restricted by the US Commerce Department cannot be used by or passed along to organizations in countries that are subject to those export controls.

The important thing is that the UAE wants to create a regional datacenter powerhouse and also knows that nearly half of the population of Earth is within 2,000 miles of Abu Dhabi, a distance that translates into a reasonable latency at the speed of light. (That is roughly 11 milliseconds one way in a vacuum and about 16 milliseconds over optical fiber.)
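
Here is a quick sketch of that latency math, assuming the textbook propagation speeds of light in a vacuum and in optical fiber (about 5 microseconds per kilometer in fiber), which are rules of thumb rather than figures from any announcement:

```python
# Rough one-way propagation delay over 2,000 miles, assuming ~300,000 km/s in a
# vacuum and ~200,000 km/s in optical fiber.
distance_km = 2000 * 1.609                    # ~3,219 km
vacuum_ms = distance_km / 300_000 * 1000      # ~10.7 ms
fiber_ms = distance_km / 200_000 * 1000       # ~16.1 ms
print(f"vacuum: {vacuum_ms:.1f} ms, fiber: {fiber_ms:.1f} ms")
```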

The Abu Dhabi site will use nuclear, solar, and gas power to give the equipment juice and to cool it, and will have a science park for people “driving advancements in AI innovation.”

OpenAI has committed to using 1 gigawatt of datacenter capacity at the Abu Dhabi facility, and said in its announcement that the first 200 megawatts of that capacity will be based on Nvidia’s “Blackwell” GB300 systems. We presume that OpenAI intends to use the GB300 NVL72 rackscale systems given the need to do the least expensive, highest performing inference it can find. Depending on what is in the racks and how they are connected, a GB300 rack should burn somewhere between 120 kilowatts and 140 kilowatts. At the high end of that power consumption range, 200 megawatts works out to just north of 100,000 Nvidia Blackwell B300 GPUs. Those GPUs alone probably cost around $5 billion, and the systems wrapped around them probably cost another $2.25 billion for CPUs, NVSwitch networks, cables, racks, and storage.
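
For the curious, here is a sketch of how 200 megawatts turns into that GPU count; the rack power, GPUs per rack, and $5 billion figures are the ones above, and the implied per-GPU price is just the division, not a quoted price:

```python
# How 200 megawatts becomes "just north of 100,000" GPUs, using the high end
# of the 120 kW - 140 kW per rack range and 72 GPUs per GB300 NVL72 rack.
capacity_mw = 200
rack_kw = 140
gpus_per_rack = 72

racks = capacity_mw * 1000 / rack_kw          # ~1,430 racks
gpus = racks * gpus_per_rack                  # ~103,000 GPUs
print(f"{racks:,.0f} racks, {gpus:,.0f} GPUs")
print(f"implied GPU price: ${5e9 / gpus:,.0f}")   # ~$48,600 each
```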

So how much oomph is that 200 megawatts? At FP4 precision on the tensor cores of the Blackwell B300, a GB300 NVL72 rack has 1.4 exaflops with sparsity support on. It is 720 petaflops with it off, and that is also the performance for FP8 with sparsity; FP16 performance is half that again. At 1,400 racks for around 200 megawatts, you get around 2 zettaflops at FP4 precision. With the full 1 gigawatt that Stargate UAE expects to utilize, and using GB300s alone for all of it (which OpenAI will not do), you would be talking about 7,000 racks at a cost of around $50 billion and delivering around 10 zettaflops. With future generations of Nvidia rackscale iron, the performance will go up faster than the cost but maybe not faster than the power consumption, so adjust that as you see fit in the 1 gigawatt thermal envelope. What gets deployed in the remaining four-fifths of the OpenAI slice of the massive Abu Dhabi datacenter will depend on what is available when it needs to roll out.
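
The zettaflops arithmetic, using the per-rack FP4 throughput cited above, works out like this:

```python
# FP4 throughput for the 200 megawatt tranche and a hypothetical all-GB300
# 1 gigawatt build, at 1.4 exaflops per GB300 NVL72 rack with sparsity on.
fp4_exaflops_per_rack = 1.4
racks_200mw = 1_400
racks_1gw = 7_000

print(racks_200mw * fp4_exaflops_per_rack / 1000)  # ~2 zettaflops
print(racks_1gw * fp4_exaflops_per_rack / 1000)    # ~9.8 zettaflops, call it 10
```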

It is reasonable to wonder what the UAE gets out of these deals. At the very least, it gets money for datacenter rent, power, and cooling. And we strongly suspect that there are provisions for UAE researchers – and specifically those at G42 – to make use of OpenAI models on the machinery in the datacenters.

But perhaps more than anything else, possession is nine-tenths of the law. Should the geopolitical situation destabilize, the UAE will eventually have a facility crammed with somewhere on the order of 2.5 million AI accelerators worth somewhere well north of $100 billion, with infrastructure that probably, in total, costs twice that.
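
One way to land on an estimate like 2.5 million accelerators is to scale the 200 megawatt GPU density up to the full 5 gigawatts; this assumes GB300-class density for the whole campus, which is only a rough proxy:

```python
# Scale the ~103,000 GPUs per 200 MW ratio up to the full 5 gigawatt campus.
gpus_per_mw = 103_000 / 200      # ~515 GPUs per megawatt of rack power
campus_mw = 5_000
print(f"{gpus_per_mw * campus_mw:,.0f} accelerators")   # ~2.6 million
```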

All of the companies in the US that offer AI infrastructure or cloud services wrapped around it want a piece of this action, to be sure. (We will see about Google and AWS.) If there is $200 billion in investments in the datacenter infrastructure and systems in the UAE-US AI Campus, then at the rates the cloud builders charge today, there is somewhere around $700 billion in rentals over four years at the high-end prices that AWS and Microsoft Azure charge, and maybe somewhere closer to $400 billion at the prices the neoclouds charge. Doubling your money in four years is a pretty good deal.
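
As a sanity check on those rental figures, here are the per-GPU-hour rates they imply; the rental totals come from the paragraph above, and the implied rates assume four years of full utilization, which no cloud actually achieves:

```python
# Back out the per-GPU-hour rate implied by the four-year rental estimates,
# assuming ~2.5 million GPUs rented around the clock.
gpus = 2_500_000
hours = 4 * 365 * 24

for label, total in (("hyperscaler", 700e9), ("neocloud", 400e9)):
    rate = total / (gpus * hours)
    print(f"{label}: ~${rate:.2f} per GPU-hour implied")   # ~$8.00 and ~$4.57
```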

It is just not clear who is paying and who is collecting in this cluster of partnerships. But rest assured: Just because we don’t know does not mean it has not been hammered out, down to the penny.

OpenAI is spreading out its risks by doing some of its processing in the UAE. The company currently uses Microsoft Azure and, we think via Microsoft, CoreWeave for a lot of its processing these days, but it is building its own 1.2 gigawatt Stargate facility in Abilene, Texas, with the help of Crusoe Energy, at a cost of $15 billion. That facility will have eight datacenters, each expected to be crammed with 50,000 Blackwell B200 GPUs in rackscale GB200 NVL72 systems, for a total of 400,000 GPUs. This week, Crusoe secured $11.6 billion in debt and equity financing to pay for the project, which will see 200 megawatts of capacity (two datacenters) fired up before June.
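
For reference, here is the Abilene arithmetic; the gigawatt, building, and per-building GPU figures are from the paragraph above, and 72 GPUs per GB200 NVL72 rack is Nvidia's published configuration:

```python
# The Abilene Stargate build in numbers.
datacenters = 8
gpus_per_dc = 50_000
gpus_per_rack = 72

print(datacenters * gpus_per_dc)              # 400,000 GPUs in total
print(round(gpus_per_dc / gpus_per_rack))     # ~694 NVL72 racks per building
print(1_200 / datacenters)                    # 150 MW of capacity per building
```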
