
Some More Game Theory, This Time On The AMD-Meta Platforms Deal


It may seem like you are having flashbacks, but you are not. The deal that AMD has just announced with Meta Platforms is extremely similar to the one it inked back in October with model builder OpenAI, right down to the 6 gigawatts of total datacenter capacity for compute, storage, and networking and the same warrant for 160 million AMD shares to sweeten the deal.

Eight more deals like this, and ten companies would own 100 percent of AMD within five years. (No, this is not going to happen, and yes, that was a joke. We hope it is, anyway. . . .) But seriously, OpenAI and Meta Platforms can sell their shares after they are converted from the warrants (after reaching milestones in terms of technology installed and AMD share price driven by the success of those installations), so OpenAI and Meta Platforms will have more money to spend on GPUs!

Crazy new world we live in, ain’t it?

Anyway, one big difference between the Meta Platforms deal and the OpenAI deal is that we are pretty sure Meta Platforms, which also has aspirations to be one of the top AI model builders in the world, will actually have the money to follow through on its hardware purchase commitments. People are still trying to figure out how OpenAI is going to come up with hundreds of billions of dollars in cash when it only generates tens of billions of dollars a year in revenues. We think it will beg, borrow, and stop just short of stealing to figure out some way to buy an enormous amount of datacenter capacity.

This deal AMD has inked with Meta Platforms is also similar in magnitude, and nearly in scope, to the one that the social networker signed with AI hardware juggernaut Nvidia last week. Nvidia was not terribly specific about the money Meta Platforms was spending, but said that the company would be buying access to millions of “Blackwell” and “Rubin” GPU accelerators. The fine print, as we pointed out and as many people missed, is that some of this would be for cloud capacity, not on premises gear. Our best guess was that the Nvidia deal was for 2 million to 3 million GPUs, plus lots of “Vera” Arm server CPUs both inside the AI systems and outside of them, as well as NVSwitch scale up networks for the GPUs and Spectrum-X networks for scale out clustering. That might represent somewhere between $110 billion and $167 billion in revenue for GB300 NVL72 equivalents alone. (The Vera CPU clusters would be above and beyond this, but probably not enough to move the needle.)

How much of this investment by Meta Platforms was already on the books at the clouds and neoclouds over what we expect to be the next four or five years is unknown. Neither company was precise on the term of the deal, except to say that it was a “multiyear, multigenerational strategic partnership.”

AMD and Meta Platforms are being a little more precise, saying that the deal runs for five years, starting in the second half of 2026, when the first 1 gigawatt of systems based on a custom MI450 GPU accelerator and the “Helios” Open Rack Wide v3 rackscale system, which was co-designed with Meta Platforms, is deployed.

Depending on the accelerator used, 1 gigawatt of capacity is reckoned to be somewhere between 500,000 and 600,000 GPUs. Take the midpoint of that, and you are talking 3.3 million MI400 series equivalents across the 6 gigawatts installed over five years. At an average price of $35,000, that is $115.5 billion just for the GPUs, which averages $23.1 billion a year and is consistent with the “double digit billions per gigawatt” that AMD chief executive officer Lisa Su cited on the conference call with Wall Street. Add in the cost of the rack plus networking (some from AMD, such as the Pensando DPUs) and storage, and you are probably talking $35 billion per gigawatt for the iron, with the rest going into facilities, power, and cooling.
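The back-of-the-envelope math above can be sketched in a few lines, using only the figures cited in this story (the $35,000 average GPU price is our assumption, as stated):

```python
# Rough sizing of the AMD-Meta deal, using the article's own figures.
gpus_per_gw = (500_000 + 600_000) // 2   # midpoint of the 500K to 600K GPUs per gigawatt range
total_gpus = gpus_per_gw * 6             # 6 gigawatts installed over five years
avg_price = 35_000                       # assumed average MI400 series price, in dollars
gpu_revenue = total_gpus * avg_price     # GPU revenue alone over the deal
per_year = gpu_revenue / 5               # averaged across the five-year term

# total_gpus   -> 3,300,000 MI400 series equivalents
# gpu_revenue  -> $115.5 billion for GPUs alone
# per_year     -> $23.1 billion a year
```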

On the call, Su said that Meta Platforms was an early adopter of the “Antares” MI300X and MI350X GPUs from AMD, and that even without such a deal with the social networker she believed the “Altair” MI450 series “would have done well” at Meta Platforms.

“But what we are looking to do is do something transformational,” Su explained. “And when you talk about gigawatt-scale deployment and six gigawatts over five years, that is transformational in terms of where we see our business. And in addition to that, they are at the forefront of what is happening with models. They are optimizing workloads for their future, and we are optimized alongside with them.”

Which brings up another unique part of this deal. Meta Platforms is getting semicustom MI450 series GPUs from AMD as part of the deal, starting with the initial shipments of Helios racks in the second half of 2026. This is the first custom part of the MI400 series generation; Lawrence Livermore National Laboratory got a hybrid CPU-GPU part, the “Antares-A” MI300A with six GPU chiplets and three CPU chiplets, during the MI300 series generation. There could be other customers who get custom MI400 series parts, which is possible thanks to the chiplet design, provided they have significant enough volumes to warrant manufacturing a special part.

The exact nature of this custom MI450X part that Meta Platforms has commissioned is unclear, but Jean Hu, AMD’s chief financial officer, said that this custom part did not require any additional tapeout during the MI400 cycle. We also know that the custom MI450X is tuned for Meta Platforms’ own inference workloads.

So it could have more or less HBM stacked memory than the standard parts, or higher or lower clock speeds on the GPUs, depending on whether Meta Platforms is optimizing for cost per performance per watt or just trying to drive the absolute best performance. If you wanted a better balance of HBM capacity and bandwidth against the compute inherent in a socket, you might actually gear down the number of GPU chiplets, run them at slightly higher clocks, and ramp up the amount and speed of the HBM memory to get more capacity and bandwidth per unit of compute while also lowering the thermals on compute.

There are many different levers to pull – especially if AMD is separating out vector and tensor cores onto different tiles, as we hope it is. If that is the case, then Meta Platforms could, for instance, dial up the tensor cores and dial down the vector cores if that helped its inference workloads, and still stay in the MI450 socket.

Meta Platforms is also an early and big adopter of the impending “Venice” Zen 6 Epyc 9006 CPUs, and will also be scooping up lots of the future “Verano” Zen 7 Epyc 9007 CPUs. These CPUs will be used in the Helios AI racks, of course, but they will also be deployed to run more generic applications at Meta Platforms, supporting Facebook, Instagram, and other services.

The first 1 gigawatt of capacity has been committed, and the remaining five will be put under contract successively between now and 2030. That gives AMD 2 gigawatts of commitments from OpenAI and Meta Platforms alone, which means it can sign deals with its suppliers to fulfill those orders without much risk. Each new signing of the next batch of gigawatts – it averages 1.25 gigawatts a year from 2027 through 2030 – gives it the confidence to do the manufacturing for each batch.

As we pointed out back in our OpenAI deal analysis, if you assume those warrants become available more or less linearly and that AMD can move its stock fairly linearly to $600 by 2030, then the value of that 160 million shares for both OpenAI and Meta Platforms would be somewhere around $69 billion by the end of the period of these contracts. (The details of this stock deal are in an 8-K filed by AMD with the US Securities and Exchange Commission.) That is somewhere around 2 gigawatts of GPU system capacity right there if AMD is undercutting Nvidia slightly on AI compute engines and racks, which we believe will be the case. It amounts to a discount on the hardware, paid for with stock funny money instead of real cash. Getting AMD shares to more than triple in five years might be a challenge, but it is certainly doable.
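For the curious, here is one way to reproduce that roughly $69 billion figure under the linear assumptions stated above. The $260 starting share price is our hypothetical placeholder for where AMD stock sits at signing, not a figure from the deal:

```python
# Rough value of the 160M-share warrant under linear vesting and a
# linear ramp in AMD's share price to the $600 target by 2030.
shares = 160_000_000     # warrant shares per partner, per AMD's 8-K
price_now = 260.0        # hypothetical AMD share price at signing (assumption)
price_2030 = 600.0       # the end-of-decade share price target

# If tranches vest evenly while the price climbs evenly, the average
# realized price is simply the midpoint of the start and end prices.
avg_price = (price_now + price_2030) / 2
value = shares * avg_price   # roughly $69 billion under these assumptions
```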

Here is the neat bit: With this deal, AMD could have around 40 percent revenue share for AI accelerators at Meta Platforms, compared to around 50 percent for Nvidia and 10 percent for the company’s own MTIA devices, plus maybe some Google TPUs should that rumored deal come to pass. Together, these accelerators would represent a little more than half ($327 billion) of the $600 billion in datacenter investments that Meta Platforms chief executive officer Mark Zuckerberg has committed the company to spend out to the end of the decade. That’s some rough math on the back of a drinks napkin, we realize, but so is budgeting out four years from now.
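Splitting that $327 billion by the shares cited above gives a feel for the dollar magnitudes involved; these per-vendor figures are implied by our estimates, not disclosed by any of the companies:

```python
# Implied accelerator spend at Meta Platforms by vendor, from the
# article's estimated revenue shares. Estimates, not disclosures.
accel_total = 327e9                          # estimated accelerator spend to 2030
capex_total = 600e9                          # Zuckerberg's committed datacenter spend
rev_share = {"AMD": 0.40, "Nvidia": 0.50, "MTIA/TPU": 0.10}

spend = {vendor: share * accel_total for vendor, share in rev_share.items()}
# AMD ~ $130.8B, Nvidia ~ $163.5B, MTIA/TPU ~ $32.7B

accel_fraction = accel_total / capex_total   # 0.545, "a little more than half"
```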

We shall see what really happens. But one thing for sure is what Lisa Su said at the end of the call: “We are making a big bet on Meta, and Meta is making a big bet on AMD.”