How Long Before Broadcom Makes More AI Compute Engines Than Nvidia?

Chip maker and enterprise software player Broadcom announced its financial results today for the final quarter of its fiscal 2024, which ended on November 3, and all we kept thinking about as chief executive officer Hock Tan went over the numbers was the question in the title above.

We have never believed that Nvidia could maintain its current – and enormous – share of AI training and inference revenues, even with the GenAI Boom that has brought it skyrocketing revenues and profits for the past two years. These kinds of sales and profits always bring intense competition, as happened to IBM, Sun Microsystems, and Intel in their turns in the datacenters of the world.

But Nvidia is not just seeing competition from GPU archrival AMD, which has gotten its datacenter GPU act together and can now compete on price, value, and speeds and feeds with whatever compute engine Nvidia forges. Nvidia’s biggest problem is that its biggest customers have IT expenditures massive enough that they can afford to compete with Nvidia and AMD and design their own XPUs for serial and parallel computing. And when they do, it is chip design and manufacturing houses Broadcom and Marvell, which have vast expertise running chippery through the foundries of Taiwan Semiconductor Manufacturing Co, that will benefit.

The question in the title might not be as absurd as it seems on first pass. And remember, we are always restricting our thinking to compute engines, networking, and storage in the datacenter. Not at the edge and not in clients. And we are also specifically talking about shipments of units as well as revenues. It is possible that even if Broadcom can outship Nvidia some years hence with datacenter XPUs, Nvidia will still make more revenues and a lot more profits.

In the call with Wall Street analysts going over the Q4 F2024 numbers, Hock Tan laid out some numbers that are going to make you go Hmmmmm.

In fiscal 2022, we reckon that Broadcom had $1.94 billion in revenues from AI semiconductors, including networking chips as well as custom compute engine chips designed by the hyperscalers and cloud builders and shepherded into being by Broadcom in partnership with TSMC. The consensus in the rumor mill is that Google, Meta Platforms, and ByteDance all have custom AI chips being handled by Broadcom, and that OpenAI and Apple have just formed partnerships with the company. Marvell is apparently doing the same shepherding for Amazon Web Services and Microsoft.

In fiscal 2023, Tan said that Broadcom had $3.8 billion in AI chip sales, nearly double that of the prior year. But in fiscal 2024, AI chip sales at Broadcom were up by a factor of 3.2X, or 220 percent, to $12.2 billion. This is but a taste of things to come, apparently.

Tan said that the serviceable addressable market (SAM) for AI compute and networking chips among its three key AI chip partners – again, we think these are Google, Meta Platforms, and ByteDance – was somewhere between $15 billion and $20 billion in 2024, which means that the $12.2 billion Broadcom actually took down represents between 60 percent and 80 percent of that SAM. But no doubt more than a few jaws dropped and eyebrows raised when Tan said that by 2027, these three voracious XPU and networking chip buyers would be building XPU clusters with 500,000 to 1 million compute engines each (not counting host CPUs or DPUs), and that together these three hyperscalers and cloud builders would drive somewhere between $60 billion and $90 billion in SAM for Broadcom to directly chase.
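
For those who like to check the arithmetic, here is a quick sketch in Python of that share-of-SAM math. The SAM range is Tan's; the division is ours:

    # Broadcom's share of its 2024 AI SAM, per the figures Tan gave on the call
    ai_revenue_2024 = 12.2e9          # Broadcom AI chip sales, fiscal 2024
    sam_low, sam_high = 15e9, 20e9    # Tan's 2024 SAM range for the three partners

    share_low = ai_revenue_2024 / sam_high   # about 61 percent
    share_high = ai_revenue_2024 / sam_low   # about 81 percent
    print(f"Share of SAM: {share_low:.0%} to {share_high:.0%}")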

Somewhere between 15 percent and 20 percent of that spending on silicon for AI will go to networking, said Tan, and the rest of the SAM that Broadcom is chasing is allocated for compute. By our math, assuming that the average compute engine will cost $15,000, that works out to somewhere between 3.2 million and 4.8 million units, and that is only somewhere between five and ten such massive AI training clusters at 500,000 to 1 million XPUs each.
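
Here is a minimal sketch of that unit math, with the caveat that the 20 percent networking share and the $15,000 average compute engine price are our assumptions, not Broadcom figures:

    # Tan's 2027 SAM range, with 15 to 20 percent of it going to networking;
    # we assume 20 percent here, and a $15,000 average compute engine price
    # (our assumption, not a Broadcom figure)
    sam_low, sam_high = 60e9, 90e9
    networking_share = 0.20
    avg_xpu_price = 15_000

    units_low = sam_low * (1 - networking_share) / avg_xpu_price    # ~3.2 million
    units_high = sam_high * (1 - networking_share) / avg_xpu_price  # ~4.8 million

    # At the top end, 4.8 million XPUs is roughly five clusters of 1 million
    # XPUs each, or roughly ten clusters of 500,000 XPUs each
    clusters_big = units_high / 1_000_000   # ~4.8
    clusters_small = units_high / 500_000   # ~9.6
    print(f"{units_low / 1e6:.1f}M to {units_high / 1e6:.1f}M XPUs, "
          f"or {clusters_big:.0f} to {clusters_small:.0f} such clusters")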

This does not include any potential sales Broadcom will get from OpenAI, which wants to wean itself off Microsoft Azure, or from Apple, which is also looking to be less dependent on AI chips made by others, if the rumors that came out this week are right.

In the quarter ended in early November, Broadcom had $14.05 billion in sales, up 51.2 percent year on year. Operating income was $4.63 billion, up 9.1 percent, and thanks to the legendary cost cutting for which Tan is well known, the company has rebounded from the profit hit it took in absorbing server virtualization juggernaut VMware, with net income up 22.7 percent to $4.32 billion, or 30.8 percent of revenues. That net income margin is about the average of the three fiscal years before the VMware integration was completed.

Sales of semiconductor solutions – we call them chips but that doesn’t sound fancy enough for a financial report – were up 12.3 percent year on year to $8.23 billion, and operating income for this part of the company rose by 7.9 percent to $4.61 billion, representing 56 percent of revenues.

Infrastructure software, including the CA and VMware stacks among other things, accounted for $5.82 billion in sales, up by a factor of nearly 3X. And thanks to VMware, operating income for Broadcom’s software rose by 2.8X to $4.19 billion, representing a very healthy 72 percent of revenues.

Tan said on the call that VMware sold licenses covering 21 million cores in the quarter, up from 18.5 million cores in Q3 F2024. The annualized booking value for VMware was $2.7 billion, and we estimate that VMware brought in $3.91 billion in sales and $2.57 billion in operating income. Kirsten Spears, chief financial officer at Broadcom, said that the company would no longer provide VMware revenue and profit figures now that the integration of the company is complete, but did add that 4,500 of the 10,000 largest VMware customers have moved to the VMware Cloud Foundation (VCF) repackaging that Broadcom announced earlier this year.

In the November quarter, the networking division grew by 44.7 percent to $4.45 billion. AI networking, driven by adoption of Tomahawk 4 and 5 and Jericho-3AI switch ASICs, represented 76 percent of total networking revenues, or $3.38 billion, up by 2.6X year on year and up 2.2X sequentially. This was a pretty slow quarter for AI compute, which we calculate was just under $200 million in Q4 F2024; in Q3 F2024, AI compute was $2.18 billion and AI networking was $1.07 billion. That puts total AI chip sales for Q4 at $3.58 billion, up 2.4X year on year and up 10 percent sequentially.

If you extract the AI compute and networking chip revenues from all of the semiconductors sold by Broadcom, you get $4.65 billion in sales for the rest of the chippery in the Broadcom portfolio, which was down 20.7 percent year on year but up 15.7 percent sequentially.
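
As a sketch, here is how we back that non-AI chip number out of the reported Q4 figures; note that the sub-$200 million AI compute figure is our estimate, not a reported one:

    # Backing non-AI chip sales out of the Q4 F2024 semiconductor revenues
    semis_q4 = 8.23e9                 # total semiconductor solutions revenue
    ai_networking_q4 = 0.76 * 4.45e9  # 76 percent of networking, ~$3.38B
    ai_compute_q4 = 0.20e9            # our estimate of AI compute in the quarter

    ai_total_q4 = ai_networking_q4 + ai_compute_q4   # ~$3.58B
    non_ai_q4 = semis_q4 - ai_total_q4               # ~$4.65B
    print(f"AI chips: ${ai_total_q4 / 1e9:.2f}B, non-AI chips: ${non_ai_q4 / 1e9:.2f}B")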

Looking ahead to Q1 F2025, Broadcom expects infrastructure software sales to be $6.5 billion, up 41 percent year on year and up 11 percent sequentially. The company is projecting that chip revenues will grow by 10 percent to $8.1 billion, and that AI chip revenues will grow by 65 percent to $3.8 billion. That leaves non-AI chippery down in the “mid-teens” at $4.3 billion, and overall revenues up 22 percent to $14.6 billion.
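
The guidance pieces hang together, as this quick arithmetic check shows; all of the inputs are Broadcom's guides as given on the call:

    # Q1 F2025 guidance: the pieces should sum to the totals Broadcom gave
    software = 6.5e9       # infrastructure software guide
    ai_chips = 3.8e9       # AI chip revenue guide, up 65 percent
    non_ai_chips = 4.3e9   # implied non-AI chip revenue, down in the mid-teens

    chips = ai_chips + non_ai_chips   # $8.1B chip guide
    total = software + chips          # $14.6B overall revenue guide
    print(f"Chips: ${chips / 1e9:.1f}B, total: ${total / 1e9:.1f}B")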

4 Comments

  1. Wow! Those “rumors that came out this week” sure did a number on Broadcom/AVGO stock, up 20% since yesterday ($180 to $220) on that Apple 3.5D XDSiP news. Good for Broadcom!

  2. Broadcom is not a real AI company. It doesn’t sell training or inference chips or AI software.
    TPM: “we call them chips but that doesn’t sound fancy enough for a financial report”
    Same idea for XPU/switch/NIC chips – we call them “AI” because “custom” doesn’t sound fancy enough for a financial report.
