Broadcom May Become The Biggest Counterbalance To Nvidia

Here’s a funny thought. If Broadcom had not bought legacy software maker Computer Associates in November 2018 for $18.9 billion, it would have had a harder time buying server virtualization juggernaut VMware for $69 billion in November 2023. And without the enormous profits it gets from both of these legacy software providers, it would have been hard pressed to have the patience to build a custom AI XPU business that will soon dominate its vast chip business.

The fact that chief executive officer Hock Tan, who was running Avago Technologies when he took a shine to chip maker Broadcom, came up with the $37 billion it took to do that monster deal in May 2015 is what made the modern Broadcom as we know it today possible.

This has been no mean feat.

Building conglomerates rarely works out the way Wall Street or the top brass at a company hopes. Sure, the bankers do fine collecting their fees for putting things together and then collect more fees a decade later ripping them apart “to unlock shareholder value” when it doesn’t work out. But in a complex company with many product lines, there is always something going wrong even when other things are going right. Every once in a while there is a harmonic convergence where everything is going right, say IBM circa 1989, and everybody thinks it will last forever. But then along comes the cacophonous divergence when everything goes wrong, like IBM circa 1992. There is much weeping and gnashing of teeth, and in the case of IBM, a massive layoff of half of its 400,000-strong global workforce and the largest writeoffs in the history of business.

Well, so far anyway. If this AI bubble bursts, we will see some much bigger writeoffs, but perhaps not so many layoffs. The layoffs come if AI succeeds, not if it just grows instead of maintaining an exponential. So far, this AI bubble is made of Super Elastic Bubble Plastic, and if you are of a certain age you know what I mean.

Anyway, having one or two of the cylinders sticking all the time is why IBM has struggled over its most recent decades, and it is also why Hewlett Packard Enterprise and Dell had woes of their own in the 2000s as they tried to build Baby Blues, mixing datacenter hardware, software, and services.

But thus far, Avago transformed into the modern Broadcom has worked – and worked marvelously.

Tan is legendary for stripping out unnecessary costs and tuning up businesses to kick out profits and has made this hybrid hardware-software company work. And Tan has steered Broadcom brilliantly to make the most of an AI boom where the hyperscalers, cloud builders, and now the AI model builders want to control their own compute engine fates and also want the very best networking they can coax Broadcom into designing.

The AI compute and networking business utterly dominates the Nvidia business, and it is not quite there yet for Broadcom. But if current trends persist in homegrown XPUs for AI workloads, it is not hard to imagine that the combination of Broadcom’s custom ASIC business, its storage and networking chips, and its emerging rackscale systems business will pack a bigger wallop than even AMD does today. AMD may rue the day it spun off its ZT Systems business when Broadcom really starts humming along, and Supermicro, Lenovo, Dell, Foxconn, Quanta Computer, and Inventec may also wish that Tan just stuck with chips and stayed out of racks.

This all sounds weird, I know. But all it takes is for AI inference to become highly tuned at a few companies on custom AI hardware in a mad dash to drive down the cost of chewing on and spitting out tokens for Broadcom (and to a lesser extent, Marvell) to take big bites out of the GPU and networking businesses at Nvidia and AMD. Broadcom clearly wants to cut out the OEMs and ODMs and just do it all, and customers may want one throat to choke for the whole shebang.

With that, let’s get into Broadcom’s Q1 F2026 financial results.

In the quarter ended on February 1, Broadcom posted $19.31 billion in sales, up 29.5 percent, with operating income up 36.8 percent to $8.56 billion and net income rising almost as fast (up 33.5 percent) to $7.35 billion. That net income represents 38.1 percent of revenues, and clearly that legacy software is cushioning some of the blow of building custom AI chips for hyperscalers and model builders – an inherently less profitable business that Tan & Co cannot walk away from.

Broadcom ended the quarter with $14.17 billion in cash, and its cash pile is generally trending upwards. But at the same time, the debts it amassed buying the original Broadcom and VMware are still hanging around, with the company owing $66.1 billion in various debts.

The company’s chip group, called Semiconductor Solutions, had a 54.2 percent revenue pop in Q1 F2026 to $12.52 billion, and AI chippery was the big driver here. (More on that in a moment.) Operating income for Semiconductor Solutions rose by a very nice 60.4 percent to $7.51 billion, which represented 60 percent of revenues. If margins are not particularly high on AI accelerators and probably even lower on the AI rackscale systems that Broadcom is now ramping up for one of its six AI infrastructure customers, then they must be pretty damned good on a whole bunch of other things.

The Infrastructure Software group, which includes the CA mainframe and Unix system software, the Symantec PC and server security stack, and the VMware server virtualization stack, had $6.8 billion in sales, up a mere 1.4 percent year on year. Broadcom has not given out specific VMware figures since Q3 F2024, but this time around said VMware sales were up 13 percent to what we think is around $5.2 billion in our model. Tan said further that VMware had a revenue backlog of $9.2 billion, which is almost triple what it was a year ago.

Operating income for the Infrastructure Software group, thanks to the Tan Squeeze, rose by 4 percent to $5.3 billion. Operating margins for the Broadcom software business have been rising since taking a hit as VMware was absorbed into the conglomerate, but they seem to have topped out at 78 percent of revenue. (That’s not so bad. The operating margin at IBM’s absolutely captive software business, where customers have no alternatives whatsoever except in some cases from Broadcom, is around 88 percent, which is an industry peak that very few companies are ever going to match, much less beat.)

In any event, that Broadcom software business contributed more than its share to the middle line.

Now, let’s talk about Broadcom’s divisions.

Up until five quarters ago, Broadcom gave out financial breakdowns for its various semiconductor divisions. But with AI on the rise, it has stopped being specific about them and prefers now to talk about AI chips (and systems) versus other chips. It gives a few hints here and there, and we have kept our model going, admittedly with some error bars around the data.

We think that the Networking division (which includes AI stuff, including XPUs and rackscale systems) had sales of $7.91 billion, up 74.9 percent. Server Storage Connectivity – all that other datacenter chippery for peripherals – had maybe $1.1 billion in sales, up 13.3 percent, while chips in the Wireless division had maybe just under $2 billion in sales, off a point and a half. In our model, the Broadband division had $1.37 billion in sales, up 2.5X, and yes, that sounds a little high to us, too.

The way that Broadcom wants you to think about its chip business is AI versus non-AI, and here is what that split looks like:

What I think we are seeing in that orange line above is increasing competition in networking from Nvidia and Cisco Systems for both hyperscale Web and analytics infrastructure and for AI systems, too, as well as increasing competition from Astera Labs for PCI-Express switching and from Marvell on a whole slew of peripheral chippery.

Here is the data behind that chart above, just so you have it:

As is our custom, items shown in bold red italics are estimates from The Next Platform.

In our model, AI chip (and system) revenues rose by more than 2X to $8.44 billion in Q1 F2026. About a third of that was driven by AI networking chips and the other two thirds were driven by AI accelerators and rackscale systems. If you do the math, that is $5.65 billion in AI accelerators and rackscale systems, up 2.4X year on year and up 7.4X sequentially compared to a pretty weak Q4 F2025. AI networking drove $2.78 billion, up 60 percent year on year, but down 51.5 percent sequentially.
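Just to keep ourselves honest, here is the back-of-the-envelope arithmetic on that split as a quick sketch. All figures are from our model above, in billions of dollars:

```python
# Sanity check of the Q1 F2026 AI revenue split from our model ($ billion).
ai_total = 8.44      # total AI chip and system revenue
xpu = 5.65           # AI accelerators and rackscale systems
networking = 2.78    # AI networking chips

# The two pieces should add up to the total, within rounding.
assert abs((xpu + networking) - ai_total) < 0.05

# And the split should be roughly one third networking, two thirds XPU.
print(f"XPU share: {xpu / ai_total:.0%}")                # prints "XPU share: 67%"
print(f"Networking share: {networking / ai_total:.0%}")  # prints "Networking share: 33%"
```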

If I had to guess, Broadcom had some big customers do a lot of AI networking chip buying in Q4 F2025 precisely so it would counterbalance low AI XPU sales as its customers transition to new generations of compute engines throughout this year. That’s what I would do if I was a hyperscaler, cloud builder, or model builder. Get the network done so the AI systems can just roll in and plug in.

Broadcom has built relationships with five big AI customers for home-designed XPUs, and now has added a sixth who will be ramping this year. Tan said that demand from Google for its “Ironwood” TPU v7 accelerators would be strong in 2026, and that in 2027 and beyond demand for TPUs from Google would be even stronger. Separate from Google’s own TPU demand, Broadcom is also benefitting from Anthropic’s decision to buy $10 billion in TPU racks, with Google’s permission. The plan is for Anthropic to install 1 gigawatt of TPU v7 capacity in 2026 (we presume Tan is speaking in fiscal years) and that this will more than triple to in excess of 3 gigawatts in 2027 (again, a fiscal year).

Tan added that in contrast to what many rumors say about the MTIA accelerator effort at Meta Platforms, “Meta’s custom accelerator MTIA roadmap is alive and well” and it is shipping now. We presume this refers to the MTIA v2 that was revealed in April 2024. Tan added that for the next generation MTIA v3, which has not been revealed as yet, Meta Platforms will scale production to “multiple gigawatts” of compute capacity.

For the fourth and fifth AI customers, Tan told Wall Street that these customers would have “strong shipments” this year, more than doubling in fiscal 2027. The rumors are that AI custom ASIC customer four is ByteDance and customer five is Apple. Customer six is, of course, OpenAI, which is using Broadcom to shepherd its “Titan” AI XPU through manufacturing and assembly. Tan said that OpenAI would install over 1 gigawatt of Titan capacity in fiscal 2027.

Here is how Tan sees the lay of the land right now:

“Let me take a second to emphasize our collaboration with these six customers to develop AI XPUs is deep, strategic, and multiyear. We bring to the partnerships, each of them, unmatched technology in SerDes, silicon design, process technology, advanced packaging and networking to enable each of these customers to achieve optimal performance for their differentiated LLM workloads. We have the track record to deliver these XPUs in high volumes at an accelerated time to market with very high yields.”

“And beyond technology, we provide multiyear supply agreements as our customers scale-up deployment of their compute infrastructure. Our ability to assure supply in these times of constrained capacity in leading-edge wafers, in high-bandwidth memory and substrates ensures the durability of our partnerships, and we have fully secured capacity of these components for 2026 through 2028.”

Tan said that Broadcom’s visibility into 2027 for these customers had improved so much that he could draw a $100 billion line in the sand as a minimum for overall AI revenues for that fiscal year. (Later in the call, Tan said it would be “significantly in excess of $100 billion” as a target.)

To put that into perspective, back in fiscal 2022, Broadcom’s AI revenues were $1.93 billion, and grew in fiscal 2023 to $3.81 billion. In fiscal 2024, they jumped by more than 3X to $12.74 billion, and hit $20.25 billion in fiscal 2025. So assume around a 2.5X growth factor for AI revenue in fiscal 2026, and a 2X again in fiscal 2027 and you get to $100 billion. And we think it will probably more than double in 2028, which is an even bigger year for AI system deployments for a lot of different reasons – provided all the money does not run out.
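That compounding can be sketched out in a few lines. The 2.5X and 2X growth factors are our assumptions for getting to Tan’s floor, not Broadcom guidance:

```python
# Broadcom AI revenues by fiscal year, $ billion, actuals from the article.
ai_revenue = {2022: 1.93, 2023: 3.81, 2024: 12.74, 2025: 20.25}

# Our assumed growth factors: roughly 2.5X in F2026 and 2X again in F2027.
f2026_est = ai_revenue[2025] * 2.5   # 50.625, call it ~$50.6 billion
f2027_est = f2026_est * 2.0          # 101.25, which clears the $100 billion floor

print(f"F2026: ${f2026_est:.1f}B, F2027: ${f2027_est:.1f}B")
```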

More immediately, looking ahead to Q2 F2026, Broadcom is expecting sales of around $22 billion, up 47 percent year on year. The expectation is for Semiconductor Solutions to see 76 percent growth year on year to $14.8 billion, with AI chips and systems growing by 2.4X again to $10.7 billion. Infrastructure Software revenues will rise by 9 percent to $7.2 billion, which means more profits in the coffers to invest in the AI buildout.