
Nvidia sells the lion’s share of the parallel compute underpinning AI training, and it has a very large – and probably dominant – share of AI inference. But will these hold?
This is a reasonable question as we watch the rise of homegrown XPUs for AI processing by the hyperscalers and cloud builders. They are all in various stages of creating their own Arm-based CPUs and vector/tensor math engines for AI workloads and, maybe someday, for supporting some traditional HPC simulation and modeling workloads.
The hyperscalers and cloud builders may design these CPUs and GPUs, but they get help, and Broadcom and Marvell have a lot more experience, either directly or through acquisitions, and they are the ones who are doing the helping. They have a hand in guiding the designs and providing IP blocks like SerDes or PCI-Express or memory controllers. They also get paid to get the custom chips through the Taiwan Semiconductor Manufacturing Co foundry (and if Intel is lucky, maybe some day one of its fabs using 18A or 14A processes) as well as to implement various dimensioned – 2D, 2.5D, 3D, and 3.5D – packaging.
This work to help bring these custom CPU and XPU compute engines into being has been a boon for Marvell and Broadcom alike. But as you might imagine, selling services and technology to the likes of the hyperscalers and cloud builders is no easy task.
Marvell has lined up Amazon Web Services (Inferentia 2 and Trainium 2, with Trainium 3 in the works), Google (for the Axion Arm CPU), Meta Platforms (for a new DPU), and Microsoft (for a future AI accelerator, presumably the “Athena 2” Maia 200). Broadcom has Google (for its TPU AI accelerators), Meta Platforms (presumably for the MTIA AI accelerator), and ByteDance (also presumably for a custom AI accelerator). Broadcom is rumored to be working with Apple and OpenAI on their own AI accelerators, too.
These are the toughest customers in the IT sector, and they want to pay the least amount of money possible for the highest level of service. These homegrown CPUs and XPUs have to cost a lot less than Intel and AMD CPUs and Nvidia and AMD GPUs to make it all economically worthwhile. Otherwise, why bother?
Both Broadcom and Marvell also have a variety of interconnects and electro-optical components that they sell to the hyperscalers and cloud builders, for both their generic compute networks and their new AI back-end networks.
At the moment, Broadcom’s AI business alone is about the size of all of Marvell’s business, but Marvell is growing its datacenter business fast and it is an alternative in many of the markets that Broadcom plays in. Much of Broadcom’s revenue comes from legacy big iron software and now VMware, but in the chips racket overall, Broadcom is about four times the size of Marvell. Size matters, but so does optionality, and that is why we can expect both companies to have a growing business both selling their own chips and helping others bring theirs to market.
The business is lumpy, as has always been the case with high performance computing, because capacity needs are fulfilled in large chunks rather than a continuous stream to make the economics work better. Buying a big cluster every three or four years gets you a better deal than buying a third or a quarter of a system every year. This has been true of HPC centers from the 1970s until today, and it is true of every AI center.
In the most recent quarter, which was the first of its fiscal 2025 and which ended on February 2, Broadcom posted sales of $14.92 billion, up 24.7 percent from the year ago period. The company brought $5.5 billion to the bottom line, a factor of 4.2X higher than a year ago when it was digesting VMware and restructuring.
Broadcom ended the first fiscal quarter with $9.31 billion in the bank and $66.58 billion in debts. It does not seem to be in a hurry to pay down its debts, but they are down a smidgen from this time last year.
Broadcom is split into two groups – Infrastructure Software and Semiconductor Solutions – and this is the first quarter when we do not have all that much visibility into VMware. On the call, Broadcom chief executive officer Hock Tan said that 70 percent of the top 10,000 VMware customers had been upsold from the vSphere/ESXi server virtualization hypervisor with perpetual licenses to the full VMware Cloud Foundation suite, which virtualizes servers, networks, and storage. Very little was said about VMware revenues, other than that they were growing. We think VMware revenues came in at $4.61 billion, because overall software revenues were up 47 percent year on year and we do not think the Computer Associates conglomerate of mainframe and Unix databases and tools grew all that much. For the math to work, VMware would have to post that much in sales, which is a 2.2X increase. But, as we say, this is an informed guess based on those assumptions. We do this for you in the absence of data.
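For those keeping score at home, the back-out works like this – a minimal sketch in which the VMware figure is our estimate and the legacy software piece is assumed to be roughly flat:

```python
# Back-of-the-envelope model of Broadcom's VMware revenue; figures in
# $ billions. The VMware number is our estimate, not a reported line item.
software_now = 6.71              # Q1 F2025 Infrastructure Software, reported
yoy_growth = 0.467               # reported year-on-year growth
software_prior = software_now / (1 + yoy_growth)   # implied year-ago total

vmware_est = 4.61                # our VMware estimate for the quarter
legacy_now = software_now - vmware_est   # CA mainframe/Unix remainder

print(f"Year-ago software total: ${software_prior:.2f}B")
print(f"Implied legacy software now: ${legacy_now:.2f}B")
```

The point of showing the arithmetic is that the model only holds together if the non-VMware remainder barely moved, which is why we flag it as an informed guess.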
The Infrastructure Software group had $6.71 billion in sales, up 46.7 percent year on year; operating income for the group was just a tad under $5.1 billion, up by 87.7 percent. As we all expected, Tan has cut deep enough and restructured the VMware products enough to pull a lot of profit out of the few that are still in the SKU stack. This is something that prior VMware managers could have done – Pat Gelsinger when he ran VMware from 2012 to 2021 or Michael Dell after he bought VMware in 2016 as part of the EMC deal – but chose not to, making VMware a takeover target as soon as it went public for the second time, when Dell was done owning it and needed to pay down some of its debts from the EMC acquisition.
Tan is extracting 76 percent of software revenues as operating income, which is pretty high even for Broadcom and its legacy and largely captive software businesses.
In Q1 F2025, Broadcom’s Semiconductor Solutions chip business had $8.21 billion in sales, up 11.1 percent and down a tad sequentially. Operating income was $4.68 billion, up 13.7 percent and accounting for 57 percent of revenues. This is about as high as Tan can usually push it, given the enormous investments necessary in its Ethernet merchant silicon business and now its AI compute engine shepherding business.
Rather than give out precise growth figures or share figures for Q1 F2025 for its chip business, as it has done in the past, this time around Broadcom gave Wall Street vague ranges for growth among its five operating divisions in the Semiconductor Solutions group. We have plugged our best interpretations of the tea leaves Broadcom gave us into a model to figure out where each business stands.
As best we can figure, the core Networking group at Broadcom was up 35.2 percent to $4.52 billion, which was only up 1.6 percent sequentially. Server storage connectivity was up 8.5 percent to $962 million in our model, and wireless chippery was up a mere 1 percent year on year to just a tad over $2 billion, which was down 6.2 percent sequentially. Broadband chips brought in $539 million, down 43 percent, but up 16 percent sequentially.
Which brings us to AI chip co-design and manufacturing shepherding revenues and networking revenues relating to AI workloads.
We think that Broadcom had $4.12 billion in AI chip sales, up 77 percent year on year and up 15.2 percent sequentially. This was significantly better than the $3.8 billion guidance that Broadcom gave for the quarter thirteen weeks ago.
All other semiconductor sales accounted for $4.09 billion, down 19.2 percent year on year. You see why Broadcom wants to talk about AI XPUs and AI networks a lot.
Within this, AI compute was the big winner this quarter, but it was pretty slow last quarter as partners took a breather. In Q1 F2025, AI XPU sales were $2.47 billion, up 63.4 percent. AI networking revenues more than doubled year on year to $1.65 billion, but were down 51.3 percent sequentially as far as we can tell.
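Our split math is simple enough to show – a sketch in which both line items are our estimates, not Broadcom disclosures:

```python
# Our model of Broadcom's Q1 F2025 AI revenue split, in $ billions.
ai_xpu = 2.47        # custom AI accelerator (XPU) sales, our estimate
ai_network = 1.65    # AI networking sales, our estimate
ai_total = ai_xpu + ai_network       # matches the $4.12 billion AI total
xpu_share = ai_xpu / ai_total
print(f"Total AI: ${ai_total:.2f}B, XPU share of AI: {xpu_share:.0%}")
```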
Looking ahead to Q2 F2025, Tan said Broadcom expected AI sales of $4.4 billion, which would be an increase of 44 percent compared to Q2 F2024.
As a teaser for future quarters, Tan said Broadcom is in the process (pun intended) of taping out the industry’s first AI XPU based on 2 nanometer processes and using 3.5D packaging, which will push up to 10,000 teraflops per device. (Ah, but at what precision?) Tan added that Broadcom has taped out its “Tomahawk 6” StrataXGS Ethernet switch ASIC, which will have an aggregate bandwidth of over 100 Tb/sec and which has 200 Gb/sec SerDes to drive 1.6 Tb/sec Ethernet ports. First samples of Tomahawk 6 will be shipped to initial customers in the next few months.
To keep track: Broadcom has three hyperscaler and cloud builder customers it is making compute engines for, it has two more in the works (Apple and OpenAI, as noted above) and now we come to find out that two more hyperscalers have tapped Broadcom to make custom AI accelerators for training. (We do not yet know who they might be, but the list ain’t long, is it?)
Looking ahead, Broadcom is expecting revenue to be flat sequentially in Q2 F2025 at $14.9 billion, which is an increase of 19.3 percent year on year. The company said further that it expects Infrastructure Software to post $6.5 billion in sales in Q2, which is 23 percent growth year on year but probably down about $200 million sequentially. The good news is that Semiconductor Solutions will see around a $200 million bump offsetting this decline, rising 16.6 percent year on year to $8.4 billion. We think the custom ASIC business will represent a lot of that sequential growth.
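The guidance arithmetic checks out – a quick sketch using the guided figures in $ billions:

```python
# Broadcom's Q2 F2025 guidance: the software dip and the chip bump cancel.
software_q2 = 6.5    # guided Infrastructure Software sales
chips_q2 = 8.4       # guided Semiconductor Solutions sales
total_q2 = software_q2 + chips_q2
print(f"Guided Q2 F2025 revenue: ${total_q2:.1f}B")  # flat with Q1's $14.9B
```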
If this all stabilizes and the world economy doesn’t get thrown into recession by trade wars and actual wars – wars are always trade wars, but trade wars are not always wars – then it looks like Broadcom will be in a position to start paying down its massive debts with net income roughly equal to 40 percent of revenues. You gotta love sticky legacy software businesses with long time horizons and no easy alternatives.
Finally, that brings us to Marvell. Imagine how terrible the past few quarters would have been if Marvell had not bought the custom silicon business that was formerly part of IBM and was more recently part of GlobalFoundries as well as the aptly named Inphi, which supplies PHY communication transport circuits used in electro-optical interconnect components.
It would have been bad. But, as it turns out, it has been good. In fact, Marvell is making money – meaning net income – for the first time in nine quarters, and really for the first time in five years. And if this is sustainable, Marvell will be able to grow revenues and retain profits like it has not done in a decade and a half.
In the quarter ended on February 1, which was the fourth quarter of Marvell’s fiscal 2025 year, the company booked $1.82 billion in sales, up 27.4 percent and up 19.9 percent sequentially. Operating income was $235 million, a sharp reversal from the $33 million loss a year ago, and net income was $200 million, an even better reversal compared to the $393 million it lost a year ago.
Marvell ended the quarter with $948 million in cash and $3.93 billion in debt. The company’s datacenter business, which includes various kinds of compute engines and controllers as well as custom CPU and XPU ASIC services, had sales of $1.37 billion, up 78.5 percent year on year and up 24 percent sequentially. Enterprise networking, which is mainly the Teralynx switch ASIC line from its Innovium acquisition plus some homegrown Ethernet stuff, had $171 million in sales, down 35.3 percent but rebounding from a three-quarter trough that we think was caused by a shift in focus away from general purpose infrastructure toward AI clusters.
The other Marvell groups are not all that interesting to us, and frankly Datacenter plus Enterprise Networking comprised 84.6 percent of the company’s revenues. This is, in effect, the Marvell business.
For the full fiscal 2025 year, the Datacenter group had $4.16 billion in sales, up 87.9 percent, and Enterprise Networking group had sales of $626 million, down 49 percent.
What everyone wants to know is how AI compute engines and AI electro-optics (Inphi stuff again) are doing. And we listened to the same call that Wall Street did and took a crack at building a model of AI sales at Marvell. Marvell provided guidance earlier in the fiscal 2025 year that its AI revenues would be at least $1.5 billion for the year, and expected sales to break through $2.6 billion in fiscal 2026.
As Matt Murphy, Marvell’s chief executive officer, put it on the call going over the latest financials with Wall Street, the company “blew by” its F2025 estimate for AI sales, and expects to do better than its original forecast for F2026, too. But it didn’t say by how much. After looking at other models and talking with some peer model builders, we reckon Marvell did about $1.85 billion in AI revenue in F2025 and that it will do over $3 billion in F2026 – perhaps even higher than $3.5 billion if this custom AI XPU and AI networking thing takes off and there is heavy demand for electro-optics.
As best we can figure, Marvell did $272 million in custom AI XPU revenue in Q4, up by a factor of 10.7X compared to a year ago, and electro-optics for AI rose by 3X to $580 million. Add it up, and AI revenues at Marvell were $852 million in Q4 F2025, up 3.9X year on year. AI comprised 62.4 percent of Datacenter group sales, and Datacenter comprised more than three-quarters of overall revenues. Which is another way of saying that Marvell is now a datacenter player like it has always wanted to be.
In our model for the full F2025 year, Marvell raked in $711 million in AI XPUs, up by nearly 9X compared to F2024. Electro-optics brought in the other $1.14 billion that gets the company to $1.85 billion in AI sales for the year.
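Putting our Q4 and full-year Marvell estimates in one place – a sketch in which every AI line item is our model, since Marvell only discloses that it beat its $1.5 billion target:

```python
# Our model of Marvell's AI revenue, in $ millions; all AI line items are
# our estimates, not Marvell disclosures.
q4_xpu = 272                 # Q4 F2025 custom AI XPU revenue
q4_optics = 580              # Q4 F2025 AI electro-optics revenue
q4_ai = q4_xpu + q4_optics   # the $852 million Q4 AI figure

fy_xpu = 711                 # full-year F2025 AI XPU revenue
fy_optics = 1140             # full-year F2025 AI electro-optics revenue
fy_ai = fy_xpu + fy_optics   # roughly $1.85 billion for the year
print(f"Q4 AI: ${q4_ai}M, F2025 AI: ${fy_ai}M")
```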
We think that Marvell can do at least $1.15 billion in AI XPU revenue in F2026, with electro-optics bringing in another $1.87 billion. We shall see. A lot depends on Marvell finding new hyperscaler and cloud builder customers, or helping to convince tier two service providers and government labs to collaborate on a shared AI XPU design that gives them benefits over just buying Nvidia GPUs. Admittedly, for these companies, buying Nvidia GPUs is the safest bet, and it is not like GPUs are not holding a lot of their value over time in the open market. A GPU is perhaps a better investment than a lot of stocks and bonds are right now. Which is why we call them Hoprcoin.