Marvell’s Custom XPU Pipeline Is A Declaration Of AI Independence

There is no question that one of the smartest things that chip designer, packager, and manufacturing process manager Marvell Technology did was to shell out $650 million in May 2019 to buy Avera Semiconductor. In the long run, that acquisition may prove more important than Marvell’s $10 billion acquisition in October 2020 of Inphi, which makes amplifiers, digital signal processors, and other kinds of optical circuitry.

Inphi was an immediate boon for Marvell’s financials and also gave it access to the hyperscalers and cloud builders of the world in a way that its ThunderX line of Arm server processors and TeraLynx line of Ethernet switch ASICs (which came through the acquisitions of Cavium Networks and Innovium) might have but did not. That’s because the hyperscalers and cloud builders were tired of being beholden to Intel, AMD, and Nvidia (and maybe someday soon Broadcom) for their compute engines. We think they might be feeling the same about network engines.

But the Marvell story didn’t end when the hyperscalers and cloud builders started designing their own XPUs. It just changed. The embiggened and emboldened Marvell that designed so many chips in its long history now takes that expertise and mixes it with the Avera design house, which has given Marvell’s business a real boost with at least a few of the hyperscalers and cloud builders – and from the looks of things, a bunch more are signing up to get help bringing their own custom XPUs and switchery to life.

Avera is the combination of the chip design teams from IBM Microelectronics and GlobalFoundries, which itself includes many of the chip foundry and process and packaging experts from AMD and Chartered Semiconductor. IBM’s custom chip business is the cornerstone of Avera, and that business is the brainchild of none other than Lisa Su, AMD’s chief executive officer for more than a decade, who spearheaded the development of the PowerXCell, or simply Cell, massively vectored processor that was eventually at the heart of Sony and Microsoft game consoles until AMD stole that business away after she left Big Blue.

Back in September 2020, in Betting On Mass Customization In A Post Moore’s Law World, we did a deep profile of the custom chip design and shepherding business that Marvell was putting together, and we are not going to repeat all of that here. At the time, the only relevant business that Marvell was doing in custom XPUs was to help Groq bring the Tensor Streaming Processor 100 to market. The company was also trying to push custom ThunderX3 Arm processors to the hyperscalers as an alternative to the Graviton Arm processor from Amazon Web Services, which we think only convinced Microsoft, Google, Alibaba, Tencent, and Baidu all the more to design their own chips.

Suffice it to say, the custom XPU story is playing out exactly as Marvell had hoped that it might, and now it appears to be on the cusp of hockey sticking. In our analysis of Marvell’s financial results for the first quarter of fiscal 2026 (the quarter ended in early May), we pointed out that Marvell has been shepherding the AWS Trainium 2 through production, helping to ramp the Trainium 3, and assisting in the development of the future Trainium 4. Marvell is also widely believed to be shepherding Microsoft’s Maia 100 AI XPU through its ramp and to be working on the follow-on Maia 200 AI chip with the Azure cloud folks.

This appears to be just the beginning, and despite the intense competition from Broadcom’s custom XPU and networking business and the eagerness of Alchip to steal away its custom XPU customers, Marvell’s top brass have been nothing but ebullient about this business – and therefore, most of Marvell’s revenue stream – for the past couple of months. The pitch went up a whole note as Marvell reported its financial results for fiscal 2026’s second quarter, ended in early August. The pipeline for custom XPUs has widened and deepened, and it now includes a slew of customers who are interested in integrating NVLink Fusion IP blocks into their designs, either so their custom CPUs can share memory coherently with Nvidia’s GPU accelerators or so their custom XPU AI accelerators can link to Nvidia’s own “Grace” and “Vera” Arm server CPUs.

Note: Officially, there is no way to just get NVLink Fusion ports and buy NVSwitch infrastructure to link custom CPUs and XPUs together, but for enough money, we bet Nvidia will let it happen. That may be so much money that it is cheaper to buy Nvidia technology and say to hell with it all for custom chips. But an antitrust lawsuit against Nvidia – which is not outside the realm of possibility and would parallel those that compelled changes in IBM’s behavior in the mainframe era – could make that third option available by consent decree if such a suit, should it materialize, were settled.

Stranger things have happened. It all comes down to the relevant market used to define the monopoly, how many aggrieved parties there are, and how much price control Nvidia is exerting. With 90 percent market share in AI compute and probably 75 percent control of networking (if you include NVSwitch as well as InfiniBand and Ethernet), we are already in monopoly territory if the relevant market is restricted to AI processing. And there is absolutely no question at all that Nvidia is able to exert tremendous control over XPU compute and networking with regard to AI platforms. Its datacenter business has around 75 percent gross margins, probably around 65 percent to 70 percent operating margins, and on the order of 55 percent to 60 percent net margins. It is hard to do better than this in any business – perhaps selling water to people living in a desert might be slightly more profitable.

We say all of this as a backdrop to the custom XPU and XPU attach business at Marvell. There are reasons that it is hockey sticking, and there are reasons why Marvell, Astera Labs, Alchip, and MediaTek are able to package up NVLink Fusion offerings for the hyperscalers and cloud builders making their own XPUs. And that antitrust threat is doing exactly what it should do: Compelling Nvidia to be open before a lawsuit even gets rolling. This in turn is helping bolster the custom XPU and XPU attach businesses at Marvell. Now, hyperscalers and cloud builders can pay a little dough to Nvidia for NVLink Fusion interconnects and choose whatever XPU they want to attach to their homegrown CPUs – and those XPUs can even be Nvidia GPUs, not just their own designs. Or, they can choose Grace or Vera CPUs for their fleets and attach any XPU to them. (We have not heard much about this option.)

Speaking on a call with Wall Street analysts, Matt Murphy, Marvell’s chief executive officer, reminded everyone of the quick take from its custom XPU event in late June.

First, the company now has 18 different sockets under development in its custom chip business (with an unspecified number of customers) that span whole XPU designs as well as XPU interconnect integration (including its own networking as well as NVLink Fusion and possibly chips from third parties like Astera Labs, which is working on its own PCI-Express and UALink interconnect chips). Murphy added that Marvell has a pipeline of over 50 such opportunities, which have the potential to drive $75 billion in revenues over the lifetime of the contracts. (We would love to get some details on how that number is arrived at. Is it the potential revenue to Marvell, or the street value of the accelerators that might include Marvell IP and shepherding?)

Second, given all of this excitement for custom XPUs, Marvell now thinks it can grow its datacenter business faster than it was thinking even a year ago. Marvell reckons it had 13 percent of the $33 billion total addressable market for its products in calendar 2024, and last year it thought that TAM might grow to maybe $75 billion in calendar 2028. Now, the company is boosting that 2028 TAM by 26 percent to $94 billion, and thinks it can bring down about 20 percent of it in calendar 2028. That is $18.8 billion in calendar 2028. In calendar 2024, Marvell had somewhere around $4 billion in datacenter revenues, so this represents a factor of about 4.7X growth over five years, which works out to a compound annual growth rate of 36.3 percent.
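For those who want to check our arithmetic, here is a back-of-envelope sketch of that TAM math. The 2024 revenue base and the inclusive five-year span are our assumptions, since Marvell gives only rounded figures:

```python
# Back-of-envelope check of Marvell's datacenter TAM math.
# All dollar figures in billions; the 2024 revenue base is our estimate.
tam_2024 = 33.0      # calendar 2024 TAM Marvell says it plays in
share_2024 = 0.13    # Marvell's claimed share of that TAM
tam_2028 = 94.0      # raised calendar 2028 TAM
share_2028 = 0.20    # share Marvell thinks it can capture

rev_2024 = tam_2024 * share_2024     # ~ $4.29 billion implied
rev_2028 = tam_2028 * share_2028     # $18.8 billion

growth_factor = rev_2028 / rev_2024  # ~ 4.4X on the implied base
# Using the rounder ~$4 billion 2024 base and five years inclusive:
cagr = (18.8 / 4.0) ** (1 / 5) - 1   # ~ 36.3 percent
print(f"{growth_factor:.2f}X implied, CAGR {cagr:.1%}")
```

Depending on whether you start from the implied $4.29 billion base or the rounder $4 billion figure, the multiple lands between roughly 4.4X and 4.7X, and the 36.3 percent compound annual growth rate falls out of the latter.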

To put that into perspective, in the trailing twelve months, Intel had $16.56 billion in datacenter revenues, AMD had $14.32 billion, and Nvidia had $146.56 billion.

We think that if all of these custom XPU deals get done (with the help of Marvell, Broadcom, and A1chip), this is a very good leading indicator of how the hyperscalers and cloud builders want to decrease their dependence on Nvidia. Frankly, they are the only companies on Earth, other than maybe a few of the biggest model builders, who can afford to play this game. And they most definitely need for AI training and inference costs to come down if GenAI is ever going to drive a sustainable business.

With that out of the way, let’s talk about the recent quarter for Marvell, which lays another stone on this custom XPU foundation.

In the quarter, Marvell raked in $2.01 billion, up 57.6 percent year on year and up 5.8 percent sequentially. Operating income was $290.1 million, a nice shift from the $100.4 million operating loss a year ago. Net income, which is how we keep score around here at The Next Platform (along with revenue of course), was $194.8 million, again a nice flip from the $193.3 million loss in the year ago period.

Marvell increased its cash hoard by 51.4 percent compared to this time last year, to $1.22 billion, and its debt was $4.47 billion, up 11.8 percent. In the third quarter it will book a $1.8 billion gain from the sale of its automotive Ethernet networking business to Infineon, which will give Marvell even more financial maneuvering room.

The company’s Datacenter group drove most of the sales in fiscal Q2, as it has for the prior five quarters as Inphi and custom XPU products and services have increased with the GenAI boom. Datacenter sales were $1.49 billion, up 69.2 percent year on year but only 3.5 percent sequentially.

Marvell’s Enterprise Networking group, which includes the Innovium, Prestera, and XPliant Ethernet products, saw revenues rise by 28.2 percent to $193.6 million, and significantly, was up 9.1 percent sequentially from Q1 F2026. The adjacent carrier infrastructure business, which is still recovering, was up 71.4 percent to $130.1 million.

Murphy said that AI and cloud – what we call hyperscalers and cloud builders – drove 90 percent of revenues in fiscal Q2. This is not a business focused on peddling chips to the ODMs and OEMs, but one that goes directly to the source of most of the money being spent in IT these days. He added that custom XPU and electro-optics products will account for around 75 percent of Datacenter group sales.

Our model suggests that Marvell had just under $300 million in custom XPU chip sales for AI in fiscal Q2, more than double what it was a year ago, and that electro-optics for AI drove $709 million in sales, a factor of 4.5X higher year on year.
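As a sanity check on our model, the two AI line items can be set against Murphy’s statement that custom XPU and electro-optics products will be around 75 percent of Datacenter group sales. The AI-only splits below are our estimates, not Marvell’s disclosures:

```python
# Sanity check of our model against Murphy's ~75 percent figure.
# The AI-only splits are our estimates, not Marvell's disclosures.
dc_revenue = 1.49e9          # fiscal Q2 Datacenter group sales
custom_xpu_ai = 0.30e9       # our estimate: just under $300 million
electro_optics_ai = 0.709e9  # our estimate: AI-driven electro-optics

ai_share = (custom_xpu_ai + electro_optics_ai) / dc_revenue
print(f"AI-driven share of Datacenter sales: {ai_share:.0%}")
```

That works out to roughly 68 percent; the gap to Murphy’s roughly 75 percent would be non-AI electro-optics revenue, which his figure also folds in.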

In the quarter, Datacenter and Enterprise Networking together represented 83.9 percent of revenues, and going forward, Marvell intends to combine these businesses into a single group called the Data Center Group that will be run by Sandeep Bharathi, who has been running the company’s engineering efforts as its chief development officer. Bharathi joined Marvell in 2019 and was responsible for bringing up 5 nanometer processes from Taiwan Semiconductor Manufacturing Co for the company’s chips, as well as for integrating the now-crucial Avera business into Marvell and driving the custom XPU business that will take the company to the next level.

Murphy warned that custom XPU sales would be lumpy – which is ever the way in the high performance computing and hyperscale and cloud businesses – and would be flat going into the third quarter. Interconnect electro-optics revenues would more than fill in the gap, and custom XPU revenues would pick up in fiscal Q4. To be specific, Murphy said that the Datacenter group would have year-on-year revenue growth “in the mid-30s range” in Q3.

This is not the 50 percent growth that Nvidia has started forecasting, but it is not too shabby, either.

