
Riding The Choppy AI Datacenter Waves With Supermicro


There are somewhere on the order of 50,000 reasonably large companies, academic centers, and governments in the world that comprise the enterprise IT market, and that market is like the ocean on a calm day in the doldrums. While there may be lots of little whirlpools and a bunch of choppiness, it is all below the surface and is averaged out.

This has never been the case in high performance computing, where the system designs push the limits and there are relatively few customers. They try to get the latest compute, networking, and storage, and that means their individual orders, which are big by comparison to the typical enterprise, tend to bunch up, leading to much bigger up and down spending waves. Even during the GenAI boom, there are cycles.

As Supermicro has moved more aggressively into supplying systems for hyperscalers, cloud builders, and model builders, its revenues have grown larger, but they have also grown choppier and more volatile. And even though absolute profits have grown, as is the case with all AI system suppliers, profit margins are under pressure because, frankly, there is just not as much engineering for the OEMs and ODMs to do – and therefore not as much margin to earn – in the rackscale, liquid-cooled machinery from Nvidia that makes up the vast majority of systems being bought.

Which is why Supermicro is scaling its business out from supplying individual servers in familiar rack-mounted form factors (1U, 2U, and 4U machines as well as dense-packed blade servers) to rackscale nodes that have coherent CPU and GPU memory spanning a single rack – and now, with the AMD and OCP “Helios” double-wides, two interconnected racks – and further out to all of the gear wrapped around those racks that comprises a datacenter. The effort, called Data Center Building Block Solutions, is a natural evolution and expansion for the company.

Supermicro has a massive datacenter at its Fremont, California factory that has now scaled up to tens of megawatts – a figure that used to sound large before the GenAI revolution and the compute-dense, power-hungry GPU systems that run AI training and now inference. This datacenter is part of the factory where real AI clusters are assembled and burned in within a much more hostile environment than a typical AI datacenter, to try to shake out bad iron before it ships. Across its factories in California, Taiwan, Holland, and Malaysia, Supermicro has a total of 52 megawatts of datacenter capacity just to burn in iron, and it can now build 6,000 racks of AI gear per month, 3,000 of them using direct liquid cooling.

So Supermicro has plenty of experience in standing up iron in a datacenter as part of its assembly line.

But the DCBBS is more than just having datacenter experience, or even helping neoclouds and model builders design datacenters and the systems within them. It is about supplying this expertise along with in-rack and in-row direct liquid cooling systems; dry and water cooling towers; power distribution, generation, transformer, and battery backup systems; and the software that controls all of this auxiliary gear and hooks into the systems, networks, and storage housed in the datacenter. The idea is to reduce the capex and the opex for a given amount of desired AI compute, with a support package for the entire datacenter – soup to nuts, as they say.

People have been saying the datacenter is the server for a while now, and Supermicro agrees and it is going to sell the whole shebang, just shy of the shell walls and the concrete floor. For now, at least. But don’t be surprised when Charles Liang, Supermicro’s founder and chief executive officer, puts on a hard hat and rolls in with the cranes, bulldozers, and concrete mixer truck. . . .

This is about the only way we can think of to capture more of the AI wallet short of trying to take on Nvidia and AMD in CPUs and GPUs and Broadcom, Nvidia, and Cisco in networking. Which Supermicro cannot afford to do, clearly. This DCBBS effort will also have the effect of smoothing out Supermicro’s revenues on both ends – at the beginning, when datacenters are designed and built with the power and cooling systems, and at the end, when maintenance and tech support revenue streams start for the datacenter and the gear inside of it after it is installed and running.

It will take a few years for this DCBBS effort to bear fruit, but looking back more than a decade, you might have thought it was unlikely that Supermicro would emerge as one of the dominant system suppliers in the world. Liang & Co are tenacious, and not afraid to work for low margins.

The margins certainly did get lower in the first quarter of fiscal 2026, which ended in September. In the quarter, Supermicro was expecting sales to be between $6 billion and $7 billion, which would have been flat to up 18 percent year on year. Instead, sales fell by 15.5 percent to $5.02 billion, operating income swooned by 64.2 percent to $182.3 million, and net income went down 60.3 percent in sympathy, to $168.3 million.

Net income as a share of revenue was 3.4 percent, half the level of a year ago and significantly lower than the average levels Supermicro has posted since it started shipping large numbers of machines to hyperscalers, cloud builders, and, more recently, neoclouds and model builders.

The bulk of Supermicro’s sales in the quarter were for AI systems loaded up with GPUs from Nvidia, with a smattering from AMD. But sales here have been trending down slightly in a choppy way after a meteoric rise over the past three years.

Sales of other kinds of systems (mostly CPU machines, but also storage arrays and switches) are trending upwards slightly over the past several years, but took a downturn along with AI systems in the September quarter, which was unfortunate for Supermicro.

The culprit is the latest rackscale GPU machines, presumably from Nvidia but maybe from AMD, too, which Liang said on the call were more difficult to build and test and took more time to source components for. That, in addition to some customers needing to get their datacenters and power ready, caused $1.5 billion in revenue to shift out of the September quarter and into the December quarter. (So clearly, Supermicro was going to try to beat its own guidance, and pretty handily.)

Supermicro said it had a backlog of more than $13 billion on the books for GB300 NVL72 rackscale systems based on Nvidia’s “Blackwell” B300 GPUs. This backlog includes the largest deal that Supermicro has done in its history, and of course we want to know who the customer is and how much iron they are getting for how much money.

We do the following chart for sentimental reasons:

We clearly remember the Supermicro that allowed all of us to be our own personal Gateway Computer in its early years. Supermicro still has to build motherboards and other subsystems as well as enclosures for its system customers, and to this day, if you want to DIY your own iron or you think you can live on lower margins than Supermicro’s system business, you can still buy these components and be a server manufacturer in your own right.

This subsystems business has been steady, more or less, since the Great Recession in 2009. At that time, subsystems generated about twice as much revenue as completed systems did for Supermicro. (Hard to believe, isn’t it?)

These channel categories are weird, but here they are:

It has never been clear to us why enterprise and channel customers were mixed with the AI model makers, or why OEM appliance sales were mixed with sales of systems and components to large datacenter operators. The 5G, telco, edge, and IoT group made sense, given the form factors and use cases for these customers.

It is fairly clear that whatever went on in the September quarter, it happened in the United States:

Sales in Asia were up 2.4X to $2.31 billion (hopefully not to Chinese companies, or Supermicro runs the risk of invoking the ire of the Trump Administration), but sales in the United States dove by 56.2 percent to $1.86 billion. Even with the additional $1.5 billion in AI system sales that slipped out of the quarter in the United States, the American market would have still been down around 21 percent for Supermicro. All of the markets are down sequentially, which is concerning and which points to supply chain issues, we think.
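That "down around 21 percent" counterfactual is easy to check yourself. A quick back-of-the-envelope sketch, using only the figures reported above – note that the year-ago US revenue is implied by the reported 56.2 percent decline, not stated directly, so treat it as an estimate:

```python
# Sanity check of the counterfactual US decline, using the article's figures.
us_q1_fy26 = 1.86   # US sales in the September quarter, $ billions
decline = 0.562     # reported year-on-year decline in US sales

# Back-compute the implied year-ago US revenue from the decline.
us_q1_fy25 = us_q1_fy26 / (1 - decline)

# Add back the $1.5 billion that shifted into the December quarter.
with_shifted = us_q1_fy26 + 1.5
counterfactual_decline = 1 - with_shifted / us_q1_fy25

print(f"Implied year-ago US sales: ${us_q1_fy25:.2f} billion")
print(f"Counterfactual decline: {counterfactual_decline:.1%}")
```

This works out to implied year-ago US sales of roughly $4.25 billion and a counterfactual decline of about 21 percent, consistent with the figure above.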

That brings us to the forecast for the second quarter and fiscal year in 2026.

Supermicro expects sales to be between $10 billion and $11 billion in the second quarter, which would make it the largest quarter in the company’s 32-year history and significantly larger than its $5.94 billion peak in Q1 2025. This revenue range includes “a new megascale GB300 optimized rack platform,” which is going to hurt gross margins to the tune of three points. For the full fiscal 2026 year, Supermicro is raising its guidance from the prior $33 billion to $36 billion. Three quarters ago, it was looking like it would be $38 billion. But that is just the way the high performance computing racket goes. . . . Up and down and sometimes sideways.