It’s still Ketchup Week here at The Next Platform, and we are going to be circling back to look at the financials of a number of bellwether datacenter companies that we could not get to during a series of medical crises – including but not limited to our family catching COVID when we took a week of vacation at a lake in Michigan.
Today, we are going to take a hard look at the numbers that came out of server and storage maker Supermicro, which is particularly important as we await Nvidia’s financials in two weeks. Nvidia would have reported last week, a week after Supermicro did, but has shifted out a few weeks since it takes a little more time to count the datacenter GPU and networking money and to very carefully analyze and discuss the numbers and trends underlying Nvidia’s tremendous success.
In the June quarter, which is the final one of Supermicro’s fiscal 2024 year, the company booked $5.31 billion in sales, up 2.4X compared to the year ago quarter and up 37.9 percent sequentially. This was dead center of the range that Supermicro gave Wall Street last quarter.
Like Nvidia – and in a sense, because of the pressing need for AI servers by hyperscalers, cloud builders, and other service providers thanks to the GenAI boom that Nvidia helped create and is at the forefront of – Supermicro has seen incredible growth in the past two years. You just do not see revenue plot lines like the ones above very often in your life, but we have seen them with Nvidia, Supermicro, and AMD.
Unlike Nvidia, AMD and Supermicro have had challenges turning this explosive growth into profits, but Supermicro is starting to master this task even as it has to invest more and more – first in GPU-accelerated servers, which are difficult to engineer, and now in direct liquid cooling for servers, racks, and datacenters, including everything from cold plates all the way up to chillers on the roof.
In the fourth quarter, Supermicro said, in fact, that gross margins were adversely impacted by customer mix and product mix – meaning it was heavy on the hyperscalers and cloud builders, who use their own GPU and CPU allocations and who pay the least per server unit because of their volumes – and because of initial production costs for a new generation of direct liquid cooling tech.
Despite that, Supermicro posted net income of $353 million against that $5.31 billion in sales, which was up 82.2 percent and which represented 6.6 percent of those revenues. Net income as a share of revenue averaged 8.7 percent over the trailing twelve months, considerably better than in the years before the GenAI boom hit and before companies went looking for assembly outside of China (and in some cases Taiwan) because of the supply chain woes they experienced during the coronavirus pandemic.
And there was Supermicro co-founder Charles Liang, waiting for that moment to come along with his factories in California, the Netherlands, Malaysia, and of course Taiwan. It was a moment that Supermicro had been anticipating for a long, long time.
Supermicro started out as a subsystem supplier, making motherboards and daughter cards and enclosures for tier two and tier three system makers, and gradually evolved into a hybrid parts supplier and original equipment manufacturer (OEM) akin to the likes of Dell, Hewlett Packard Enterprise, and Lenovo. In its current incarnation, it looks more like an original design manufacturer (ODM) akin to Foxconn, Quanta Computer, Inventec, or ZT Systems – the latter of which was just acquired by AMD – albeit an ODM with side gigs selling parts to OEMs and selling systems to customers as an OEM does.
In the quarter, Supermicro had $272 million in subsystems revenues, which was up 77.8 percent year on year. The core systems business was up 2.48X to $5.04 billion.
We believe each quarter is its own animal, but for companies that sell gear to a relatively small number of sometimes capricious customers – hyperscalers, cloud builders, and HPC centers – you really need to look at the numbers on an annualized basis, too.
For fiscal 2024, Supermicro booked $14.93 billion in sales, up 2.1X compared to the prior fiscal year, and operating income rose by 66.4 percent to $1.27 billion. Net income, which is what the tax man cares about, rose by 88.8 percent to $1.21 billion. Net income grew at a slower pace than revenue, but don’t get the wrong impression.
This year has been a tough one as Supermicro ramps up its rackscale system business and adds liquid cooling to machines and facilities. Liang said that about $800 million in revenues was pushed out of the quarter because of shortages of liquid cooling equipment. He also said that last quarter, Supermicro could ship around 1,000 racks per month of liquid-cooled gear, and this quarter it is up to 1,500 racks per month; by year’s end, it will be at 3,000 racks per month. This is part of what is driving the Supermicro business.
But back to financials. A longer view than four quarters provides perspective on Supermicro’s profitability. Supermicro was stuck in the mud a bit back in fiscal 2018, years before the coronavirus pandemic and GenAI came onto the scene. The company had $3.36 billion in sales, but it only posted $46 million in net income, which was an anemic 1.4 percent of sales. And over the next three years, revenues stayed in the same ballpark but net income ranged between 2.1 percent and 3.1 percent of revenues. Compared to fiscal 2018, fiscal 2024 had revenues that were 4.4X greater, but operating income grew by 13.3X and net income grew by 26.2X.
Things are clearly getting better as Supermicro gets bigger. This is not always the case, and was not the case, in fact, the first couple of times Supermicro had growth spurts to break $1 billion, $2 billion, and $3 billion in sales.
In the fourth quarter, Supermicro said that over 70 percent of its revenues were for AI and rackscale systems, which works out to around $3.8 billion, up by 3.3X compared to a year ago. If you take out subsystems sales and these AI/rackscale sales, that means the core OEM-ish part of the business was around $1.27 billion, up 41.5 percent.
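For those who like to see the arithmetic, here is a minimal sketch of that carve-up in Python. The 71 percent AI/rackscale share is our own assumption – Supermicro only said “over 70 percent” – so these are back-of-envelope figures rather than reported segment numbers:

```python
# Back-of-envelope reconstruction of the Q4 revenue carve-up described above.
q4_total_revenue   = 5.31e9   # total June quarter sales
subsystems_revenue = 0.272e9  # parts and subsystems sales
ai_rackscale_share = 0.71     # assumed share; Supermicro only said "over 70 percent"

ai_rackscale_revenue = q4_total_revenue * ai_rackscale_share
core_oem_revenue = q4_total_revenue - subsystems_revenue - ai_rackscale_revenue

print(f"AI/rackscale revenue: ${ai_rackscale_revenue / 1e9:.2f} billion")  # ~$3.8 billion
print(f"Core OEM-ish revenue: ${core_oem_revenue / 1e9:.2f} billion")      # ~$1.27 billion
```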
Here is how Supermicro breaks down its sales by customer type in its most recent carving up of the numbers:
The 5G, telecom, edge, and IoT segment is still tiny, but posted $106 million in revenues in Q4.
We are honestly not sure how useful the “Organic and AI/ML” and “OEM Appliance and Large Datacenter” customer breakdowns are. But what we can tell you is that “Organic and AI/ML” accounted for $1.81 billion in Q4, up 85.7 percent, and “OEM Appliance and Large Datacenter” accounted for $3.4 billion in Q4, up 2.9X year on year.
Looking ahead to fiscal 2025, Supermicro expects revenues to be in the range of $6 billion to $7 billion in the first quarter ending in September. And for the full fiscal year, Liang & Co are expecting revenues of between $26 billion and $30 billion, which at the high end of that forecast range is roughly double what the company did in fiscal 2024.
These numbers assume some wiggle in the delivery schedules for Nvidia GPUs and system boards, and Liang said as much on the call, confirming that there were some delays in the deliveries of the GB200 hybrid of the “Grace” CPU and the “Blackwell” GPU. But Liang quickly added that Supermicro did not have any trouble providing customers with liquid-cooled systems based on the “Hopper” H200 GPU, which has its HBM memory boosted to 141 GB.
As we have said before, HBM capacity and the bandwidth to balance it is probably more important than adding lots more flops at this point – so long as you have enough flops, of course. An H200 with 141 GB is a better deal than an H100 with 80 GB because you will need that many fewer GPUs (proportional to the memory capacity and bandwidth) to do any given AI training run. And an H100 you can get your hands on is far better than a B100 you can’t get easily or a B200 that will be almost impossible to get, even though those Blackwells each have 192 GB of capacity. If Nvidia can make lots of H100s, it will sell lots of H100s, which means Supermicro can build lots of AI servers – and perhaps twice as many as it would be able to if Blackwells were ramping and shipping in volume sooner rather than later.
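To make that proportionality concrete, here is a minimal sketch – with a purely hypothetical memory footprint, not a figure from Nvidia or Supermicro – showing how the GPU count needed to hold a fixed amount of model state shrinks as per-GPU HBM capacity grows:

```python
import math

# Hypothetical example: assume a training job needs 11,280 GB of aggregate HBM
# to hold its weights, optimizer state, and activations. The GPU count required
# scales inversely with the HBM capacity of each device.
required_hbm_gb = 11_280  # assumed workload footprint, purely illustrative

hbm_per_gpu_gb = {"H100": 80, "H200": 141, "B100/B200": 192}

for gpu, capacity_gb in hbm_per_gpu_gb.items():
    gpus_needed = math.ceil(required_hbm_gb / capacity_gb)
    print(f"{gpu}: {gpus_needed} GPUs")
# H100: 141 GPUs, H200: 80 GPUs, B100/B200: 59 GPUs for the same footprint
```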
A Blackwell delay, in a sense, is good for both Nvidia and Supermicro. So long as AMD can’t make millions of its MI300X GPUs, which as far as we know it cannot.
It’s simple math. AMD says it will sell $4.5 billion in datacenter GPUs this year. If you assume a list price of $20,000, that is 225,000 units, and if you assume a list price of $30,000, that is an even smaller 150,000 units. That was a lot of GPUs in the pre-GenAI world, but it is somewhere between six and nine clusters in the GenAI world. That’s it. Nvidia will sell an order of magnitude more H100, H200, B100, and B200 devices this year, no matter what the delays are in the Blackwell designs, and that is because it has access to CoWoS interposer capacity and HBM capacity that AMD does not.
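Spelled out as a quick calculation – using the $20,000 and $30,000 list prices from the paragraph above and an assumed cluster size of roughly 25,000 GPUs to match the six-to-nine-clusters framing – the math looks like this:

```python
# Back-of-envelope math on AMD's 2024 datacenter GPU forecast.
amd_datacenter_gpu_revenue = 4.5e9      # AMD's stated target for the year

for list_price in (20_000, 30_000):     # assumed average selling prices
    units = amd_datacenter_gpu_revenue / list_price
    clusters = units / 25_000           # assumed GPUs per GenAI-scale cluster
    print(f"${list_price:,} per GPU -> {units:,.0f} units, about {clusters:.0f} clusters")

# $20,000 per GPU -> 225,000 units, about 9 clusters
# $30,000 per GPU -> 150,000 units, about 6 clusters
```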
No matter what, Supermicro is planning on benefitting, as you can see from its ebullient forecast.