
Nvidia co-founder and chief executive officer Jensen Huang did not do his OEM and ODM partners – the company’s main route for bringing the infrastructure underpinning GPU systems to market – any favors when he suggested that its “Hopper” GPU platforms would be blown away by their “Blackwell” kickers.
But Huang spoke a truth that no one ever utters during product transitions, one that salespeople never want their boss, or their partner’s boss, to bring up when they are selling what is on the truck, knowing full well that the loading docks will soon offer something more capacious and very likely cheaper on a per unit of capacity basis, too.
And let’s quote Huang in full so we get the context before we dive into an analysis of Supermicro’s third quarter of fiscal 2025:
“The goal is to build these next generation computers for next generation workloads. And so here’s an example of a reasoning model. And in a reasoning model, Blackwell was 40 times – 40 times – the performance of Hopper. Straight up. Pretty amazing.”
“But I said before that when Blackwell starts shipping in volume, you couldn’t give Hoppers away. And this is what I mean, and this makes sense. If anybody – if you’re still looking to buy a Hopper, don’t be afraid, it’s okay. [laughter] But, I’m the Chief Revenue Destroyer. My sales guys are going, “Oh, no, don’t say that.” [laughter]
“There are circumstances where Hopper is fine. That’s the best thing I could say about Hopper. [laughter] There are circumstances where you’re fine. Not many, if I had to take a swing. And so that’s kind of my point. When the technology is moving this fast, and because the workload is so intense, and you’re building these things that are factories, we really like you to invest in the right versions.”
Well, some of Supermicro’s customers listened to that, as did customers of Hewlett Packard Enterprise, and in its March quarter, which did not end until two weeks after Huang said that, Supermicro got pinched by companies eyeing Blackwell and Blackwell Ultra GPU systems, who did not cancel orders so much as push them out to get the new and improved compute engines and interconnects that come with Blackwell machines.
As best we can figure, Supermicro had about $1 billion in AI system revenue that was expected to close in the March quarter get pushed out into the future – some coming in the June quarter, and some probably pushed all the way out to the September quarter. And Nehal Chokshi, a stock analyst at Northland Securities, suggested that in addition to this, Supermicro had to take a $100 million writedown on Hopper inventory and then try to sell those systems to other customers. David Weigand, chief financial officer at Supermicro, did not shoot that number down or provide a better one.
Supermicro also pulled in its guidance for fiscal 2025 sales, and now expects sales to be between $21.8 billion and $22.6 billion, down from an expected $23.5 billion to $25 billion. If you do the distribution on those error bars, that is somewhere between $900 million to $3.2 billion in deals that will slip out of fiscal 2025 into fiscal 2026, which begins on July 1.
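For those who want to check our math, that slippage range falls out of simple interval arithmetic on the two guidance bands; here is a quick sketch, using only the guidance figures stated above:

```python
# Supermicro fiscal 2025 sales guidance bands, in billions of dollars.
old_low, old_high = 23.5, 25.0   # prior guidance
new_low, new_high = 21.8, 22.6   # revised guidance

# The smallest possible slip pairs the old low end with the new high end;
# the largest possible slip pairs the old high end with the new low end.
min_slip = old_low - new_high    # 0.9
max_slip = old_high - new_low    # 3.2

print(f"Slip range: ${min_slip:.1f} billion to ${max_slip:.1f} billion")
```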
Wall Street is jumpy right now, as we all know, and when given the opportunity to reaffirm the company’s $40 billion revenue guidance for fiscal 2026, which was laid out a few months ago, Charles Liang, Supermicro’s founder and chief executive officer, said that when the situation with tariffs, the economy, and customer demand becomes clearer, Supermicro will share guidance.
“We remain very confident with our mid-term and long-term growth,” Liang explained on the call with Wall Street analysts. “Especially with the Blackwell product line, we have very strong demand and also our coming soon data center building block total solution. We see lots of customers really interested in our datacenter total solutions. So demand growth will keep strong and yes, there is tariff and some macroeconomy uncertainty.”
Obviously, Supermicro is expecting a big boom in sales of Blackwell and Blackwell Ultra machines, and is looking forward to the future “Rubin” R100 and R200 GPUs next year, too, and the “Rubin Ultra” R300 GPUs in 2027.
Given what we think Nvidia expects from Blackwell Ultra and Rubin, and what we think will be exploding demand for rackscale systems for AI inference and a continuing deployment of massively scaled AI systems with smaller GPU coherency domains (which are cheaper to deploy because they have less NVSwitch content, which is partially offset by higher InfiniBand or Spectrum-X content), we think it is reasonable for Supermicro to get to a $40 billion run rate in the next twelve to eighteen months.
The fact that Nvidia supplied a roadmap out to 2028 during the GTC 2025 shindig allowed companies to align and re-align their plans for AI inference and training against that roadmap, and more than a few are going to intersect the future at different points. But make no mistake: The investment in Nvidia infrastructure will be enormous, and there is a nearly 100 percent chance that Supermicro will be the largest server maker in the world within a few quarters.
Judge for yourself how valuable that might be, but this may be the only way that Supermicro can grow its profits – by making it up in volume. Like Wal-Mart and Amazon.
Supermicro still has a tidy little side gig selling motherboards and enclosures to customers who want to build their own machinery, but it has long since become a maker of systems and now rackscale and rowscale supercomputers.
In the March quarter, Supermicro posted just a smidgen under $4.6 billion in sales, up 19.5 percent year on year but down 19 percent sequentially. After Hopper writedowns, we think operating income was $146.8 million and net income was $108.8 million.
Subsystems brought in $141 million, down 7.2 percent year on year, and systems brought in $4.46 billion, up 20.6 percent.
Supermicro ended the quarter with $2.54 billion in cash, which is useful when you build insanely expensive capital equipment.
Weigand said on the call with Wall Street analysts that more than 70 percent of the company’s revenues in the quarter were for GPU-powered AI systems, including those sold to enterprises in addition to the hyperscalers and cloud builders who buy gear from Supermicro. We peg that at 70.4 percent of sales, which works out to $3.24 billion. That was about the same revenue that Supermicro booked in its entire fiscal 2018 year, just for comparison.
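That AI system figure is just our estimated share applied to reported revenue; a back-of-the-envelope check (the 70.4 percent share is our estimate, not a disclosed number):

```python
revenue = 4.60     # March quarter sales, in billions (just a smidgen under $4.6 billion)
ai_share = 0.704   # our estimate of the GPU-powered AI system share of revenue

ai_revenue = revenue * ai_share
print(f"AI system revenue: ${ai_revenue:.2f} billion")  # ~ $3.24 billion
```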
These characterizations of how Supermicro’s sales are distributed across channels and customer types might not be particularly useful, but we have updated them and you can judge for yourself:
We would like to see a distribution of revenues that separates out hyperscale and cloud builder customers from other – and smaller – enterprises, and then a breakout of government and academic customers separate from this. We don’t care about edge and 5G for the purposes of this publication, but it seems clear this category has not taken off as many had expected. With the AI revolution, we think it will: as AI inference gets more distributed for latency reasons, this could be a big business.
Here is a distribution of Supermicro revenues by geography:
The company’s European and Asian sales have ballooned in recent quarters, but they have their own cycles based on the largest customers and their AI rollout plans. Supermicro is predominantly selling to US companies, and in Q3 F2025, the company’s pair of “greater than 10 percent of revenue” customers, who are located in the United States, slashed their aggregate spending by 49.5 percent sequentially. So we know who shifted their plans to realign with the Nvidia GPU roadmap. That said, the $1.66 billion in spending by these two customers in the quarter was still 13.2 percent higher than in the year-ago period.
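You can back out roughly what these two big customers spent in the prior quarter and in the year-ago quarter from the stated growth rates; a rough sketch, assuming the 49.5 percent decline is sequential and the 13.2 percent growth is year on year:

```python
current = 1.66         # billions, combined spend of the two 10%-plus US customers
seq_decline = 0.495    # down 49.5 percent from the prior quarter
yoy_growth = 0.132     # up 13.2 percent from the year-ago quarter

# Invert the growth rates to recover the earlier figures.
prior_quarter = current / (1 - seq_decline)   # ~ $3.29 billion
year_ago = current / (1 + yoy_growth)         # ~ $1.47 billion

print(f"Prior quarter: ${prior_quarter:.2f} billion")
print(f"Year ago:      ${year_ago:.2f} billion")
```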
Other customers accounted for $2.94 billion in sales, up a very healthy 23.3 percent year on year and up 22.6 percent sequentially.
Considering all that is going on, Supermicro negotiated the situation about as well as anyone could have. And now with its books knocked into order and controls put into place, the company is ready to push toward that $40 billion annual sales level through its factories located in the United States, the Netherlands, Malaysia, and Taiwan. The company can navigate the tariff sea mines better than most other server makers, and it will benefit from that fact.
At the moment, the Supermicro factory in Fremont, California can pump out 5,000 racks per month, and 2,000 of them can be Nvidia GB200 NVL72 and GB300 NVL72 rackscale systems aimed at chain of thought inference. Other factories are ramping up their rackscale capabilities, which will have to happen if Supermicro is going to double its server business.
Looking ahead only one quarter, though, the math says that at the midpoint of revenue guidance Supermicro will do $6 billion in sales, giving it $22.2 billion in sales for the 2025 fiscal year. We reckon about $407 million of that Q4 F2025 revenue will drop to the bottom line, which is 6.8 percent of sales, and that AI systems will comprise 71 percent of revenues, or $4.24 billion.
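Those Q4 projections reduce to the same kind of ratio arithmetic; here is a quick check of our estimates (the net income and AI revenue figures are ours, not guidance):

```python
q4_sales = 6.00      # billions, midpoint of Q4 F2025 revenue guidance
net_income = 0.407   # billions, our estimate of Q4 net income
ai_revenue = 4.24    # billions, our estimate of Q4 AI system revenue

margin = net_income / q4_sales    # share of sales dropping to the bottom line
ai_share = ai_revenue / q4_sales  # AI systems as a share of revenue

print(f"Net margin:      {margin:.1%}")   # ~ 6.8 percent
print(f"AI system share: {ai_share:.0%}") # ~ 71 percent
```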
I know this article is on Supermicro financials, but suppose I didn’t care about AI or money and was just interested in the technology behind the Hopper and Blackwell GPUs.
Blackwell is on 4NP while Hopper is on 4N. A 40-fold performance increase in AI can’t be attributed to the process node, so I can’t help but wonder whether something was taken away when optimising for AI.
Said differently, is it possible traditional 64-bit scientific computing runs slower on Blackwell than Hopper? How do the speeds compare for 64-bit non-AI tasks?
I have done this math.
https://www.nextplatform.com/2025/02/20/sizing-up-compute-engines-for-hpc-work-at-64-bit-precision/
Too soon for the “There is no point in FP1” T-shirts?
No! I want one!
There is an issue with accelerated technology deployments: creative destruction that prematurely destroys the capital value of every last product generation, plus any overage and secondary supply, which then presses down the resale value of back-generation “used hardware” even though its utility tends to remain.
Used hardware whose resale value is re-captured spreads learning through proliferation and continues to fund primary procurements.
That is now the case for Ada and Hopper, given the volume of platform-validated channel inventory accumulating “capital value” for sale. It is also the reason why Nvidia dGPU generations have at least five resale lives on hand-me-down CUDA standard platform value. That supports learning and proliferation, which sustains share and guards against competitive disruption that is not necessarily complimentary.
Currently, there are a lot of L40S and H100 servers in the channel for resale, and what a great place to start for AI/ML on used gear with an understood price/performance equation.
Jensen Huang: “But I said before that when Blackwell starts shipping in volume, you couldn’t give Hoppers away. And this is what I mean, and this makes sense. If anybody – if you’re still looking to buy a Hopper, don’t be afraid, it’s okay.”
Where many an IT shop will say: if it works, don’t fix it. Or, in hyperscale, just add to the infrastructure, which is ultimately a cost equation on what is a workhorse.
There is no Blackwell datacenter accelerator in the channel yet that I am aware of. DGX can be ordered through Nvidia OEM dealers and master distribution. Similarly for Supermicro; we’ll all know more after the Nvidia Q1 financial call.
I will note that creative destruction, when purposely relied upon by “cartelized” primary sellers to take back supply and price-making control from the open market at every next “new” product category introduction, can be an antitrust violation, economically speaking.
Intel found this out over and over again in 1998, 2001, 2005, and 2008, destroying the accumulating capital value of used gear for resale and eventually causing a revolt – Intel in this example prematurely turning validated used knowns into rotting tomatoes. It is the premature aspect that is the holistic economic issue, and question, with creative destruction.
There is a place and time for moving to the utility and business value of every next generation, but none before its time: purposely destroying the back generation warps industry space, open space, society space, and open commerce.
Mike Bruzzone, Camp Marketing
On the Intel side