
An interesting thought experiment to do in 2025 when looking at the financial results of just about any of the key compute, storage, and networking component and system suppliers is to imagine how any given company’s numbers would look if you backed out the AI portions of its business.
In most cases, the numbers would turn out to be scary. One need only look at the results of beleaguered Intel, which does not really have much of an AI play at all, to see what we mean.
The good news for memory and flash maker Micron Technology is that over the past several years it has come up with a number of AI plays, and its business is benefiting from them.
Micron has become a third – and now significant – supplier of the HBM stacked memory that is used with GPUs and other kinds of AI accelerators. Notably, Micron's eight-high HBM3E stacks are used in Nvidia's GB200 systems, which have been ramping for several months now, and its twelve-high HBM3E stacks will be used in the higher-performance and more capacious GB300 systems based on Nvidia's "Blackwell Ultra" GPU accelerators.
The company has also created a server-class variant of low power DDR5 (LPDDR5) memory, which Nvidia uses with the "Grace" CG100 Arm server processor it created as a host controller for its MGX accelerated computing nodes and the rackscale NVL systems that make use of them. The first iteration of the LPDDR5X memory that Micron created in conjunction with Nvidia was soldered onto the Grace system board. But with the later version being used with the GB300 package and the "Blackwell Ultra" B300 GPU, bandwidth is being boosted using LPDDR5X memory in the SOCAMM modular form factor, which is akin to the CAMM modules used in laptops; it does not require soldering and allows the memory to be replaced or upgraded as needed.
Micron also sells high-end TLC and QLC flash modules as well as high-speed DDR5 server memory into AI systems.
Add it all up, and Micron is making billions of dollars a quarter from the AI server business that it might otherwise not have been able to capture or that might not exist at all had the GenAI Boom not happened the way it has.
Let’s drill down into the numbers.
In the quarter ended in February, which is Micron's second quarter of fiscal 2025, revenues rose by 38.3 percent to $8.05 billion, operating income increased by nearly an order of magnitude to $1.77 billion, and net income doubled to $1.58 billion. Revenue and both kinds of income were down sequentially, but that was because of everything else that Micron sells, not its AI-related products.
Micron ended the quarter with $8.22 billion in cash and short-term investments, plus another $1.37 billion in long-term investments, and the top brass at Micron said on the call going over the numbers with Wall Street that the company was on track to plow $14 billion into capital expenses in fiscal 2025. Micron spent $3.1 billion on capital expenses in fiscal Q2, net of proceeds from the US government under the CHIPS Act, mostly as it builds out its fabs in Idaho and New York in the United States and overseas in Singapore.
On the whole, Micron's DRAM main memory and NAND flash memory businesses have largely recovered and returned to normal seasonality after the bust period that began in 2022, exhibiting the typical boom-bust cycle of the memory business. It remains to be seen if the DRAM business can defy this cycle a bit because demand for HBM memory is exceeding supply, but NAND flash seems to be riding down its cycle as normal.
In fiscal Q2, Micron's DRAM memory sales rose by 47.3 percent year on year to $6.12 billion, but fell 4.3 percent sequentially from fiscal Q1. The NAND flash business rose by 18.4 percent year on year to $1.86 billion, but was off 17.2 percent sequentially.
Importantly, the Compute and Networking business unit at Micron – which is where all of this high-end AI business is located – is making money. More money than it ever has, in fact, and it is approaching the operating income levels that Intel's Data Center Group posted at its peak of profitability in the late 2010s. Which is high praise until you contemplate just how profitable Nvidia's datacenter business is today.
Let’s drill down into the Compute and Networking group.
In the quarter, Compute and Networking revenues rose by 3.8 percent sequentially and more than doubled year on year to hit $4.56 billion. Operating income exploded by a factor of 68.5X year on year to $1.92 billion, which represented 12.2 percent growth in profits sequentially. That sequential profit growth rate is more than three times the sequential revenue growth rate, which just goes to show you how good HBM and LPDDR5X are for Micron's business.
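To put some numbers on that leverage, here is a quick back-of-the-envelope check (this is our arithmetic from the figures above; the fiscal Q1 bases are implied by the growth rates, not separately reported):

```python
# Compute and Networking, fiscal Q2 2025, in $ billions
q2_revenue = 4.56
q2_op_income = 1.92

# Back out the fiscal Q1 bases implied by the sequential growth rates
q1_revenue = q2_revenue / 1.038      # revenue up 3.8 percent -> ~$4.39 billion
q1_op_income = q2_op_income / 1.122  # operating income up 12.2 percent -> ~$1.71 billion

# Profits grew more than three times as fast as revenues, quarter on quarter
print(round(12.2 / 3.8, 1))  # 3.2
```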
On the call, Sanjay Mehrotra, Micron's chief executive officer, said that HBM3E memory deliveries were ahead of plan and revenues also exceeded expectations, with HBM revenue breaking through $1 billion for the first time.
“HBM3E continues to do well with eight-high, our yields, our capacity ramp is going well, our execution is going well,” Mehrotra said on the call. “And all that experience of eight-high in terms of capacity ramp as well as yield ramp will, of course, help us as we ramp our twelve-high. You know, we have announced before that we are now in volume production with our twelve-high. Just like any other new product, and these are highly complex products, HBM is the most complex product ever made in the industry. These kind of complex products, of course, in the early stages, there is a yield ramp. We expect twelve-high to have a premium over eight-high and of course, will continue to be accretive to our DRAM margins nicely as well.”
Mehrotra reiterated what he said a quarter ago: that by the end of calendar 2025, Micron's share of the HBM market would be in line with its share of the overall DRAM market. Depending on how you carve it up, Micron has somewhere between 20 percent and 25 percent share of the more standard DRAM market. And interestingly, Micron has upped the total addressable market for HBM memory in calendar 2025 from the $30 billion it previously forecast to $35 billion now, and says that the HBM TAM will be on the order of $100 billion by 2030. Obviously, 20 percent to 25 percent of that is a huge business, and will utterly dwarf everything else that Micron is doing.
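To see what that share could be worth in dollars, here is the simple arithmetic (a sketch using the share range and TAM figures above, not anything Micron has guided to):

```python
# Micron's potential HBM revenue at a 20 percent to 25 percent share
for year, tam in [(2025, 35), (2030, 100)]:
    low, high = 0.20 * tam, 0.25 * tam
    print(f"{year}: ${low:.2f} billion to ${high:.2f} billion of a ${tam} billion TAM")
# 2025: $7.00 billion to $8.75 billion
# 2030: $20.00 billion to $25.00 billion
```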
Much like Nvidia’s datacenter business also utterly dwarfs everything else that chip and system maker is doing.
As best we can figure from our model, Micron sold $1.14 billion in HBM memory in fiscal Q2, up 52 percent sequentially and up by a factor of 19X year on year. We think that high capacity server memory and the LPDDR5X memory being used in AI compute nodes (and, we think, at some point in the future in nodes not architected by Nvidia) added up to $1.05 billion in sales in the quarter, up by a factor of 23X year on year and up 3.5X sequentially.
That gives you a very good sense of the ramp of Grace CPUs in Nvidia systems, particularly since the configuration of LPDDR5X memory on the Grace CPU is static. Micron's LPDDR5X revenue growth is directly derived from growth in Nvidia's node sales.
The other interesting thing is what happens if you take HBM, high capacity server DRAM, and LPDDR5X memory out of the overall DRAM numbers. If you do that, the core DRAM business, which is a mix of DDR4 and DDR5 memory used in generic PCs and servers, fell by 26.4 percent sequentially to $3.94 billion, which also represented a 2.8 percent decline year on year. We strongly suspect that if you took AI sales out of the NAND flash business, you would see a similar shape to the curve, but perhaps with steeper declines.
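Here is that back-out in miniature, so you can see where the core DRAM figures come from (the fiscal Q1 bases are derived from the sequential growth rates in our model, and rounding puts the results within a few tens of millions of the figures cited above):

```python
# Fiscal Q2 2025 DRAM revenue, in $ billions, reported and estimated
dram_total = 6.12    # total DRAM revenue, as reported by Micron
hbm = 1.14           # HBM, our estimate, up 52 percent sequentially
hicap_lp = 1.05      # high capacity server DRAM plus LPDDR5X, our estimate, up 3.5X

core_dram_q2 = dram_total - hbm - hicap_lp  # ~$3.93 billion

# Derive the fiscal Q1 bases from the sequential changes
dram_total_q1 = dram_total / (1 - 0.043)    # total DRAM fell 4.3 percent sequentially
core_dram_q1 = dram_total_q1 - hbm / 1.52 - hicap_lp / 3.5

print(round(core_dram_q2, 2))                     # 3.93
print(round(1 - core_dram_q2 / core_dram_q1, 3))  # ~0.265, a 26 percent-plus drop
```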
Looking ahead, Micron is forecasting that DRAM and NAND bit shipments will grow in fiscal Q3, but that gross margins will be squeezed due to a recovery in sales of lower-margin consumer products and ongoing underutilization in the flash portions of its fab operations. Micron expects revenues of $8.8 billion, plus or minus $200 million, and capital expenses north of $3 billion. Interestingly, HBM memory sales will grow sequentially in each quarter of 2025. That's about as much as Micron is willing to say about its fiscal Q4 2025 right now.