Skyrocketing HBM Will Push Micron Through $45 Billion And Beyond

Micron Technology has not just filled a capacity shortfall in the high bandwidth stacked DRAM that feeds GPU and XPU accelerators for AI and HPC. It has created DRAM that stacks higher and runs faster than the competition from SK Hynix and Samsung.

In doing so, Micron has found its footing in this niche DRAM market after a failed attempt a decade ago to establish a different stacked memory standard, the Hybrid Memory Cube MCDRAM it created in partnership with Intel.

We were not going to bring up Micron’s similarly failed partnership with Intel on 3D XPoint persistent ReRAM memory, which promised nearly the performance of DRAM at a cost only moderately above that of flash memory. But that is part of the exotic memory story at Micron, too.

Neither MCDRAM nor 3D XPoint worked out, but Micron learned a thing or two from the experience with Intel, which sought to keep both technologies tied closely to its own Xeon server CPU platforms. This had the effect of limiting their volumes, which is a disastrous thing to do in the memory business. And so Micron got into HBM, came to market with faster and taller HBM stacks than its rivals, and secured big contracts with Nvidia and AMD.

The third time is indeed the charm.

Specifically, Nvidia used six stacks of Micron’s HBM3E in the “Hopper” H200 GPU accelerators to deliver 141 GB of capacity, and is using eight stacks of HBM3E from Micron in its “Blackwell” B200 follow-on to deliver 192 GB. (Both use eight-high stacks of 3 GB DRAM chips.)

AMD is using eight-high HBM3E stacks from Micron in its Instinct MI325X GPU accelerators to deliver 256 GB of capacity. (That’s eight 4 GB devices glued vertically together for 32 GB per stack, times eight stacks, one for each GPU chiplet.) AMD is also using twelve-high stacks of 3 GB DRAM at 36 GB each to deliver 288 GB of capacity with the Instinct MI350 and Instinct MI355X accelerators. We know what you are thinking: why not just use eight twelve-high stacks of 4 GB DRAM to deliver 384 GB of HBM3E on the MI355X? There must be a reason, or reasons: such a stack is presumably too expensive, too hot, or too tough to manufacture.
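To make the stack math concrete, here is a minimal sketch in Python that reproduces the capacity figures above from stack count, stack height, and die density. All figures are the ones cited in this article; note that the H200 exposes 141 GB of its 144 GB of raw capacity.

```python
def hbm_capacity_gb(stacks: int, dies_per_stack: int, die_gb: int) -> int:
    """Total HBM capacity: stack count x dies per stack x density per die."""
    return stacks * dies_per_stack * die_gb

print(hbm_capacity_gb(stacks=6, dies_per_stack=8, die_gb=3))   # H200:   144 GB raw (141 GB exposed)
print(hbm_capacity_gb(stacks=8, dies_per_stack=8, die_gb=3))   # B200:   192 GB
print(hbm_capacity_gb(stacks=8, dies_per_stack=8, die_gb=4))   # MI325X: 256 GB
print(hbm_capacity_gb(stacks=8, dies_per_stack=12, die_gb=3))  # MI355X: 288 GB
print(hbm_capacity_gb(stacks=8, dies_per_stack=12, die_gb=4))  # the hypothetical 384 GB MI355X
```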

It is likely that Micron will get a slice of the future Nvidia “Rubin” R200 GPUs and AMD “Altair” MI400 GPUs, and we think that slice will be directly proportional to its manufacturing capacity, given that Nvidia and AMD could consume all of the HBM output from Micron, SK Hynix, and Samsung if it were not for the fact that antitrust regulators would get involved.

The three HBM manufacturers have to be careful to leave some stacked DRAM for the hyperscalers and cloud builders that are making their own XPU accelerators. If they do not, you can rest assured that Google, Amazon Web Services, Microsoft, and Meta Platforms would be happy to sue the crap out of Nvidia or AMD or the HBM makers, or any combination of these companies, to get their fair share.

So far, with all of 2025’s HBM capacity long since allocated and probably the bulk of 2026’s expected capacity, you gotta figure everyone is watching this pretty carefully. With Nvidia being so rich, it can pay a premium for HBM, as we explained more than a year ago in He Who Can Pay Top Dollar For HBM Memory Controls AI Training. But Nvidia co-founder and chief executive officer Jensen Huang also has to be careful to leave some HBM on the table for Big Green’s many competitors in the AI arms race or that sueball will be lobbed at it as sure as the sky is Big Blue.

With that in mind, we consider Micron’s financial results for the third quarter of fiscal 2025 ended in May, in which the company posted the highest revenue in its history and began its climb back to the profitability it saw from the regular DRAM business in late 2021 and early 2022, a boom cycle that ended when a DRAM capacity glut caused the predictable bust cycle.

In the quarter ended on May 29, Micron brought in $9.3 billion in revenues across its various memory products, up 36.6 percent year on year and up 15.5 percent sequentially from Q2 F2025. Operating income more than tripled from the year-ago period to $2.17 billion. Net income, which is what the company and investors alike care about most, was up by a factor of 5.7X to $1.89 billion and up 19.1 percent sequentially. Net income was a very good 20.3 percent of revenues, which is a level of profitability that any manufacturer, much less one in a capital-intensive business like running a foundry, has a right to be proud of. This is, however, still less than half the profitability level of compute engine and network ASIC foundry Taiwan Semiconductor Manufacturing Co. But that just tells you that TSMC has a monopoly on the manufacturing of those devices in a way that Micron absolutely does not. SK Hynix has the bulk of the HBM business, and Samsung is close behind. But Micron will get its 20 percent share of HBM, just as it has its 20 percent share of regular DRAM, over the long haul.
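As a quick sanity check on those ratios, here is a back-of-the-envelope sketch using only the figures cited above; the year-ago numbers shown are implied by the stated growth factors rather than separately reported here.

```python
revenue = 9.30     # $ billion, fiscal Q3 2025
net_income = 1.89  # $ billion, fiscal Q3 2025

print(f"Net margin: {net_income / revenue:.1%}")  # ~20.3 percent of revenues

# Year-ago figures implied by the stated growth factors:
print(f"Implied year-ago revenue: ${revenue / 1.366:.2f} billion")      # ~$6.81 billion
print(f"Implied year-ago net income: ${net_income / 5.7:.2f} billion")  # ~$0.33 billion
```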

The HBM market was around $18 billion in calendar 2024, and it is expected to grow to around $35 billion in 2025, according to Sanjay Mehrotra, Micron’s chief executive officer, who spoke about the numbers on a call with Wall Street analysts.

Micron ended the quarter with $10.81 billion in cash and equivalents, which gives it some breathing room as it builds out its foundries. The company has previously said that over the next two decades or more, it will invest $200 billion in the United States, with $150 billion in foundries and $50 billion in research and development. The plan is to spend around $14 billion for capital expenses in fiscal 2025 across various foundry sites.

Mehrotra said that Micron’s datacenter business more than doubled year over year and set a new record, but Micron does not yet carve out datacenter as a separate category in its financial reporting. This is the fourth record quarter for its datacenter business in a row, and if current trends persist, we think Micron will start breaking out datacenter from consumer and other customer sets.

Thanks to a nearly 50 percent sequential increase in HBM revenues, which we estimate to be $1.69 billion, higher than the $1.31 billion we had expected thirteen weeks ago, the DRAM business overall rose by 50.7 percent to $7.07 billion. Micron is still the only manufacturer of low power DRAM (LPDDR5X) memory for servers, which is used with Nvidia’s “Grace” CG100 CPUs that are often paired with its Hopper and Blackwell GPUs. During the quarter, Mehrotra said that Micron started sampling future LPDDR5 memory based on its 1-gamma DRAM, the company’s first node etched with extreme ultraviolet (EUV) lithography.

“The node provides a 30 percent improvement in bit density, more than 20 percent lower power and up to 15 percent higher performance compared to 1-beta DRAM,” Mehrotra explained. “We will leverage 1-gamma across our entire DRAM product portfolio to benefit from this leadership technology.”

Mehrotra added that the yield and volume ramp for its twelve-high HBM3E “is progressing extremely well” and that twelve-high shipments will cross over eight-high shipments in Q4 F2025. Sometime in the second half of this calendar year, Mehrotra said, Micron will have an HBM share that is commensurate with its share of the regular DRAM market, which is around 20 percent; that is a pretty fast ramp. And with HBM expected to be a $100 billion market by 2030, this could indeed be a $25 billion to $30 billion business for Micron, provided it keeps leapfrogging SK Hynix and Samsung technically and pushes its share a bit beyond that 20 percent level.
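Here is what that share math implies, using the market sizes Mehrotra cited and treating the round-number shares as our assumptions:

```python
hbm_market_2025 = 35.0   # $ billion, per Mehrotra
hbm_market_2030 = 100.0  # $ billion, per Mehrotra
dram_share = 0.20        # Micron's rough share of the regular DRAM market

# If Micron's HBM share matches its DRAM share by late calendar 2025:
print(f"Implied annualized HBM revenue: ${hbm_market_2025 * dram_share:.0f} billion")  # ~$7 billion

# A $25 billion to $30 billion HBM business in 2030 implies share gains:
for target in (25.0, 30.0):
    print(f"${target:.0f} billion implies a {target / hbm_market_2030:.0%} share")  # 25% and 30%
```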

The future HBM4 memory coming from Micron will use its well-established 1-beta process and will have a custom logic base die that will help its HBM4 memory deliver more than 2 TB/sec per stack, which is 60 percent more bandwidth than a stack of HBM3E memory. HBM4 will also have 20 percent lower power consumption than its twelve-high HBM3E. (Presumably this comparison is to an eight-high stack of HBM4.) HBM4 memory from Micron has already been sampled to multiple customers, presumably more than the four paying HBM customers it currently has, and is expected to ramp with the launch of various compute engines in calendar 2026.
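That bandwidth claim pencils out from interface width times per-pin signaling rate. A minimal sketch, assuming the JEDEC HBM4 interface width of 2,048 bits (double the 1,024 bits of HBM3E) and Micron’s published 9.2 Gb/sec pin rate for HBM3E; the HBM4 pin rate shown is the 8 Gb/sec JEDEC baseline, and the rate Micron actually ships at is our assumption.

```python
def stack_bandwidth_tbs(bus_width_bits: int, pin_gbps: float) -> float:
    """Per-stack bandwidth in TB/sec: interface width x per-pin rate, Gb/sec to TB/sec."""
    return bus_width_bits * pin_gbps / 8 / 1000

print(f"HBM3E: {stack_bandwidth_tbs(1024, 9.2):.2f} TB/sec")  # ~1.18 TB/sec per stack
print(f"HBM4:  {stack_bandwidth_tbs(2048, 8.0):.2f} TB/sec")  # ~2.05 TB/sec per stack
```

Doubling the interface does the heavy lifting here: even at a pin rate below HBM3E’s, the wider bus clears 2 TB/sec per stack.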

Our model suggests that Micron’s high capacity server DIMMs and LPDDR5 memory drove $1.23 billion in sales in fiscal Q3, up by a factor of 15.4X from the year-ago period. If you take out these two types of memory along with the HBM memory, Micron is left with a plain vanilla DRAM business that brought in about $4.15 billion, down 7.6 percent. This part of the memory business – PC and smartphone memory plus low-end server stuff – is still kinda bustish.
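Spelled out, the back-out arithmetic looks like this; the HBM and DIMM-plus-LPDDR5 figures are our estimates, as noted above.

```python
dram_total = 7.07        # $ billion, total DRAM revenue in fiscal Q3 (reported)
hbm = 1.69               # $ billion, our HBM estimate
dimm_plus_lpddr5 = 1.23  # $ billion, our high capacity DIMM plus LPDDR5 estimate

vanilla = dram_total - hbm - dimm_plus_lpddr5
print(f"Plain vanilla DRAM: ${vanilla:.2f} billion")  # ~$4.15 billion, down 7.6 percent
```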

The Compute and Networking business unit (CNBU) drove $5.07 billion in sales, up 97 percent year on year and up 11.1 percent sequentially, and delivered $2.18 billion in operating profit, up 4.9X and accounting for 43 percent of CNBU revenues.

The other part of Micron we care about, the Storage business unit that makes enterprise flash, was up 7.2 percent to $1.45 billion, but it lost $9 million. Sales and profits alike have been shrinking sequentially for the storage business for the past two quarters. That said, Micron has had three record quarters of datacenter flash drive sales in a row, so this is helping boost revenue growth even if it might not be at sufficient volumes to help with overall company profitability.

“Looking ahead to fiscal Q4, we see a robust demand environment and expect to grow revenue by 15 percent sequentially to a record $10.7 billion at guidance midpoint,” Mehrotra said on the call.

That sets up Micron to be a $45 billion memory maker in fiscal 2026 – if not larger.


2 Comments

  1. In my opinion Intel and Micron’s 3D XPoint had the possibility of being a Christensen-style disruptive technology, but it was cancelled because big companies with established products tend to cancel new products simply because they don’t earn as well as the old products.

    Said another way, Intel tried to monetize Optane before software had caught up to the point where 3D XPoint had established use cases. Now that the AI rush has made big data even bigger, fast, accessible persistent memory seems more useful to me than it did just five years ago.

    3D XPoint is, of course, completely different from HBM. Maybe both could be earning billions right now.

    • I agree. Intel tried to keep 3D XPoint for itself, and that killed it. We need persistent memory in high capacities and at low cost.

      Imagine, if you will, a very slow moving but massively parallel computer running at 1 GHz with memory and compute in absolute balance and not burning so much energy. . . .
