
Micron Humming Along On All Memory Cylinders

The United States may not have an indigenous foundry that makes high performance XPU compute engines for AI and HPC applications, but it certainly does have a home-based maker of high performance memory in Micron Technology. And that is a good hedge in a world that is increasingly political and that needs more HBM stacked memory than either Samsung or SK Hynix, both Korean firms, can supply.

Micron was late to the HBM game, launching its HBM2E memory in June 2021 when Samsung and SK Hynix already had delivered two generations of stacked DRAM memory – HBM and HBM2 – to the likes of Nvidia, AMD, Intel, and others building compute engines that needed more bandwidth than regular DRAM main memory working over a normal DDR memory bus could deliver. Micron was perhaps sidetracked by its 3D XPoint memory partnership with Intel, which ended in a whimper for both companies. But the demand for HBM soared as AI took off, and so Micron got aggressive with its HBM designs, skipping the HBM3 generation entirely and focusing on putting out the best HBM3E (meaning the highest clock speeds and the tallest stacks). And last year, Micron was the sole memory maker supplying HBM3E memory for the upgraded H200 GPUs from Nvidia, sending its HBM memory business into the stratosphere.

At the same time, Micron was tapped by Nvidia to supply low-power DDR5 laptop memory that has been hardened for server workloads, which is used in conjunction with the “Grace” CG100 Arm server processor, the host CPU in the Nvidia rackscale computers that have become the gold standard on the clouds and in many an enterprise datacenter. To this day, Micron is still the only supplier of LPDDR5 memory to Nvidia for Grace CPUs, and it will likely be the only one supplying such on-board memory for the future “Vera” Arm processor coming next year.

And, to make matters better, the general purpose server market is finally, after a tough hiatus, undergoing a much-needed upgrade cycle as enterprises as well as clouds and hyperscalers consolidate web, application, and database workloads onto fewer servers using a lot less space and power to free up room for hot and power-hungry XPU clusters to do GenAI training and inference. And the CXL memory business is also adding to the mix, although how much remains to be seen.

After many tough years, fiscal 2025 is a good year to be Micron.

In the August quarter, which is the last one of Micron’s fiscal 2025, revenues were up 46 percent year on year to $11.32 billion. This was also 21.7 percent sequential growth from fiscal Q3, which is a pretty good number for any company in any business.

After making some heavy investments over the years to get a slew of products to market to drive high performance in the datacenter, operating income is on the rise, and net income is rising even faster. In the August quarter, operating income was up 2.4X to $3.65 billion, and net income was up 3.6X to $3.2 billion. That net income represented 28.3 percent of revenues. This is a level of profitability that Micron has not seen since the server memory shortages of 2017 through 2019 and is far better than the $7.1 billion in losses that Micron racked up in the five quarters from Q1 F2023 through Q1 F2024 inclusive as it was rejiggering its business for the future that it forecasted and fortunately intersected with.

It takes a lot of money and continual advancements in ever-shrinking process technologies to be in the foundry business, and Micron has done a good job on this front. If anything, the company has benefitted slightly more than its capital expense flows might suggest was possible:

As you can see, this being the memory business, there is a boom and bust cycle to it, and flash, which Micron also makes, is a kind of memory that is plagued by the same cycles, which are driven by a dearth of new technology followed by overcapacity that drives prices down for all suppliers. (Memory is like oil in that way, except memory keeps getting faster and denser and oil reserves turn into gas reserves and thus the fuel gets more ephemeral as you go through the petroleum cycle.)

In Q4 F2025, Micron spent $4.93 billion on capital expenses, which was 43.6 percent of revenues and which is a lot for any company – particularly one that is vertically integrated like Micron, designing products as well as etching and selling them. (This stands in contrast with Taiwan Semiconductor Manufacturing Co, the world’s largest and best compute and networking engine foundry, which doesn’t design stuff but rather just etches and packages what others design.) Micron has some of the toughest customers in the world, who do not want to pay a premium for anything because they buy in high volumes.

Lucky for Micron, demand for many of its products, and especially HBM stacked memory, is greater than the world’s supply.

Here is the breakdown of Micron’s sales across DRAM, NAND, and NOR storage:

DRAM sales across all types rose by 67.8 percent to $8.94 billion, setting a new record high that will surely be broken again and again in the coming quarters if the GenAI boom continues. NAND flash memory sales were actually down 4.3 percent to $2.26 billion, but were up 5 percent sequentially. NOR flash and other products make up a minuscule part of the business, but nearly doubled to $113 million in Q4.

Back in April, Micron said that it would be rejiggering its business units, and this is the first quarter where the new categories were presented:

Thus far, Micron has not recast its numbers in this new way going back several years; it has only done so for the year-ago and prior quarters compared to the current one. We will fill in the blanks as they become available.

The ones we care about here at The Next Platform are the top two. The Cloud Memory business unit sells HBM, CXL, and DDR DRAM memory as well as datacenter flash drives, non-volatile (battery backed) DDR DIMMs, and RDIMM memory. The key customers for these products are the hyperscalers and cloud builders and enterprises and service providers that are operating at scale. Cloud Memory sales were up 3.1X to $4.53 billion in the quarter, and operating income was $2.18 billion, or 48 percent of revenues. This business is what is lifting Micron.

The Core Datacenter business unit sells NVM-Express flash drives and plain vanilla flash drives as well as DDR server memory and LRDIMM and RDIMM memory for various kinds of devices as well as some variants of non-volatile DRAM. In the fourth quarter, this business was off 23 percent to $1.58 billion, and operating income was $394 million, down 28.7 percent and representing 25 percent of sales. This other part of the Micron datacenter business did grow revenues 3.1 percent sequentially, and operating income rose by 28.8 percent from Q3 F2025 as well. So it is a tough year-on-year compare, but this business is coming out of a trough.

Add the two together and you get the Micron datacenter business, which had $6.12 billion in sales, up 75 percent, and operating income of $2.58 billion, up 150 percent.

Here are the trend lines for revenues and operating profits for the Micron datacenter business, using the old Compute and Networking business unit as a proxy for all but the last two quarters shown:

This datacenter business is as healthy as it has been in a decade, but the operating income as a share of revenue is lower than when it peaked in fiscal 2018.

Micron does not break out its HBM, high capacity server DRAM, and LPDDR5 sales for datacenter compute engines separately from each other, but it has given us enough hints in the past two years to do some modeling.

Based on what Micron has said, we reckon that it sold just a tad under $2 billion in HBM memory in the fourth quarter ($1.985 billion, if you want four significant digits in an informed guesstimate). That represents 17.7 percent growth sequentially from the third quarter and a factor of 3.78X more than it sold in HBM in the year-ago period. Our model suggests that sales of high capacity server DRAM and LPDDR5 memory for servers were up 10 percent sequentially to $1.32 billion, which represented 3.92X growth year on year. Add it up, and high-end compute engine memory represented $3.31 billion in sales, up 3.83X from a year ago. LPDDR5 memory was up over 50 percent sequentially, by the way. (We don’t know enough to break it out separately as yet.)
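For those who want to check our arithmetic, the modeled figures hang together; a quick sketch in Python (using our estimates, not Micron’s reported numbers) backs out the implied year-ago quarters and confirms the combined growth factor:

```python
# Sanity-check the modeled Q4 F2025 figures for Micron's high-end
# compute-engine memory. All dollar figures are in billions and are
# our estimates, not figures from Micron's filings.

hbm_q4 = 1.985          # estimated HBM revenue, Q4 F2025
hbm_yoy_factor = 3.78   # 3.78X the year-ago quarter

hicap_q4 = 1.32         # high capacity server DRAM + LPDDR5, Q4 F2025
hicap_yoy_factor = 3.92

# Back out the implied year-ago quarters from the growth factors.
hbm_year_ago = hbm_q4 / hbm_yoy_factor        # roughly $0.53 billion
hicap_year_ago = hicap_q4 / hicap_yoy_factor  # roughly $0.34 billion

total_q4 = hbm_q4 + hicap_q4
total_yoy = total_q4 / (hbm_year_ago + hicap_year_ago)

print(f"Total high-end memory, Q4 F2025: ${total_q4:.3f} billion")
print(f"Implied year-on-year growth factor: {total_yoy:.2f}X")
```

The combined total comes to $3.305 billion, growing 3.83X year on year, which is where the figures in the paragraph above come from.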

If you extract that high-end memory out of the total DRAM sales for Micron, then other lower-end DRAM (including that sold into PCs, mobile devices, embedded gear, and autos) added up to $5.63 billion, up 26.2 percent. Even the low-end stuff is selling.

“Robust datacenter demand, including the uptick in server unit growth, has contributed to a tight industry DRAM environment and strengthened NAND market conditions,” Sanjay Mehrotra, Micron’s chief executive officer, said on a call with Wall Street analysts going over the numbers. “Additionally, broadening of demand across end markets has also constrained DRAM supply. On the supply side, we expect low supplier inventories, constrained node migration as industry supports extended D4 and LP4 end-of-life, longer lead times and higher costs globally for new wafer capacity, all to limit the pace of supply growth for DRAM in 2026. In calendar 2026, we anticipate further DRAM supply tightness in the industry and continued strengthening in NAND market conditions. Over the medium term, we anticipate industry bit demand growth of mid-teens CAGR for both DRAM and NAND.”

Mehrotra added that Micron had spent $13.8 billion on capex in fiscal 2025, and would do more than that in fiscal 2026. The company plans to spend $4.5 billion or so in Q1 F2026 on capital expenses, and said further that this is the rate to expect each quarter for the year. So expect about $18 billion in capex from Micron this fiscal year, give or take some wiggle room each quarter. (Both the $13.8 billion and $18 billion are net of any incentives received from the governments of the United States, Japan, and Singapore, where Micron has foundry operations. Micron had $2 billion in such incentives in fiscal 2025.)
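The capex run-rate math is simple enough to sketch; the figures below come straight from the guidance above, and the percentage increase over fiscal 2025 is our own derivation:

```python
# Rough capex run-rate implied by Micron's fiscal 2026 guidance:
# about $4.5 billion per quarter, net of government incentives.
# Figures are in billions of dollars, from the earnings call.

quarterly_capex = 4.5
fy2026_capex = quarterly_capex * 4   # four quarters at the guided rate
fy2025_capex = 13.8                  # actual fiscal 2025 capex, net

increase_pct = 100 * (fy2026_capex / fy2025_capex - 1)
print(f"Implied FY2026 capex: ${fy2026_capex:.0f} billion")
print(f"Increase over FY2025: {increase_pct:.0f} percent")
```

That works out to around a 30 percent boost in capital spending year on year, give or take the quarterly wiggle room.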

Interestingly, the shift of regular DRAM to its 1γ memory process would free up 1β process capacity to make more HBM stacked DRAM, which is based on the older technology. (You can’t use a new and possibly lower yielding process to make stacked memory or you would go broke getting good yield on the stacks.) Micron still believes it will have around 20 percent of HBM market share by Q3 F2026, which is consistent with its overall DRAM share, and that the HBM market is going to grow to $100 billion by 2030.

What Micron is not doing this time around is skipping the HBM4 generation; it is doing both HBM4 and then HBM4E. Micron has boosted the pin speeds on its HBM4 memory to 11 Gb/sec and that means a stack can drive 2.8 TB/sec of aggregate bandwidth. Mehrotra claimed that this is the fastest HBM4 memory that will come to market. The company also has eight-high and twelve-high stacks for the HBM4 generation. With the HBM4E generation, Mehrotra said that Micron will be able to do custom base die chiplets as well as its own base die design for customers who are good with that. The base die is being added to HBM4E memory to allow some computation and other customizations to the memory subsystem.
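That 2.8 TB/sec figure falls out of the pin speed and the interface width. The JEDEC HBM4 interface is 2,048 bits wide per stack, double the 1,024 bits of HBM3E, so the math goes like this:

```python
# Derive the per-stack bandwidth of Micron's HBM4 memory from its
# 11 Gb/sec pin speed and the 2,048-bit JEDEC HBM4 interface width.

pin_speed_gbps = 11       # gigabits per second per pin
interface_width = 2048    # bits per HBM4 stack

# Divide by 8 to convert bits per second to bytes per second.
bandwidth_gbytes = pin_speed_gbps * interface_width / 8
print(f"Per-stack bandwidth: {bandwidth_gbytes / 1000:.1f} TB/sec")
```

That yields 2,816 GB/sec per stack, which rounds down to the 2.8 TB/sec that Micron is quoting.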

In the meantime, HBM3E is selling like hotcakes: Micron has six HBM customers, and most of its HBM3E capacity for calendar 2026 has already been sold. Sales of HBM4 capacity for calendar 2026 are nearly complete, too.

Micron expects Q1 F2026 sales to be $12.5 billion, plus or minus $300 million.
