Riding The Memory Boom And Trying To Avoid The Bust
The bust cycles in the memory market are horrible, but the booms are a thing to behold. With GenAI voraciously consuming all the DRAM, HBM, and flash memory it can get allocated through the hyperscalers, cloud builders, and model builders, and with supplies well short of demand for the foreseeable future, there has never been a better time to sit tight and just watch the prices rise week after week after week.
The memory makers are going to make a fortune, and all systems, whether they are for traditional workloads or for GenAI models, are going to get a lot more expensive. Given this, and the long history of boom-bust cycles in the memory market, the DRAM and flash makers have very little incentive to accelerate their capacity buildouts and every incentive to stick to their plans and just watch the profits swell as demand is, what, 2X to 3X higher than industry capacity, which is growing at maybe 20 percent to 30 percent a year.
This is going to drive system architects mad.
They are going to have to figure out ways to get more work done with less memory, as impossible as that may seem given the need for vector databases and KV cache servers in a new intermediate tier. Given this, those software-defined storage arrays that can do more with less DRAM and flash capacity are going to be the winners. In-memory data compression and de-duplication, other ways of efficiently encoding and retrieving data, and techniques to bundle up writes and hit flash as little as possible (to keep it from wearing out) would seem to be important, just to give a few examples. These capabilities will be as important as cost per GB and read and write IOPS because it will not be enough to buy memory and flash – it has to be used maximally, just as compute has to be in the GenAI era, given its enormous expense. We are not far away from a day when a GPU accelerator will cost $100,000 a pop. (About six months.)
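To make that concrete, here is a minimal, purely hypothetical sketch of the kind of trick we mean: a toy key/value store that deduplicates identical blocks by content hash and compresses what it keeps, so the same working set occupies less DRAM. This is an illustration of the general technique, not any vendor's actual implementation.

```python
import hashlib
import zlib

class FrugalStore:
    """Toy key/value store that dedupes and compresses values to stretch DRAM.

    Illustrative only: real software-defined storage stacks do this with far
    more sophistication (variable block sizes, reference counting, write
    coalescing to spare the flash, and so on).
    """

    def __init__(self):
        self._blocks = {}   # content hash -> compressed bytes
        self._index = {}    # user key -> content hash

    def put(self, key: str, value: bytes) -> None:
        digest = hashlib.sha256(value).hexdigest()
        if digest not in self._blocks:          # dedupe identical values
            self._blocks[digest] = zlib.compress(value)
        self._index[key] = digest

    def get(self, key: str) -> bytes:
        return zlib.decompress(self._blocks[self._index[key]])

    def raw_bytes(self) -> int:
        """Bytes actually held after dedupe plus compression."""
        return sum(len(blob) for blob in self._blocks.values())

store = FrugalStore()
page = b"the same embedding block, over and over " * 100
for i in range(1_000):
    store.put(f"doc-{i}", page)     # 1,000 logical copies, one physical block
print(store.raw_bytes(), "bytes held for", 1_000 * len(page), "logical bytes")
```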
OK, so maybe this is a system architect’s dream. . . .
The top brass at Micron Technology must be feeling like they are living in a dream, too.
In the quarter ended in February (Q2 of Micron’s fiscal 2026 year), Micron’s revenues nearly tripled year on year to $23.86 billion, with operating income up 9.1X to $16.14 billion and net income up 8.7X to $13.79 billion.
To put this into perspective, this second quarter of F2026 drove nearly as much revenue as all of fiscal 2024 – and with 17.7X more profits. There is no reason to believe that Micron and its memory and flash peers are not going to be coining money for the next year or two in ways we did not imagine were possible during the memory and flash bust only a few years ago.
Micron ended the quarter with $14.59 billion in cash – enough to build three-quarters of a memory fab – and spent a smidgen more than $5 billion on capital expenses to expand fab capacity that will come online two years or so from now.
The DRAM market is expanding on so many fronts, and Micron plays in all of them, including datacenter-class LPDDR5 memory for AI servers based on Nvidia’s “Grace” and “Vera” Arm server CPUs, DDR5 memory for high-end servers (generally X86 machines, but also IBM Power and z CPUs and various Arm designs), as well as the honey pot that HBM stacked memory has become. Consequently, Micron’s DRAM revenue exploded in Q2 F2026, up 206.5 percent to an incredible $18.77 billion.
Here’s the fun bit. DRAM revenue increased 73.6 percent sequentially from Q1 F2026, but Mark Murphy, Micron’s chief financial officer, said that capacity shipped, measured in bits, only increased by “mid-single digits,” with the rest of the jump, “in the mid-60s percentage range,” driven by bit price increases.
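Those two growth rates compound rather than add, so you can back out the price effect from the figures Micron gave. A quick sketch, assuming “mid-single digits” means roughly 5 percent bit growth:

```python
# Sequential DRAM revenue growth decomposes into bit growth times price growth:
#   (1 + revenue_growth) = (1 + bit_growth) * (1 + price_growth)
revenue_growth = 0.736   # 73.6 percent sequential increase, per Micron
bit_growth = 0.05        # assumption: "mid-single digits" taken as ~5 percent

price_growth = (1 + revenue_growth) / (1 + bit_growth) - 1
print(f"Implied bit price increase: {price_growth:.1%}")   # ~65.3 percent
```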
In any other time, the fact that the flash business grew by 1.7X year on year to just a tad under $5 billion would have been astonishing, but given that the exploding DRAM business was up by more than 3X, the flash business pales by comparison. No one knows for sure how much of the flash business is for datacenter products, but I can confidently make this prediction: Going forward, there will be more concentration of flash capacity in the datacenter among all of the suppliers, which means our PCs, tablets, and smartphones are also going to get more expensive as too much demand chases too little supply. The same thing will happen to DRAM, of course.
Micron used to give a hint here and there so Wall Street could reckon how much of its DRAM business was coming through HBM stacked memory and how much was coming from high capacity server DIMMs and LPDDR5 low-powered server memory. Micron is Nvidia’s sole supplier of this low-powered server memory, and it has just begun shipping an LPDDR5 SOCAMM2 module that will allow Nvidia to quadruple the main memory capacity on the Vera CPU to 2 TB, compared to a peak of 512 GB for the Grace CPU. (For yield reasons, the capacity on Grace was actually lower than this, at 480 GB.)
Despite the dearth of hard data, I have continued to model out HBM memory revenues as well as revenues for these other categories. My best guess is that HBM stacked memory revenues were up 7.3X year on year to $8.32 billion as Micron ships HBM3E memory for Nvidia’s “Blackwell” B300 GPUs and for AMD’s “Antares” MI325X, MI350X, and MI355X GPUs. Despite some rumors to the contrary, Micron is making HBM4 memory for the Nvidia “Rubin” R200 GPUs coming later this year. It is not clear if Micron has any slice of the AMD “Altair” MI400 series; the word on the street is that Samsung and SK Hynix will be the main suppliers of HBM for the initial members of this family of AMD accelerators.
My model says that high capacity server DIMMs plus LPDDR5 modules accounted for $1.46 billion in sales in Q2 F2026, up 39.5 percent. If you take this high-end and low-powered server memory out of the mix along with HBM, then the remaining DRAM business accounted for $8.99 billion in my model, up 128.2 percent. Still not bad, and illustrative of the across-the-board memory boom we are in.
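For what it is worth, the modeled pieces tie back to the reported DRAM total, as this little sketch using the article’s own figures shows; the year-ago bases are simply implied by the stated growth multiples:

```python
# Q2 F2026 DRAM revenue model, in billions of dollars (figures from the model above)
hbm = 8.32            # up 7.3X year on year
dimms_lpddr5 = 1.46   # high capacity DIMMs plus LPDDR5, up 39.5 percent
other_dram = 8.99     # everything else, up 128.2 percent

total = hbm + dimms_lpddr5 + other_dram
print(f"Modeled DRAM total: ${total:.2f} billion")   # $18.77 billion, as reported

# Implied year-ago bases, backed out from the growth rates
print(f"HBM a year ago:  ${hbm / 7.3:.2f} billion")           # ~$1.14 billion
print(f"Other DRAM then: ${other_dram / 2.282:.2f} billion")  # ~$3.94 billion
```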
Micron’s datacenter business is humming along quite nicely, as you can see.
The new Micron business units that make up the datacenter business are the Cloud Memory and Core Datacenter units, and together, these units had $13.44 billion in sales, up 181.3 percent year on year, with an operating income of $8.94 billion, up 362.8 percent year on year.
Only a year and a half ago, this datacenter business was a relatively puny $3.5 billion with 29 percent of revenue dropping to the middle line. Now, it is 3.8X bigger and 8.7X more profitable. Somebody is going to get a whole lotta bonuses and stock options at Micron.
The shortages of DRAM and flash are helping the other Micron business units, as you can see in the table above. When the mobile and PC business is more profitable than the datacenter business and the auto and embedded business is almost as profitable as the datacenter business, you know something weird is happening.
It may sound like Micron is sitting in the catbird seat, but all suppliers have to be careful not to overplay their hands. Which is why Micron has just inked its first five-year strategic customer agreement (SCA), which is a whole lot different from the one-year long-term agreements it is used to signing.
It is natural to jump to the conclusion that this SCA was inked with Nvidia. But it could be Broadcom, which needs to secure HBM capacity for the homegrown accelerators being crafted by Google, Anthropic, Meta Platforms, ByteDance, and Apple. It could be Marvell, which will buy a much smaller volume of HBM memory than Nvidia and probably AMD, too, through its chip shepherding work for Microsoft and Amazon Web Services.
It is hard to say. Micron no doubt does not want to do a lot of such SCA deals because they limit its pricing levers. But it probably is a good thing to ink a few such deals so it can confidently build out a capacity expansion plan and fund it. All we know is that the five-year SCA comes at a time when Micron has committed to spend more than $25 billion on capex in fiscal 2026 and is looking out over the next several fiscal years at capex investments that would once have seemed enormous but are now made possible by the GenAI boom.
Here’s the other thing: If Micron can make more DRAM and flash more quickly than its rivals, it can steal market share in a boom time. You can bet Sanjay Mehrotra, the company’s chief executive officer, is thinking a lot about that – and about how not to overdo it and create a bust cycle.