The Next Platform
HBM3e

Store

Micron Humming Along On All Memory Cylinders

September 24, 2025 Timothy Prickett Morgan 1

The United States may not have an indigenous foundry that makes high performance XPU compute engines for AI and HPC applications, but it certainly does have a home-based maker of high performance memory in Micron Technology. …

Store

Skyrocketing HBM Will Push Micron Through $45 Billion And Beyond

June 30, 2025 Timothy Prickett Morgan 2

Micron Technology has not just filled in a capacity shortfall for more high bandwidth stacked DRAM to feed GPU and XPU accelerators for AI and HPC. …

Store

We Can’t Get Enough HBM, Or Stack It Up High Enough

November 6, 2024 Timothy Prickett Morgan 4

There are lots of ways that we might build out the memory capacity and memory bandwidth of compute engines to drive AI and HPC workloads better than we have been able to do thus far. …

Store

Micron Gears Up For Its Potential Datacenter Memory Boom

June 27, 2024 Timothy Prickett Morgan 6

If you don’t like gut-wrenching, hair-raising, white-knuckling boom bust cycles, then do not go into the memory business. …

Compute

He Who Can Pay Top Dollar For HBM Memory Controls AI Training

February 27, 2024 Timothy Prickett Morgan 6

What is the most important factor that will drive the Nvidia datacenter GPU accelerator juggernaut in 2024? …

Compute

Nvidia Pushes Hopper HBM Memory, And That Lifts GPU Performance

November 13, 2023 Timothy Prickett Morgan 3

For very sound technical and economic reasons, processors of all kinds have been overprovisioned on compute and underprovisioned on memory bandwidth – and sometimes memory capacity depending on the device and depending on the workload – for decades. …

AI

Nvidia Gooses Grace-Hopper GPU Memory, Gangs Them Up For LLM

August 22, 2023 Timothy Prickett Morgan 12

If large language models are the foundation of a new programming model, as Nvidia and many others believe they are, then the hybrid CPU-GPU compute engine is the new general purpose computing platform. …

About

The Next Platform is part of the Situation Publishing family, which includes the enterprise and business technology publication, The Register.

TNP offers in-depth coverage of high-end computing at large enterprises, supercomputing centers, hyperscale data centers, and public clouds.



All Content Copyright The Next Platform