
HBM3e

Store

Skyrocketing HBM Will Push Micron Through $45 Billion And Beyond

June 30, 2025  Timothy Prickett Morgan

Micron Technology has not just filled in a capacity shortfall for more high bandwidth stacked DRAM to feed GPU and XPU accelerators for AI and HPC. …

Store

We Can’t Get Enough HBM, Or Stack It Up High Enough

November 6, 2024  Timothy Prickett Morgan

There are lots of ways that we might build out the memory capacity and memory bandwidth of compute engines to drive AI and HPC workloads better than we have been able to do thus far. …

Store

Micron Gears Up For Its Potential Datacenter Memory Boom

June 27, 2024  Timothy Prickett Morgan

If you don’t like gut-wrenching, hair-raising, white-knuckling boom-bust cycles, then do not go into the memory business. …

Compute

He Who Can Pay Top Dollar For HBM Memory Controls AI Training

February 27, 2024  Timothy Prickett Morgan

What is the most important factor that will drive the Nvidia datacenter GPU accelerator juggernaut in 2024? …

Compute

Nvidia Pushes Hopper HBM Memory, And That Lifts GPU Performance

November 13, 2023  Timothy Prickett Morgan

For very sound technical and economic reasons, processors of all kinds have been overprovisioned on compute and underprovisioned on memory bandwidth – and sometimes memory capacity depending on the device and depending on the workload – for decades. …

AI

Nvidia Gooses Grace-Hopper GPU Memory, Gangs Them Up For LLM

August 22, 2023  Timothy Prickett Morgan

If large language models are the foundation of a new programming model, as Nvidia and many others believe they are, then the hybrid CPU-GPU compute engine is the new general purpose computing platform. …
