The Next Platform

L40S

AI

What To Do When You Can’t Get Nvidia H100 GPUs

November 17, 2023  Timothy Prickett Morgan

In a world where allocations of “Hopper” H100 GPUs coming out of Nvidia’s factories are going out well into 2024, and the allocations for the impending “Antares” MI300X and MI300A GPUs are probably long since spoken for, anyone trying to build a GPU cluster to power a large language model for training or inference has to think outside of the box. …

