The Next Platform
AI

Nvidia’s Four Workhorses Of The AI Inference Revolution

March 21, 2023 Timothy Prickett Morgan

Last May, after we had done a deep dive on the “Hopper” H100 GPU accelerator architecture and as we were trying to reckon what Nvidia could charge for the PCI-Express and SXM5 variants of the GH100, we said that Nvidia needed to launch a Hopper-Hopper superchip. …

About

The Next Platform is part of the Situation Publishing family, which includes the enterprise and business technology publication, The Register.

TNP offers in-depth coverage of high-end computing at large enterprises, supercomputing centers, hyperscale data centers, and public clouds.



All Content Copyright The Next Platform