The Next Platform

H100

AI

Nvidia’s Four Workhorses Of The AI Inference Revolution

March 21, 2023 Timothy Prickett Morgan

Last May, after we had done a deep dive on the “Hopper” H100 GPU accelerator architecture and as we were trying to reckon what Nvidia could charge for the PCI-Express and SXM5 variants of the GH100, we said that Nvidia needed to launch a Hopper-Hopper superchip. …

AI

Nvidia To Build DGX Complexes In Clouds To Better Capitalize On Generative AI

February 22, 2023 Timothy Prickett Morgan

GPU computing platform maker Nvidia announced its financial results for its fiscal fourth quarter ended in January, which showed the same digestion of already acquired capacity by the hyperscalers and cloud builders, and the same hesitation to spend by enterprises, that other makers of datacenter compute engines are also seeing. …

AI

The “Hopper” GPU Compute Ramp Finally Starts

September 21, 2022 Timothy Prickett Morgan

You can’t be certain about a lot of things in the world these days, but one thing you can count on is the voracious appetite for parallel compute, high bandwidth memory, and high bandwidth networking for AI training workloads. …

Compute

How Much Of A Premium Will Nvidia Charge For Hopper GPUs?

May 9, 2022 Timothy Prickett Morgan

There is increasing competition coming at Nvidia in the AI training and inference market, and at the same time, researchers at Google, Cerebras, and SambaNova are showing off the benefits of porting sections of traditional HPC simulation and modeling code to their matrix math engines, and Intel is probably not far behind with its Habana Gaudi chips. …

Compute

The Buck Still Stops Here For GPU Compute

March 24, 2022 Timothy Prickett Morgan

It has taken untold thousands of people to make machine learning, and specifically the deep learning variety, the most viable form of artificial intelligence. …

AI

The NVSwitch Fabric That Is The Hub Of The DGX H100 SuperPOD

March 23, 2022 Timothy Prickett Morgan

Normally, when we look at a system, we start with the compute engines at a very fine level of detail and then work our way out through the intricacies of the nodes, and then the interconnect and software stack that scales them across the nodes into a distributed computing platform. …

