The Next Platform
Llama 3.2

AI

Cerebras Trains Llama Models To Leap Over GPUs

October 25, 2024 Timothy Prickett Morgan 11

It was only a few months ago that waferscale compute pioneer Cerebras Systems was bragging that a handful of its WSE-3 engines lashed together could run circles around cloud instances based on Nvidia’s “Hopper” H100 GPUs when running the open source Llama 3.1 foundation model created by Meta Platforms. …

About

The Next Platform is part of the Situation Publishing family, which includes the enterprise and business technology publication, The Register.

TNP offers in-depth coverage of high-end computing at large enterprises, supercomputing centers, hyperscale data centers, and public clouds. Read more…



All Content Copyright The Next Platform