The Next Platform
Cerebras Trains Llama Models To Leap Over GPUs

October 25, 2024  Timothy Prickett Morgan

It was only a few months ago that waferscale compute pioneer Cerebras Systems was bragging that a handful of its WSE-3 engines lashed together could run circles around GPU instances based on Nvidia’s “Hopper” H100 GPUs when running the open source Llama 3.1 foundation model created by Meta Platforms. …

