The Next Platform
AI

Cerebras Trains Llama Models To Leap Over GPUs

October 25, 2024  Timothy Prickett Morgan

It was only a few months ago that waferscale compute pioneer Cerebras Systems was bragging that a handful of its WSE-3 engines lashed together could run circles around GPU instances based on Nvidia’s “Hopper” H100 GPUs when running the open source Llama 3.1 foundation model created by Meta Platforms. …

