The Next Platform
Cerebras Trains Llama Models To Leap Over GPUs

October 25, 2024  Timothy Prickett Morgan

It was only a few months ago that waferscale compute pioneer Cerebras Systems was bragging that a handful of its WSE-3 engines lashed together could run circles around GPU instances based on Nvidia's "Hopper" H100 GPUs when running the open source Llama 3.1 foundation model created by Meta Platforms. …
