The Next Platform

Llama 3.2

AI

Cerebras Trains Llama Models To Leap Over GPUs

October 25, 2024 Timothy Prickett Morgan

It was only a few months ago that waferscale compute pioneer Cerebras Systems was bragging that a handful of its WSE-3 engines lashed together could run circles around GPU instances based on Nvidia’s “Hopper” H100 GPUs when running the open source Llama 3.1 foundation model created by Meta Platforms. …



