The Next Platform

Cerebras Trains Llama Models To Leap Over GPUs

October 25, 2024  Timothy Prickett Morgan

It was only a few months ago that waferscale compute pioneer Cerebras Systems was bragging that a handful of its WSE-3 engines lashed together could run circles around GPU instances based on Nvidia’s “Hopper” H100 GPUs when running the open source Llama 3.1 foundation model created by Meta Platforms. …


