
Llama 3.2

AI

Cerebras Trains Llama Models To Leap Over GPUs

October 25, 2024 Timothy Prickett Morgan

It was only a few months ago that waferscale compute pioneer Cerebras Systems was bragging that a handful of its WSE-3 engines lashed together could run circles around GPU instances based on Nvidia’s “Hopper” H100 accelerators when running the open source Llama 3.1 foundation model created by Meta Platforms. …

