The Next Platform
What We’re Getting Wrong About Efficient AI Training at Scale

July 9, 2021 | Nicole Hemsoth Prickett

David Patterson, the famed computer architect, professor, author, and distinguished engineer at Google, wants to set the record straight on common misconceptions about carbon emissions and datacenter efficiency in large-scale AI training. …
