The Next Platform
Stacking Up AMD Versus Nvidia For Llama 3.1 GPU Inference

July 29, 2024  Timothy Prickett Morgan

Training AI models is expensive, and the world can tolerate that to a certain extent so long as the cost of inference for these increasingly complex transformer models can be driven down. …
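To make the inference economics concrete, here is a minimal back-of-the-envelope sketch. The function and all numbers in it are illustrative placeholders, not figures from the article or from Artificial Analysis: it simply shows how cost per million generated tokens falls out of GPU rental price and sustained token throughput.

```python
# Hedged sketch: per-token inference cost from GPU hourly price and
# throughput. All inputs below are hypothetical, for illustration only.

def cost_per_million_tokens(gpu_hourly_usd: float, tokens_per_second: float) -> float:
    """Dollars to generate one million output tokens on a single GPU."""
    tokens_per_hour = tokens_per_second * 3600
    return gpu_hourly_usd / tokens_per_hour * 1_000_000

# Hypothetical example: a $4.00/hour GPU sustaining 1,000 tokens/sec
print(round(cost_per_million_tokens(4.0, 1000.0), 3))  # → 1.111
```

The same arithmetic explains why vendors chase throughput: doubling sustained tokens per second at the same rental price halves the cost per token, which is exactly the lever this kind of AMD-versus-Nvidia comparison is measuring.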

