The Next Platform
training

Compute

Chip Roadmaps Unfold, Crisscrossing And Interconnecting, At AMD

June 14, 2022 Timothy Prickett Morgan 4

After its acquisition of ATI in 2006, the maturation of its discrete GPUs with the Instinct line over the past few years, and its acquisitions of Xilinx and Pensando in 2022, AMD is not just a second source of X86 processors. …

AI

Intel Pits New Gaudi2 AI Training Engine Against Nvidia GPUs

May 10, 2022 Timothy Prickett Morgan 1

Nvidia is not the only company that has created specialized compute units that are good at the matrix math and tensor processing that underpin AI training and that can be repurposed to run AI inference. …

AI

The Performance Of MLPerf As A Ubiquitous Benchmark Is Lacking

April 8, 2022 Jeffrey Burt 0

Industry benchmarks are important because, no matter that comparisons are odious, IT organizations nonetheless have to make them to plot out the architectures of their future systems. …

AI

Google Chips Away at Problems at “Mega-Batch” Scale

August 9, 2021 Nicole Hemsoth 0

As Google’s batch sizes for AI training continue to skyrocket, with some ranging from over 100K to one million, the company’s research arm is looking at ways to improve everything from efficiency and scalability to privacy for those whose data is used in large-scale training runs. …

AI

Feeding The Datacenter Inference Beast A Heavy Diet Of FPGAs

July 31, 2020 Timothy Prickett Morgan 0

Any workload that has a complex dataflow with intricate data needs and a requirement for low latency should probably at least consider an FPGA for the job. …

AI

Habana Takes Training And Inference Down Different Paths

August 26, 2019 Michael Feldman 2

Processor hardware for machine learning is in its early stages, but it is already taking different paths. …

AI

Programmable Networks Train Neural Nets Faster

February 14, 2018 Timothy Prickett Morgan 0

When it comes to machine learning training, people tend to focus on the compute. …
