The Next Platform

machine learning

AI

Making AI Run At Any Scale But Not At All Costs

August 24, 2022 Jeffrey Burt

AI is arguably the most important kind of HPC in the world right now in terms of providing immediate results for immediate problems, particularly for enterprises with lots of data and a desire to make money in a new economy that does not fit the models and forecasts made before the coronavirus pandemic. …

Compute

Inside Tesla’s Innovative And Homegrown “Dojo” AI Supercomputer

August 23, 2022 Timothy Prickett Morgan

How expensive and difficult does hyperscale-class AI training have to be for a maker of self-driving electric cars to take a side excursion and spend hundreds of millions of dollars creating its own AI supercomputer from scratch? …

AI

Google Stands Up Exascale TPUv4 Pods On The Cloud

May 11, 2022 Timothy Prickett Morgan

It is Google I/O 2022 this week, among many other things, and we were hoping for an architectural deep dive on the TPUv4 matrix math engines that Google hinted about at last year’s I/O event. …

AI

Intel Pits New Gaudi2 AI Training Engine Against Nvidia GPUs

May 10, 2022 Timothy Prickett Morgan

Nvidia is not the only company that has created specialized compute units that are good at the matrix math and tensor processing that underpins AI training and that can be repurposed to run AI inference. …

AI

HPE Creates Its Own AI Stack For Large Enterprises

April 27, 2022 Jeffrey Burt

While the hyperscalers have been running AI workloads against vast datasets in production for a decade and a half, many large enterprises have lots of data they think is relevant but are not at all experienced with AI and its system requirements. …

AI

Doing The Math On CPU-Native AI Inference

September 1, 2021 Mark Funk

A number of chip companies — importantly Intel and IBM, but also the Arm collective and AMD — have recently come out with new CPU designs that feature native support for artificial intelligence (AI) and related machine learning (ML) workloads. …

Compute

Maybe Nvidia Should Buy VMware Instead Of Intel

August 18, 2021 Timothy Prickett Morgan

It is hard to imagine how anyone could run Nvidia better than it is being run right now. …

Compute

Tuning Up Nvidia’s AI Stack To Run On Virtual Infrastructure

March 12, 2021 Jeffrey Burt

Having to install a new kind of systems software stack and create applications is hard enough. …

Compute

Gordon Bell Prize Winners Leverage Machine Learning For Molecular Dynamics

November 23, 2020 Jeffrey Burt

For more than three decades, researchers have used a particular simulation method for molecular dynamics called ab initio molecular dynamics, or AIMD, which has proven to be the most accurate method for analyzing how atoms and molecules move and interact over a fixed time period. …

AI

Feeding The Datacenter Inference Beast A Heavy Diet Of FPGAs

July 31, 2020 Timothy Prickett Morgan

Any workload that has a complex dataflow with intricate data needs and a requirement for low latency should probably at least consider an FPGA for the job. …

About

The Next Platform is part of the Situation Publishing family, which includes the enterprise and business technology publication, The Register.

TNP offers in-depth coverage of high-end computing at large enterprises, supercomputing centers, hyperscale data centers, and public clouds.



All Content Copyright The Next Platform