AI

Nvidia’s Enormous Financial Success Becomes . . . Normal

For the past five years, since Nvidia acquired InfiniBand and Ethernet switch and network interface card supplier Mellanox, people have been wondering what the split is between compute and networking in the Nvidia datacenter business, which has exploded in growth and now represents most of its revenue each quarter. …

AI

Dell Wants To Help You Build Your AI Factory

No surprises here: Reviewing first-quarter earnings calls of S&P 500 companies, London-based analytics firm GlobalData found that generative AI was a key point of discussion among a growing number of public companies. Business fundamentals analyst Misa Singh said that “companies are looking at GenAI tools for better productivity …”


More Analysis

HPC

To Exascale And (Maybe) Beyond!

The difference between “high performance computing” in the general way that many thousands of organizations run traditional simulation and modeling applications and the kind of exascale computing that is only now becoming a little more commonplace is like the difference between a single two-door coupe that goes 65 miles …

AI

Japan Gets An LLM Compliments Of Fujitsu And RIKEN

Very few organizations have enough iron to train a large language model in a reasonably short amount of time, and that is why most will be grabbing pre-trained models and then retraining the parameters in the models with much smaller datasets that are important to them. After that, they need …

Connect

Greasing The Skids To Move AI From InfiniBand To Ethernet

Just about everybody, including Nvidia, thinks that in the long run, most people running most AI training and inference workloads at any appreciable scale – hundreds to millions of datacenter devices – will want a cheaper alternative to InfiniBand for networking AI accelerators. While Nvidia has argued that InfiniBand only …

Edge

AI At The Edge Is Different From AI In The Datacenter

Today’s pace of business requires companies to find faster ways to serve customers, gather actionable insights, increase operational efficiency, and reduce costs. Edge-to-cloud solutions running AI workloads at the edge help address this demand. Placing compute power at the network edge, close to where data is created, makes a vital …

AI

Red Hat Saddles Up For The Wide Open GenAI Horizons

A theme snaking its way through conversations these days about generative AI is the need for open source models, open platforms, and industry standards as ways to make the emerging technology more accessible and widely adopted by enterprises. There is a growing number of open source models – or models …