
Nvidia Unfolds GPU, Interconnect Roadmaps Out To 2027
There are many things that are unique about Nvidia at this point in the history of computing, networking, and graphics. …
For the past five years, since Nvidia acquired InfiniBand and Ethernet switch and network interface card supplier Mellanox, people have been wondering what the split is between compute and networking in the Nvidia datacenter business that has exploded in growth and now represents most of revenue for each quarter. …
No surprises here: Reviewing first quarter earnings calls of S&P 500 companies, London-based analytics firm GlobalData found that generative AI was a key point of discussion among a growing number of the public companies. …
While a lot of people focus on the floating point and integer processing architectures of various kinds of compute engines, we are spending more and more of our time looking at memory hierarchies and interconnect hierarchies. …
At his company’s GTC 2024 Technical Conference this week, Nvidia co-founder and chief executive officer Jensen Huang unveiled the chip maker’s massive Blackwell GPUs and accompanying NVLink networking systems, promising a future where hyperscale cloud providers, HPC centers, and other organizations of size and means can meet the rapidly increasing compute demands driven by the emergence of generative AI. …
We like datacenter compute engines here at The Next Platform, but as the name implies, what we really like are platforms – how compute, storage, networking, and systems software are brought together to create a platform on which to build applications. …
If you want to take on Nvidia on its home turf of AI processing, then you had better bring more than your A game. …
Here is a history question for you: How many IT suppliers who do a reasonable portion of their business in the commercial IT sector – and a lot of that in the datacenter – have ever broken through the $100 billion barrier? …
We have five decades of very fine-grained analysis of CPU compute engines in the datacenter, and changes come at a steady but glacial pace when it comes to CPU serving. …
The exorbitant cost of GPU-accelerated systems for training and inference and the latest rush to find gold in mountains of corporate data are combining to exert tectonic forces on the datacenter landscape and push up a new Himalaya range – with Nvidia as its steepest and highest peak. …
All Content Copyright The Next Platform