Nvidia Unfolds GPU, Interconnect Roadmaps Out To 2027
There are many things that are unique about Nvidia at this point in the history of computing, networking, and graphics. …
For the past five years, since Nvidia acquired InfiniBand and Ethernet switch and network interface card supplier Mellanox, people have been wondering what the split is between compute and networking in the Nvidia datacenter business, which has exploded in growth and now represents most of the company's revenue each quarter. …
In 2024, there is no shortage of interconnects if you need to stitch tens, hundreds, thousands, or even tens of thousands of accelerators together. …
At his company’s GTC 2024 Technical Conference this week, Nvidia co-founder and chief executive officer Jensen Huang unveiled the chip maker’s massive Blackwell GPUs and accompanying NVLink networking systems, promising a future where hyperscale cloud providers, HPC centers, and other organizations of size and means can meet the rapidly increasing compute demands driven by the emergence of generative AI. …
We like datacenter compute engines here at The Next Platform, but as the name implies, what we really like are platforms – how compute, storage, networking, and systems software are brought together to create a platform on which to build applications. …
It is a pity that we can’t make silicon wafers any larger than 300 millimeters in diameter. …
Things would go a whole lot better for server designs if we had a two-year or, better still, a four-year moratorium on adding faster compute engines to machines. …
Note: This story augments and corrects information that originally appeared in Half Eos’d: Even Nvidia Can’t Get Enough H100s For Its Supercomputers, which was published on February 15. …
Note: There is a story called A Tale Of Two Nvidia Eos Supercomputers that augments and corrects information that originally appeared in this story as it was published on February 15. …
Amazon Web Services may not be the first of the hyperscalers and cloud builders to create its own custom compute engines, but it has been hot on the heels of Google, which started using its homegrown TPU accelerators for AI workloads in 2015. …
All Content Copyright The Next Platform