
Nvidia Weaves Silicon Photonics Into InfiniBand And Ethernet
When it comes to networking, the rules around here at The Next Platform are simple. …
Welcome to the second part in our series of chats with J Metz, chair of the Ultra Ethernet Consortium. …
Just about everybody, including Nvidia, thinks that in the long run, most people running most AI training and inference workloads at any appreciable scale – hundreds to millions of datacenter devices – will want a cheaper alternative to InfiniBand for networking AI accelerators. …
Here we go again. Some big hyperscalers and cloud builders and their ASIC and switch suppliers are unhappy about Ethernet, and rather than wait for the IEEE to address issues, they are taking matters into their own hands to create what will ultimately become an IEEE standard that moves Ethernet forward in a direction and at a speed of their choosing. …
It was a fortuitous coincidence that Nvidia was already working on massively parallel GPU compute engines for doing calculations in HPC simulations and models when the machine learning tipping point happened, and similarly, it was fortunate for InfiniBand that it had the advantages of high bandwidth, low latency, and remote direct memory access across GPUs at that same moment. …
Variety is not only the spice of life, it is also the way to drive innovation and to mitigate risk. …
Frustrated by the limitations of Ethernet, Google has taken the best ideas from InfiniBand and Cray’s “Aries” interconnect and created a new distributed switching architecture called Aquila and a new GNet protocol stack that delivers the kind of consistent, low latency that the search engine giant has been seeking for decades. …
Moving more bits across a copper wire or optical cable at a lower cost per bit shifted has been the dominant driver of datacenter networking since distributed systems were first developed more than three decades ago. …
The InfiniBand interconnect emerged from the ashes of a fight about the future of server I/O at the end of the last millennium, and instead of becoming that generic I/O, it became a low latency, high bandwidth interconnect used for high performance computing. …
It is always good to have options when it comes to optimizing systems because not all software behaves the same way and not all institutions have the same budgets to try to run their simulations and models on HPC clusters. …