
Future Proofing Inference Servers With PCI-Express Switches
At this point in the history of datacenter systems, there can be no higher praise than to be chosen by Nvidia as a component supplier for its AI systems. …
Two decades ago, the hyperscalers and cloud builders started remaking the Ethernet switch market in the datacenter in their own image, and now it looks like AI training and inference is going to morph Ethernet switching in the datacenter once again. …
The best minds in networking spent the better part of two decades wrenching the control planes of switches and routers out of network devices and putting them into external controllers. …
When it comes to networking, the rules around here at The Next Platform are simple. …
Nvidia sells the lion’s share of the parallel compute underpinning AI training, and it has a very large – and probably dominant – share of AI inference. …
With the months-long blip in manufacturing that delayed the “Blackwell” B100 and B200 generations of GPUs in the rear view mirror, and with nerves calmer now that the potential threat posed by the techniques used in the AI models of Chinese startup DeepSeek is better understood, Nvidia’s final quarter of its fiscal 2025 and its projections for continuing sequential growth in fiscal 2026 will bring joy to Wall Street. …
With a three year cadence between PCI-Express bandwidth increases and a three year span between when a gear shift is first talked about and when its chippery is first put into the field, it is extremely difficult not to be impatient for the next PCI-Express release to arrive. …
UPDATED Networking giant Cisco Systems and AI platform provider Nvidia have hammered out a deal to mix and match each other’s technologies to create a broader set of AI networking options for their respective and – importantly, prospective – customers. …
In many ways, Arista Networks still behaves like a startup even though it was founded twenty years ago, rolled out its first products a little more than a decade and a half ago, went public a decade ago, and now has over 10,000 customers and more than 100 million Ethernet ports sold, which have generated a cumulative $32 billion in revenues for hardware, software, and support. …
While the hyperscalers and big cloud builders all are racing as fast as they can to build the biggest – and presumably the best – models, or collections of models, to win the AI race and become the Microsoft or Red Hat of commercial-grade models, the acquisition of AI hardware and the envelope pushing on AI model architecture are not indicative of the adoption of AI by enterprises. …