Pushing PCI-Express Switches And Retimers To Boost Server Bandwidth
Things would go a whole lot better for server designs if we had a two-year, or better still a four-year, moratorium on adding faster compute engines to machines. …
For the past decade or so, we have been convinced by quite a large number of IT suppliers that security functions, network and storage virtualization functions, and even the server virtualization hypervisor for carving up compute itself should be offloaded from servers to intermediaries somewhat illogically called data processing units, or DPUs. …
Not everybody can afford an Nvidia DGX AI server loaded up with the latest “Hopper” H100 GPU accelerators or even one of its many clones available from the OEMs and ODMs of the world. …
When system architects sit down to design their next platforms, they start by looking at a bunch of roadmaps from suppliers of CPUs, accelerators, memory, flash, network interface cards – and PCI-Express controllers and switches. …
You can’t be certain about a lot of things in the world these days, but one thing you can count on is the voracious appetite for parallel compute, high bandwidth memory, and high bandwidth networking for AI training workloads. …
The ink is barely dry on the PCI-Express 6.0 specification, which was released in January 2022 after years of development; we hardly have PCI-Express 5.0 peripherals on the market; and yet the PCI-SIG organization that controls the PCI-Express standard for peripheral interconnects already has us all coveting the bandwidth that will come later in the decade with PCI-Express 7.0 interconnects. …
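The appeal of each new PCI-Express generation comes down to a simple doubling of the per-lane transfer rate, from 8 GT/sec with PCI-Express 3.0 up to a planned 128 GT/sec with PCI-Express 7.0. A minimal sketch of that math, using the per-lane rates from the published specs and ignoring encoding overhead (128b/130b for Gen 3 through Gen 5, PAM4 signaling with FLIT encoding for Gen 6 and Gen 7), so the figures below are raw upper bounds rather than usable throughput:

```python
# Raw per-lane transfer rates (GT/sec) from the PCI-SIG specifications.
# Encoding overhead is ignored here, so these are upper-bound figures.
GT_PER_LANE = {3: 8, 4: 16, 5: 32, 6: 64, 7: 128}

def pcie_bandwidth_gbps(gen: int, lanes: int = 16) -> float:
    """Raw one-direction bandwidth in GB/sec: GT/sec x lanes / 8 bits per byte."""
    return GT_PER_LANE[gen] * lanes / 8

for gen in sorted(GT_PER_LANE):
    print(f"PCIe {gen}.0 x16: ~{pcie_bandwidth_gbps(gen):.0f} GB/sec each way")
```

On an x16 slot, that doubling walks from roughly 16 GB/sec each way with Gen 3 to roughly 256 GB/sec each way with Gen 7, which is why system architects keep coveting the next generation before the current one even ships.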
Paid Feature: There are many ways to scale up and scale out systems, and that is a problem as much as it is a solution for distributed systems architects. …
Supercomputers are expensive, and getting increasingly so. Even though they have delivered impressive performance gains over the past decade, modern HPC workloads require an incredible amount of performance, and this is particularly true of any workload that is going to blend together traditional HPC simulation and modeling with some sort of machine learning training and inference. …
It is refreshing to find instances in the IT sector where competing groups with their own agendas work together for the common good and the improvement of systems everywhere. …
In the longest of runs, say within the next five to ten years, in the large datacenters of the world, the server chassis as we know it will no longer exist. …