
Rest In Pieces: Servers And CXL

If you had to rank the level of hype around specific datacenter technologies, the top thing these days would be, without question, generative AI, probably followed by AI training and inference of all kinds and mixed precision computing in general. Co-packaged optics for both interconnects and I/O comes up a lot, too. But with so many compute engines with their backs pressed up against the memory wall, talk inevitably turns to increasing bandwidth and capacity for CPUs, GPUs, and FPGAs and lowering the overall cost of main memory that gives the systems of the world a place to think.

And that leads straight to the CXL protocol, of course.

Rambus has a long history of innovating in the memory and I/O arenas, and it is a player in both HBM memory and CXL extended and shared memory for systems. And thus, we had a chat recently with Mark Orthodoxou, vice president of strategic marketing for datacenter products at Rambus, about the implications of CXL memory pooling and memory sharing for impending and future server designs.

Among the many things that we discussed with Orthodoxou was the fact that the PCI-Express switch makers – the PLX division of Broadcom and Microchip – are far too late coming to market with their products, and are lagging the speed of the PCI-Express slots in servers by 18 months to 24 months, depending on how you want to count it. If we are going to live in a CXL world that attaches CPUs to each other, main memory and expanded memory to servers, and CPUs to accelerators, we need switches to come out concurrently with server slots or there isn’t much of a point, is there? Grrrr.

And so, we have strongly encouraged Rambus to start making PCI-Express switches and get things all lined up. You can lean on them in the comments on this story, too.
