The Increasing Impatience With The Speed Of The PCI-Express Roadmap

Richard Solomon has heard the rumblings over the years. As vice president of PCI-SIG, the organization that controls the development of the PCI-Express specification, he has listened to questions about how long it takes the group to bring the latest spec to the industry. It’s a question we raised last year, noting the two-year cadence CPU and GPU makers have for releasing their latest and greatest silicon, not to mention the cadences of the chip makers behind network switching and interface cards for Ethernet and InfiniBand.

Right now, PCI-SIG has settled into a steady three-year cycle, with PCI-Express 6.0 released in 2022 and PCI-Express 7.0 set for next year. And according to Solomon, those three years are what it takes to get the new spec out, get silicon for it, and have members run their products through the group’s compliance workshops so they can be added to the organization’s Integrators List. That all comes after a six-month preliminary FYI test phase, which for PCI-Express 6.0 started earlier this month.

All this just takes time, Solomon told journalists and analysts during this week’s PCI-SIG Developers Conference 2024 in Santa Clara, California.

“We get a lot of questions about that,” he said. “Why can’t you guys move faster? Why’s it take you so long? What are you doing? The answer really is it takes a while between when the specs are completed until there’s silicon. We really can’t do a compliance program until we have silicon. So we start as early as we can, and in reality, we’re at the middle of 2024. The PCI-Express 6.0 spec was released in January of 2022. It’s taken the industry a solid two, almost two-and-a-half years, to get to the point where we have testing, we have silicon. All of those pieces exist. We are actually kind of moving pretty quickly in the compliance program. I apologize if that sounds like an excuse. It really isn’t intended that way. It’s just explaining things that go into our schedule.”

The Next Platform argued last year that PCI-SIG needs to accelerate its timetables and push to get PCI-Express’ roadmap in sync with those of the chip makers and server vendors. It’s a widely used interconnect for an industry that also has Ethernet, InfiniBand, and Nvidia’s proprietary built-for-GPUs NVLink, and it’s expected that demand for PCI-Express will grow with the increasing use of CXL-based tiered and shared main memory.

But an organization with so many members – it is somewhere in the area of 970, and growing – and a highly deliberative process for each spec may not be built for speed. There are myriad committees and workgroups for the specifications that can lead to various changes, the pre-FYI and FYI testing, and compliance workshops.

“Devices that complete our compliance program have the option to be listed on our website, on the Integrators List, and members – and actually non-SIG members, because that’s a publicly accessible site – can go look at that and decide as they’re making purchasing decisions and design decisions, which products they may want to consider based on our compliance test,” Solomon said. “Our compliance program is not a validation or a certification program. We are really focused … on interoperability. Our compliance program tests the kinds of things that are most important to interoperability. With a high-speed signaling bus like this, a lot of that testing is electrical.”

Looking at all that, with the PCI-Express 7.0 specification expected to be ratified sometime between the middle and the end of next year, the Integrators List for it likely will come out in 2028, he said, adding that “I wish we could do it faster. I wish silicon came out faster. … That’s kind of where we are realistically.”

That said, Solomon said PCI-SIG has been able to stay ahead of industry demands. Looking at the chart below, he noted that the bandwidth capabilities in the PCI-Express 6.0 and 7.0 specifications are about three years in front of the cadence of I/O bandwidth doubling every three years, and that’s despite the very late release of the 4.0 spec.

“Some of you have been around long enough to point out the PCI-Express 4.0 little hiccup there and give me grief for that … but for the last few years, we’ve managed to keep that three- to four-year gap between when we develop this spec and when the industry really needs the bandwidth,” he said. “There are always parts of the ecosystem that are demanding more and more bandwidth. But … we’ve done a pretty good job of staying on top of that curve and continuing to develop solid specs that people can go to.”

So where do things stand now for PCI-Express 7.0? Version 0.5 – the first official draft of the release – is out now. PCI-SIG boosted the top data rate to 128 gigatransfers per second (GT/s), improved the power efficiency, and retained backward compatibility with previous generations of the spec. It also retained the Flit Mode encoding and PAM4 signaling that started with PCI-Express 6.0.
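To make the doubling concrete, here is a minimal back-of-the-envelope sketch (in Python) of raw per-lane and x16 bandwidth across the generations, using the commonly cited data rates and approximate spec release years. It deliberately ignores encoding, flit, and protocol overhead, so delivered throughput is somewhat lower than these figures.

```python
# Back-of-the-envelope PCI-Express bandwidth per generation.
# Raw rates only: encoding, flit, and protocol overhead are ignored,
# so real-world throughput is somewhat lower.

GENERATIONS = {
    # generation: (data rate in GT/s per lane, approximate spec release year)
    "1.0": (2.5, 2003),
    "2.0": (5, 2007),
    "3.0": (8, 2010),
    "4.0": (16, 2017),
    "5.0": (32, 2019),
    "6.0": (64, 2022),
    "7.0": (128, 2025),  # expected ratification window, per the article
}

def per_lane_gb_s(gt_per_s: float) -> float:
    """Approximate raw bandwidth per lane, per direction, in GB/s."""
    return gt_per_s / 8  # one bit per transfer per lane, eight bits per byte

for gen, (rate, year) in GENERATIONS.items():
    lane = per_lane_gb_s(rate)
    print(f"PCIe {gen} ({year}): {rate:>5} GT/s -> "
          f"~{lane:.2f} GB/s per lane, ~{lane * 16:.0f} GB/s per direction at x16")
```

At 128 GT/s, a x16 link works out to roughly 256 GB/s in each direction before overhead, double what PCI-Express 6.0 offers.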

“Our main piece here is to maintain that PAM4 signaling, maintain the Flit Mode that we developed for PCI-Express 6.0, all of those things, and really just focus on the speed doubling,” Solomon said. “Moving to 128 giga transfers per second is the focus. We’re first going to maintain backward compatibility. That’s a huge part of what has made PCI-Express, and all the PCI-Express specs we’ve shipped over the years, successful. We’re always trying for better and better power efficiency, although I laughed a little bit because you look at 128 giga transfers per second compared to our original two-and-a-half giga transfers per second. Yeah, it takes more power than it used to in 2003.”
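For readers who have not run into PAM4 before, the idea is that each symbol carries two bits on one of four signal levels instead of one bit on two levels, which is how the data rate doubles without doubling the symbol rate. The tiny sketch below uses the conventional Gray-coded level assignment purely for intuition; it is an illustration, not the normative encoding from the spec text.

```python
# Illustrative PAM4 mapping: two bits per symbol, four nominal signal levels.
# The Gray-coded assignment below is the conventional one and is shown only
# to build intuition about why PAM4 doubles bits per symbol versus NRZ.

PAM4_LEVELS = {  # bit pair -> nominal level
    (0, 0): -3,
    (0, 1): -1,
    (1, 1): +1,
    (1, 0): +3,
}

def encode_pam4(bits):
    """Pack a flat bit sequence into PAM4 symbols, two bits per symbol."""
    assert len(bits) % 2 == 0, "PAM4 consumes bits in pairs"
    return [PAM4_LEVELS[(bits[i], bits[i + 1])] for i in range(0, len(bits), 2)]

print(encode_pam4([1, 0, 0, 1, 1, 1, 0, 0]))  # -> [3, -1, 1, -3]
```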

A balance in capabilities also is important, he said, adding that “PCI-Express is not necessarily the fastest technology you can buy. It’s certainly not the cheapest technology to buy. But we try for this balance of the best bang for the buck – trying to provide really high bandwidth with a really reasonable implementation. So it’s the silicon technology you choose for the PHYs, it’s the PCB technology you choose.”

PCI-Express 7.0 also follows on previous specifications by offering organizations an array of options depending on their product needs, as outlined in the chart below:

Such options are a key point of PCI-Express. The number of lanes along the top of the chart is relative to silicon area, Solomon said, noting that 16 lanes will take up more silicon space than two lanes. However, you can implement the 16 lanes with a less expensive process technology; two lanes eat up less area, but implementing 128 GT/s on them will likely need more expensive silicon. “It’s just that it gives the ecosystem the opportunity to choose,” he said. “You can pick your bandwidth and then kind of see what’s important for your particular product and pick out one of the rectangles you want.”
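A rough way to see that trade-off is to tabulate raw per-direction bandwidth for a few lane widths and data rates. The sketch below is an approximation that ignores encoding and protocol overhead; the point is only that the same aggregate bandwidth can be reached with many slower lanes or a few faster ones.

```python
# Approximate raw per-direction bandwidth for a few lane width / data rate
# combinations, illustrating the "pick your rectangle" trade-off.
# Encoding and protocol overhead are ignored.

RATES_GT_S = {"5.0": 32, "6.0": 64, "7.0": 128}

for lanes in (1, 2, 4, 8, 16):
    cells = [f"PCIe {gen} x{lanes}: ~{rate / 8 * lanes:>4.0f} GB/s"
             for gen, rate in RATES_GT_S.items()]
    print(" | ".join(cells))
```

For example, a PCI-Express 7.0 x4 link lands at roughly the same raw bandwidth as a PCI-Express 5.0 x16 link (about 64 GB/s per direction), but the two designs sit at very different points on the silicon-area and process-cost curve.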

Some vendors used the PCI-SIG event to unveil their newest PCI-Express 7.0 wares. Rambus announced its PCI-Express 7.0 IP portfolio aimed at handling the high data rates that come with generative AI and HPC workloads. Included in the package are a high-bandwidth and low-latency controller, a retimer, a multi-port switch, and its PCIe XpressAgent to help customers quickly bring up first silicon.

Synopsys came out with its own PCI-Express 7.0 portfolio with a controller, IDE security module, PHY, and verification IP that will help chip makers address bandwidth and latency needs for moving AI workloads, while Cadence demonstrated its PCI-Express 7.0 IP transmitting and receiving 128 gigatransfers per second over non-retimed optics.


7 Comments

  1. During my 9 years (2012-2021) as a “Linux admin with a screwdriver” in a data center for a financial firm, PCI-Express 3 sure felt like the “Groundhog Day” version of server hardware.

  2. Where is PCIe 5.0? I want to build a new system, but I can’t find a MOBO with enough PCIe 5.0 slots. You’re lucky if you can find a MOBO with even one PCIe 5.0 slot. I want at least four PCIe 5.0 slots.

  3. TBH I remember when one had to be a bit careful about buying the right bloody cable to connect two IB endpoints, never mind whether IB products from different vendors would even talk to each other.

    PCIe has to be the most compatible, problem-free protocol in the computer racket. That is a good thing and I would not want to see it screwed up.

    If there is a great breakthrough needed in PCIe, it is increasing run lengths without retimers. That’s what people should be worried about – because the PCIe 5 spec is already too expensive and inflexible on the mobo & it is only going to get worse.
