What If Omni-Path Morphs Into The Best Ultra Ethernet?

UPDATED: Nvidia is a member of the Ultra Ethernet Consortium.

The jury is still out on a lot of things about this exploding AI market and its re-convergence with traditional HPC systems for running simulations and models. But one idea that a lot of people are getting behind in a big way is that, ultimately, Ethernet will be improved to the point that it makes InfiniBand unnecessary.

That is the mission of the Ultra Ethernet Consortium, which was launched nearly a year ago to, among other things, break the hegemony of InfiniBand in low-latency networking for AI training and HPC simulations. The consortium also wants to scale this enhanced Ethernet to 1 million endpoints in a single, relatively flat fabric that does not require as many network tiers as InfiniBand and other proprietary interconnects would. And it wants an option with high bandwidth, low latency, and immense scale that is not controlled by a single vendor, as Nvidia’s InfiniBand most certainly is.

The pricing for 200 Gb/sec and 400 Gb/sec InfiniBand fabrics proves it: it is not uncommon for the network to comprise 20 percent or more of the cost of a cluster in the HPC and hyperscale/cloud markets, where customers are used to spending under 10 percent on the interconnect.

The basic idea with the UEC is to completely rework the Ethernet stack so it has the kind of end-to-end fabric – and telemetry for it – that InfiniBand has, which allows it to do congestion control and adaptive routing. The idea is to implement this in switch and network adapter hardware as well as in the networking software that runs on these devices. The UEC also wants to implement flexible packet ordering in Ethernet – what is often called packet spraying – to avoid congestion in the first place. It also wants to create a new Remote Direct Memory Access method that builds on InfiniBand’s implementation of RDMA as well as Ethernet’s RoCE. And all of this is to be done in a standardized way that allows for differentiation across vendors but commonality for compatibility.
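The packet spraying idea can be sketched in a few lines. This is a toy Python illustration (not any vendor’s implementation): packets of one message are dealt across multiple paths per-packet rather than hashed per-flow, the paths deliver in an unpredictable interleave, and the receiver restores order from sequence numbers – which is exactly why the UEC treats flexible packet ordering as a feature rather than a fault.

```python
import random

def spray(message_packets, num_paths):
    """Assign each packet of a message to a path round-robin
    (per-packet load balancing rather than per-flow hashing)."""
    paths = [[] for _ in range(num_paths)]
    for seq, packet in enumerate(message_packets):
        paths[seq % num_paths].append((seq, packet))
    return paths

def deliver(paths):
    """Simulate out-of-order arrival: each path delivers in order,
    but the paths interleave unpredictably."""
    queues = [list(p) for p in paths]
    arrived = []
    while any(queues):
        q = random.choice([q for q in queues if q])
        arrived.append(q.pop(0))
    return arrived

def reassemble(arrived):
    """The receiver restores order from sequence numbers, so the
    fabric never has to enforce in-order delivery itself."""
    return [pkt for _, pkt in sorted(arrived)]

packets = [f"pkt{i}" for i in range(8)]
assert reassemble(deliver(spray(packets, 4))) == packets
```

No single path carries the whole flow, so a hot spot on one link slows only a fraction of the message instead of stalling all of it.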

The founding members of the UEC include AMD, Arista Networks, Broadcom, Cisco Systems, Eviden (Atos), Hewlett Packard Enterprise, Intel, Meta Platforms, Microsoft, and Oracle, and the effort and its intellectual property are being shepherded by the Linux Foundation.

In November of last year, when membership in the UEC was first opened, 27 new companies were added, including some well-known names. It is important to know who is on board, so here are the ones who identified themselves in this group: Alibaba, Alphawave Semi, Baidu, Bytedance, Cadence, Cornelis Networks, Dell, DriveNets, DreamBig, Enfabrica, Fujitsu, Huawei Technologies, IBM, Infraeo, Juniper Networks, Keysight, Marvell, NeuReality, H3C Technologies, Nokia, Samsung, Salience Labs, Spirent Communications, Synopsys, Tencent, VNET, and XSight Labs.

By March of this year, 45 new companies had joined the effort above and beyond the original ten on the steering committee, for a total of 55 companies. From logo counts, we know that there are at least 63 members of the UEC today, and here are the ones we have identified: Accellink, Asterfusion, Centec, Ciena, Credo, Edge-Core Networks, Fathom Radiant, Graphcore, Grovf, Internet Initiative Japan, Kalray, Lawrence Livermore National Laboratory, Lenovo, MangoBoost, MemVerge, Molex, Preferred Networks, Qumulo, Ruijie Networks, Sandia National Laboratories, Scala Computing, Stelia, Supermicro, YunSilicon, Zenosic, and ZTE. Eight of these companies – we don’t know which ones – joined after March.

As far as we know, Google and Amazon Web Services are not members, but they could be quietly involved – they surely stand to benefit from an InfiniBand alternative. And maybe even Nvidia is because it needs its Spectrum Ethernet to eventually be up to UEC snuff. We have reached out to Nvidia to find out if it has joined and will update this story when we find out.

UPDATE: Here is the statement from Nvidia: “Nvidia is a member of the UEC because our strategy is to support networking specifications that can be beneficial to our customers. We may want to offer a UEC version of Ethernet in the future, alongside Spectrum-X and potentially other specifications in the future.”

So, we guessed right, and Nvidia is doing what you would expect it to do: covering all of its bases and your options.

The point is, all of the important companies want Ethernet fixed, and there are 715 techies who are working together to make that happen. The UEC 1.0 specification is expected to be released in the third quarter of this year.

Phil Murphy, co-founder and chief executive officer of Cornelis Networks, is one of those techies who is not only working on the UEC spec but also on Omni-Path interconnects that will make use of it.

Yes, you heard that right.

Murphy probably knows as much about InfiniBand as Nvidia, being the co-founder and vice president of SilverStorm Technologies, which was acquired by QLogic in 2006 and was part of its InfiniBand portfolio until Intel bought that TrueScale InfiniBand switch and adapter business from QLogic for $125 million back in January 2012. Significantly for Intel and now Cornelis Networks, Intel paid $140 million to buy the “Gemini” and “Aries” interconnects from Cray in April 2012, with the idea of making an even better InfiniBand. And in September 2020, Murphy was one of the people behind Cornelis Networks acquiring the Omni-Path business – including those IP portfolios as well as existing products and support contracts from customers – from Intel.

Many of the HPC centers in the United States – importantly Sandia and Lawrence Livermore as well as the Texas Advanced Computing Center at the University of Texas – wanted an alternative to InfiniBand or proprietary interconnects like HPE/Cray’s Slingshot, and they have been funding the redevelopment of Omni-Path. And now, Cornelis Networks is going to be intersecting its roadmap with Omni-Path switches and adapters with the UEC roadmap.

We drilled down deeply into the Omni-Path roadmap from Cornelis Networks back in August 2023, which was only a month after the UEC was launched and well before the company had a chance to chew on what was up. Here is a review of that roadmap for your reference:

When we talked to Murphy recently, aside from talking about the nature of AI training and who was going to ultimately be the one doing it – Murphy thinks that the hyperscalers and cloud builders will be the only ones who can afford to do training and the rest of the world will license models from them and run them either locally or in the cloud – we had one question: Just like RoCE is Ethernet trying to pretend to be InfiniBand, can you make Omni-Path pretend to be UEC and skin it that way?

“That’s exactly what we are going to do,” Murphy said with a laugh. “We are going to have an Ethernet capability with Omni-Path Express. The hyperscalers and clouds want UEC to be multi-vendor and interoperable, so we are going to have to abide by the specifications, but we already have these technologies – credit-based flow control, congestion control and dynamic adaptive routing – that are part of UEC spec.”

Omni-Path Express, or OPX for short, is what Cornelis Networks calls its Omni-Path product line.
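The credit-based flow control Murphy mentions is what makes fabrics like Omni-Path lossless: a sender may only transmit while it holds credits, one per buffer slot at the receiver, so a downstream buffer can never be overrun and nothing is dropped and retransmitted the way plain Ethernet does it. Here is a toy sketch of the mechanism (our illustration, not the Omni-Path implementation):

```python
from collections import deque

class CreditLink:
    """Toy credit-based flow control: the sender spends one credit per
    packet, and the receiver returns a credit each time it drains a
    buffer slot. With zero credits the sender stalls instead of
    overrunning the receiver, making the link lossless by construction."""

    def __init__(self, buffer_slots):
        self.credits = buffer_slots       # credits held by the sender
        self.rx_buffer = deque()          # receiver's buffer slots

    def send(self, packet):
        if self.credits == 0:
            return False                  # back-pressure: sender must wait
        self.credits -= 1
        self.rx_buffer.append(packet)
        return True

    def consume(self):
        """Receiver drains one packet and returns a credit to the sender."""
        packet = self.rx_buffer.popleft()
        self.credits += 1
        return packet

link = CreditLink(buffer_slots=2)
assert link.send("a") and link.send("b")
assert not link.send("c")        # no credits left: sender stalls, nothing drops
assert link.consume() == "a"     # credit returned to the sender...
assert link.send("c")            # ...so transmission resumes
```

The same pressure propagates hop by hop, which is why congestion in a credit-based fabric shows up as stalled senders rather than dropped packets.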

For Cornelis Networks, the ability to support Ultra Ethernet protocols atop its Omni-Path iron has been greatly simplified by its adoption, several years ago, of the libfabric library from the OpenFabrics Interfaces working group, replacing the InfiniBand Verbs and QLogic PSM software layers that had been used in prior generations of QLogic and Intel products. The UEC is also standardizing on libfabric as its northbound API. Which means it is not going to be that hard for Cornelis Networks to make Omni-Path look like it is speaking Modern Ethernet as embodied in the UEC spec.

Some history is in order to make sense of this. As we have said many times, the original goal of InfiniBand was to replace PCI-Express, Fibre Channel, and maybe Ethernet, and to create a universal, converged fabric for all devices, PCs, and servers. The TrueScale variant of InfiniBand from QLogic employed a technique called Performance Scaled Messaging, or PSM, which QLogic certainly believed offered better scale than the InfiniBand Verbs approach. But even with that, AI and HPC systems are scaling far beyond the design specs from more than two decades ago, which is why Cornelis Networks put together a new software stack based on the libfabric library, replacing the PSM provider that was part of the QLogic TrueScale and Intel Omni-Path stacks with the OPX provider from the OpenFabrics Interfaces working group.

Here is how the InfiniBand and Omni-Path stacks have evolved through the current 100 Gb/sec Omni-Path Express switches:

And here is what the UEC stack looks like:

Given that both Ultra Ethernet and Omni-Path talk through the libfabric API, as long as Cornelis Networks adds extensions to its libfabric driver in lockstep with the UEC, then it should be relatively easy to make Omni-Path speak Modern Ethernet at the adapter level and then revert to native transport in the switches.
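The reason a shared northbound API makes this easy is libfabric’s provider model: the application codes to one interface, and a pluggable provider decides what actually goes on the wire. The sketch below is our own Python illustration of that pattern – the names (`fi_send`, an “opx” and a “uet” provider) loosely echo libfabric naming but are hypothetical, not the real C API:

```python
class Provider:
    """Stand-in for a libfabric-style provider: the application codes
    to one API; the provider decides the wire transport underneath."""
    name = "base"
    def wire_format(self, payload):
        raise NotImplementedError

class OpxProvider(Provider):
    name = "opx"                       # hypothetical Omni-Path provider
    def wire_format(self, payload):
        return ("omni-path-native", payload)

class UetProvider(Provider):
    name = "uet"                       # hypothetical Ultra Ethernet provider
    def wire_format(self, payload):
        return ("ultra-ethernet", payload)

# Providers register under a name, the way libfabric enumerates them.
REGISTRY = {p.name: p for p in (OpxProvider(), UetProvider())}

def fi_send(provider_name, payload):
    """Same northbound call regardless of the fabric underneath --
    the application never sees which transport carried the bytes."""
    return REGISTRY[provider_name].wire_format(payload)

assert fi_send("opx", b"hello")[0] == "omni-path-native"
assert fi_send("uet", b"hello")[0] == "ultra-ethernet"
```

Swap the provider and the application is none the wiser, which is precisely the trick Cornelis Networks is counting on: speak UEC at the adapter edge, run native Omni-Path between the switches.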

By the way, this is exactly what Cray did with the Cluster Compatibility Mode setup for the “SeaStar” interconnect used in the XT series of massively parallel supercomputers launched back in 2010. Applications that were written for Ethernet atop Linux hit this CCM driver and didn’t know they were not actually talking to Ethernet at all. Cray also added Open Fabrics InfiniBand drivers to CCM for the same reason.

With this year’s 400 Gb/sec Omni-Path Express CN5000 switches and adapters from Cornelis Networks, all of that OFA Verbs and QLogic PSM2 support is deleted and the OFI libfabric provider layer is what is supported. This product, says Murphy, was already cooked and will not support UEC specifications. But things start to get interesting with the 800 Gb/sec Omni-Path CN6000 switches and adapters coming in 2026.

“We are coming out in early 2026 with the 800 Gb/sec product, and that will probably be too early to be fully compatible with Ultra Ethernet, but it will have some capabilities,” Murphy tells The Next Platform. “And for most of the hyperscalers and cloud builders, what they care about is a path to Ultra Ethernet. But in late 2026 and early 2027, you will see real Ultra Ethernet products out there.”

And oddly enough, Cornelis Networks will be supplying some of them. To which we say: Intel should have never sold off Omni-Path. Everything was coming its way.



    • At the hyperscalers and cloud builders, Ethernet is already not Ethernet. They have stripped down things with comparatively limited protocol support. If it runs BGP and looks like it is running TCP/IP, will it still be Ethernet?

      • Ethernet is a well defined protocol stack covering layer 0 to layer 2 if I recall the ISO OSI model correctly. If you strip out parts of it leaving just enough to move upper layer protocols I would say you no longer have Ethernet. Just like if you take all the stuff that was built into the IB spec that lets one actually manage and reconfigure a network, and you add it to Ethernet, well that might as well be IB with a different frame format. Really, the reason to do that is to get something as functional as IB without being beholden to the Borg, er, ah, to nVidia.

        • Functionally, “Ethernet” today is a frame format, and not much else. It does not cover Layers 0 and 1 itself (though IEEE defines L0/L1 interfaces, such as PHYs and their functionality, mostly to fit Ethernet frames being sent/received), and a lot of what used to define “Ethernet” is simply omitted/obsolete (CSMA/CD, anyone?)

  1. It should be interesting to see how this “hybridization” pans out (for backwards compatibility, and/or special features), with, say, Ultra-Ethernet/OPX, Ultra-Ethernet/InfiniBand, Ultra-Ethernet/Slingshot, and Ultra-Ethernet/Tofu breeds of converged “mutants” of interconnection. Cornelis’ CN6000 focus on libfabric looks like the right decision to foster this on the Omni-Path side of things IMHO (very much easing the process).

    • Color me skeptical.
      If I get it right, the plan is to make software think it is using Ultra-Ethernet via the usual shim/abstraction layer, and then the frames on the wire will be some “native” transport, such as Omni-Path? This means all switches along the way need to know OP?
      The point is to have multiple vendors offering the HW, so it is as cheap as Ethernet and as functional as OP or IB (and better), not to just be compatible at the SW level

      • Some of the best things about Ethernet are the range of vendors, the competitive pricing and the sheer ubiquity. If that happens with U/E then that will be nice.

  2. Hi, Timothy – there is a typo in the second to last paragraph: “But in late 2026 and early 2017” should be “ But in late 2026 and early 2027”, I think.



  3. True but that’s to make it so those who already have a proprietary but high speed network can interoperate with this Ultra Ethernet, replacing gear over time rather than one big rip and replace, if I’m not mistaken.
