Upscale AI Nabs Cash To Forge “SkyHammer” Scale Up Fabric Switch

The first company that can make a UALink switch with high radix – meaning lots of ports – and high aggregate bandwidth across those ports that can compete toe-to-toe with Nvidia’s NVSwitch memory fabric and NVLink ports is going to make a lot of money. Aptly named chip upstart Upscale AI wants to be that first company, and it has just raised $200 million in its Series A funding to help fuel the development of its SkyHammer ASIC, which will support UALink as well as the ESUN standard put forth by Meta Platforms at the Open Compute Summit last October.
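As a rough way to think about the radix versus bandwidth tradeoff, here is a quick sketch of how a fixed budget of SerDes lanes on a switch die can be carved into many narrower ports or fewer fatter ones. The lane counts and speeds below are our own illustrative assumptions, not the specs of SkyHammer, NVSwitch, or any shipping UALink switch.

```python
# Illustrative only: trading switch radix against per-port bandwidth under a fixed
# SerDes budget. The numbers are assumptions for the sake of arithmetic, not the
# specifications of SkyHammer, NVSwitch, or any announced UALink ASIC.

TOTAL_SERDES_LANES = 256    # assumed lane count on the switch die
LANE_SPEED_GBPS = 200       # assumed 200 Gb/sec per lane

for lanes_per_port in (1, 2, 4):
    radix = TOTAL_SERDES_LANES // lanes_per_port
    port_bw_gbps = lanes_per_port * LANE_SPEED_GBPS
    aggregate_tbps = radix * port_bw_gbps / 1_000
    print(f"{lanes_per_port} lane(s) per port -> radix {radix}, "
          f"{port_bw_gbps} Gb/sec per port, {aggregate_tbps:.1f} Tb/sec aggregate")
```

The aggregate bandwidth is pinned by the SerDes budget no matter how the ports are carved up, so cranking up the radix means skinnier ports – which is exactly why a switch that offers both high radix and high per-port bandwidth is so hard to build.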

Add that to the slightly more than $100 million in seed funding raised in September 2025 from Mayfield and Maverick Silicon, and the total for Upscale AI comes to $300 million, and its valuation has crested above $1 billion, giving it unicorn status – whatever that is worth in today’s inflationary world. Nonetheless, this is a very large amount of money for a datacenter networking company just starting out, and indeed for any tech company with 150 employees – most of them engineers – trying to break into the very crowded AI market.

Tiger Global, Premji Invest, and Xora Innovation led the Series A round, with participation from Maverick Silicon, StepStone Group, Mayfield, Prosperity7 Ventures, Intel Capital, and Qualcomm Ventures.

Upscale AI did not precisely start out on its own, however. The company’s founders, Rajiv Khemani and Barun Kar, founded a company called Auradine in February 2022 that was working on AI and blockchain computing and networking chips at the 4 nanometer and 3 nanometer nodes. Auradine had itself raised $161 million in two rounds through 2024, and added another $153 million in a Series B round in April 2025. Khemani and Kar decided to spin off some of the networking efforts at Auradine into a new company, which they called Upscale AI, in May 2024 to more directly chase the $100 billion TAM for AI interconnects that is expected by the end of the decade.

Khemani is well known to readers of The Next Platform. Way back in 1990, Khemani was a senior product manager in charge of SparcServers and Solaris at Sun Microsystems, and he did stints at NetApp and Intel in charge of strategy and marketing for various business units. In 2003, he became chief operating officer at chip startup Cavium Networks, which was founded in 2000 to make MIPS processors but famously got into the Arm server racket with the ThunderX server CPUs that launched in 2014. Cavium acquired upstart programmable switch ASIC maker XPliant that year, and in June 2016 it paid $1.36 billion to buy QLogic for its storage business. And in November 2017, chip giant Marvell paid $6 billion to acquire Cavium to begin its push into the datacenter in earnest. Khemani had left Cavium in 2015 to be co-founder and chief executive officer of Innovium, a designer of high bandwidth, minimalist hyperscale Ethernet switch ASICs called TeraLynx, which Marvell acquired in August 2021 for $1.1 billion to further advance its datacenter chip aspirations.

If Marvell had not, just two weeks ago, acquired XConn, a maker of PCI-Express and CXL switches, for $540 million – a deal aimed at bolstering its UALink efforts, according to Marvell – we would have thought that Khemani would end up back at Marvell yet again through an acquisition of Upscale AI. Perhaps Marvell did try to buy Upscale AI? Almost certainly, given the history. In any event, Marvell plus XConn plus Celestial AI means Marvell can better compete against Upscale AI, Astera Labs, Broadcom, Cisco Systems, Nvidia, and possibly Microchip in the scale up fabric part of the datacenter networking market.

Kar, the other Auradine and Upscale AI co-founder, was senior vice president of engineering and member of the founding team at Palo Alto Networks, a maker of firewalls and other security stuff. Before that, way back in the wake of the Dot Com Boom, Kar was senior systems manager for Juniper Networks and managed its Ethernet router and switch products.

Upscale AI has been pretty tight lipped about the specifics of its ASICs, but we had a chat recently with Arvind Srikumar, senior vice president of product and marketing at the company, to try to get some insight into what SkyHammer is – and is not – and how Upscale AI plans to differentiate itself.

First and foremost, Upscale AI wants to give customers choices – and with scale up AI networks, there is really only one practical choice these days, and that is NVSwitch. Which is one of the reasons (there are others) why Nvidia has been so successful in the GenAI Boom.

“I always believed heterogeneous compute is the way to go, and heterogeneous networking is also the way to go,” Srikumar tells The Next Platform. “People should have the choice to mix and match things because everyone is specific and special, and mixing and matching allows you to optimize according to everyone’s needs. Upscale AI focuses on democratizing networking for AI compute, and we believe in heterogeneous compute. We believe that Nvidia has great technology and is an amazing company when it comes to innovation. But going forward, with the pace of AI innovation, I don’t think any one company is going to provide all the technology that is needed for AI – especially with what lies in the future. So that inevitably means there is going to be different kinds of compute from different vendors.”

Like us, Srikumar believes that PCI-Express switching works fine when you have a few CPUs talking to a few more GPUs, the relative memory bandwidth of the GPUs is fairly low, and the CPUs and GPUs are all pretty closely packed in a server node. When Upscale AI started back in early 2024, the UALink consortium and the ESUN standard being proposed by Meta Platforms did not yet exist, but the idea of heterogeneous infrastructure certainly did – and not just to create a single set of infrastructure to do all jobs, but to create infrastructure that better matches the workflow for different jobs.

“In the future, a single GPU might not do everything and there is going to be heterogeneous compute,” explains Srikumar. “Certain CPUs or GPUs or XPUs might be great with precode and prefill, and other devices might be great with decode. But what happens when Vendor X is great at prefill and Vendor Y is great at decode? Switching has now become the heart of this machine; it ties all of these things together, and it has to provide fairness in connectivity, and it has to scale and be reliable, too. The reliability is what matters most because whatever you do directly affects all compute in the system.”
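To make that prefill/decode split concrete, here is a toy dispatcher of our own – a hypothetical sketch, not anything Upscale AI has described – that sends the prefill phase of an inference request to one vendor’s accelerator pool and the decode phase to another, with the scale up fabric being the thing that lets state move between the two pools quickly.

```python
# Toy sketch of disaggregated inference across a heterogeneous scale up domain.
# The vendor labels and pool sizes are hypothetical; the point is only that a
# memory-semantic fabric lets prefill and decode land on different devices.

from dataclasses import dataclass

@dataclass
class Pool:
    vendor: str      # hypothetical vendor label
    devices: int     # number of accelerators in the pool
    good_at: str     # "prefill" or "decode"

POOLS = [
    Pool(vendor="vendor_x", devices=32, good_at="prefill"),
    Pool(vendor="vendor_y", devices=32, good_at="decode"),
]

def dispatch(phase: str) -> Pool:
    """Pick the pool whose devices are best suited to this phase of a request."""
    for pool in POOLS:
        if pool.good_at == phase:
            return pool
    raise ValueError(f"no pool handles phase {phase!r}")

# Prefill runs on one vendor's pool, then the cached state is handed over the
# scale up fabric to the other vendor's pool for decode.
print(dispatch("prefill").vendor, "->", dispatch("decode").vendor)
```

The dispatcher itself is trivial; the hard part, as Srikumar says, is a switch that gives both pools fair, reliable, low latency access to each other.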

The advent of UALink, ESUN at Meta Platforms, and SUE at Broadcom is meant to provide a memory coherent fabric for compute engines. The UALink protocol in particular is able to match the feeds and speeds of the NVLink protocol from Nvidia – in terms of latency and bandwidth per port – and it is not clear whether the others can. But just because the UALink specification is ready does not mean that PCI-Express and Ethernet ASIC vendors have come up with a UALink switch that can match NVSwitch. The threat that they might is why you see Nvidia licensing its coherent fabric interconnect designs (and possibly making the chiplets) with its NVLink Fusion effort. Initially, Nvidia would only allow customers to add NVLink ports to their custom accelerators if they were going to use Nvidia’s “Grace” or “Vera” Arm server CPUs, or to add them to their custom CPUs if they were going to link them to Nvidia GPUs. But we think that is slipping, and we think that Amazon Web Services has the capability to add NVLink ports to both its future Graviton CPUs and Trainium XPUs, which means Nvidia is selling AWS NVSwitch ASICs as well to link them together.

We don’t know what that costs, but what we do know is that UALink switches plus UALink ports have to be less expensive and deliver at least the same performance as the NVLink-NVSwitch combo from Nvidia. And if it is wise, Upscale AI will run the UALink, ESUN, and SUE protocols atop its switch. Upscale AI is fine with supporting these protocols, and would no doubt be fine with supporting NVLink should that ever become an option.

But Upscale AI looks down its nose at the efforts where people are making a UALink or ESUN or SUE switch by glorifying a PCI-Express switch ASIC or ripping the guts out of an Ethernet switch ASIC.

“A lot of what I see is more or less like retrofitting PCI-Express – taking the PCI-Express substrate and trying to do something else – or another vendor comes in and takes Ethernet and tries to retrofit it. But the point about this whole memory domain is that it cannot be retrofitted. That will not provide your customers with a truly optimized, scale up only stack because what you will end up doing is taking a substrate and trying to remove stuff that is not needed. People who have been in the ASIC business for a long time know that you can remove a lot of blocks, but the primitives still remain the same. There is a basic DNA of every ASIC that remains the same.”

Which is why Khemani and Kar set out to build a memory fabric ASIC from the ground up to do only that, and then make sure it supports the protocols for memory semantics as they come out.

Thus far, Upscale AI has not said much more about what it is up to. More details about the SkyHammer ASIC will be released at the end of this quarter – no doubt around the same time as the GTC 2026 conference hosted by Nvidia. The plan, says Srikumar, is to ship samples of SkyHammer to customers at the end of 2026, with volume shipments in 2027 alongside the generations of GPUs, XPUs, switches, and racks coming out at that time. The switches need to be in the hands of OEMs and ODMs two quarters before the compute engines are ready to ship so they can put systems together and test them.

“I can’t say much right now,” says Srikumar. “But I can assure you of this: We are coming up with a high radix switch and an ASIC that enables all of this and we will be able to talk about it by the end of Q1.”

The UALink consortium says the 1.0 spec can allow for 1,024 compute engines in a single-level UALink fabric. We would love to see the ASIC that can do this. Wouldn’t you?
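As a back-of-the-envelope check on what single level implies, here is the arithmetic, with the 1,024 endpoint figure taken from the UALink 1.0 spec as described above and the per-endpoint link count being our own assumption.

```python
# Back-of-the-envelope sizing for a single-level (one switch hop) scale up fabric.
# The 1,024 endpoint ceiling comes from the UALink 1.0 spec; the links-per-endpoint
# figure is an assumption for illustration.

ENDPOINTS = 1024            # UALink 1.0 single-level fabric ceiling
LINKS_PER_ENDPOINT = 4      # assumed; real accelerators vary

# For any endpoint to reach any other in one switch hop, every switch plane needs
# a port for every endpoint, so the radix floor equals the endpoint count.
min_switch_radix = ENDPOINTS

# Each endpoint link lands on its own plane, so the plane count (and thus the
# switch count) matches the per-endpoint link count.
switch_planes = LINKS_PER_ENDPOINT

print(f"minimum switch radix per plane: {min_switch_radix} ports")
print(f"switch planes needed: {switch_planes}")
```

The radix floor scales linearly with the endpoint count, which is why a 1,024 endpoint single-level fabric is such a tall order for any one ASIC.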
