Nvidia, AMD, And Intel Help Stuff The Coffers At Ayar Labs

Everybody wants to get rich in AI these days, and if you can’t do it by investing in the compute engine makers or the hyperscalers and cloud builders, then the next best thing to put your money into is probably some form of optical I/O.

It is no secret that high performance compute engines have a bandwidth and signaling problem. If you want to get data into and out of them quickly and at a reasonable capacity per second, and thus keep the dozens to tens of thousands of cores in an engine busy, and you are going to stick with copper wires, then you have to gang those wires up as tightly as possible, whether they are traces on interposers feeding into stacked memory or wires feeding into and out of SerDes that link compute engines together so they can operate in parallel.

The trouble is that copper wires are running out of length. Each time you double the bandwidth, you have to cut the length of the wire roughly in half because of distortion in the signal. This is a matter of physics and materials science, and everyone knows that eventually copper wires will have to be replaced by fiber optics. This seemed inevitable a decade ago, and thanks to the enormous bandwidth appetites of AI workloads, it is looking to be truly inevitable within the next several years. We also think that some form of optical switching, such as that employed by Google for the backbone of its TPU clusters, might also be inevitable, but that is further out in the future, perhaps. Or maybe not, if some of the people we have been talking to recently turn out to be correct. . . .
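To make that rule of thumb concrete, here is a minimal sketch of the scaling. The 25 Gb/sec lane rate and 3 meter passive copper reach used as the baseline are illustrative assumptions, not measured figures; the point is simply that usable reach falls off in inverse proportion to the lane rate.

```python
# Minimal sketch of the reach-versus-bandwidth rule of thumb described above:
# every doubling of the lane rate roughly halves the usable copper reach.
# The 25 Gb/sec / 3 meter baseline is an illustrative assumption.

def copper_reach_m(lane_rate_gbps: float,
                   base_rate_gbps: float = 25.0,
                   base_reach_m: float = 3.0) -> float:
    """Approximate usable copper reach for a given lane rate, assuming
    reach scales inversely with lane rate."""
    return base_reach_m * base_rate_gbps / lane_rate_gbps

if __name__ == "__main__":
    for rate in (25, 50, 100, 200, 400):
        print(f"{rate:>3} Gb/sec lane -> ~{copper_reach_m(rate):.2f} m of copper")
```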

In any event, there is much enthusiasm about silicon photonics – where electronic signaling meets fiber optics at the laser bar to buy each other data drinks – and in particular different approaches to co-packaged optics or optical interposers.

This is why it is no surprise that Ayar Labs, one of the innovators in co-packaged optics, has had another upround of capital with its Series D funding, which came in at $155 million and which has vaulted the company into unicorn status with a valuation that is now in excess of $1 billion.

Ayar Labs, which was founded by Milos Popovic, Rajeev Ram, Vladimir Stojanovic, Chen Sun, Mark Wade, and Alex Wright-Gladstein in 2015, has raised a total of $374.7 million across four funding rounds, plus seed rounds and debt financing. Advent International and Light Street Capital were the lead investors in this Series D round, with contributions from compute engine makers Nvidia, AMD, and Intel as well as chip etchers GlobalFoundries, Intel Foundry, and Taiwan Semiconductor Manufacturing Co.

Light Street is big into late stage startups, and counts Slack, Pinterest, GitLab, Unity Technologies, Uber, and Lyft as its core technology and media investments. Advent International has been around for four decades and had over $92 billion in assets under management at the end of 2023; it is the eighth largest private equity firm in the world and has its fingers in an innumerable number of companies, many of them firms most of us have never heard of. Which is what the real economy is made of, by the way.

The Series D funding round by Ayar Labs follows hot on the heels of the $400 million Series D round that Lightmatter raised back in October. Lightmatter, which has created an optical interposer, has raised $822 million across its four rounds and had a $4.4 billion valuation after it bagged that investment. At that time, the Series D round represented a 2X increase in total funding pushing a 4X increase in valuation, which we think is an interesting set of ratios.

Through its Series C round, Ayar Labs had raised a total of $219.7 million and had a valuation of around $820 million when that round came in last year. With the Series D round, Ayar has boosted its total funding by 1.7X but has only increased its valuation by 1.2X.
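For those keeping score, the arithmetic behind those multiples is simple enough to check. The snippet below just reworks the figures quoted in this story; the valuations are approximate, and the post-Series D Ayar Labs figure is pegged at the $1 billion unicorn threshold the company is said to have crossed.

```python
# Funding and valuation multiples worked out from the figures quoted above.
# Valuations are approximate; the post-Series D Ayar Labs figure is pegged at
# the $1 billion threshold the company is said to have crossed.

ayar_funding_through_c = 219.7    # $M, cumulative through Series C
ayar_funding_through_d = 374.7    # $M, cumulative through Series D
ayar_valuation_c = 820.0          # $M, approximate
ayar_valuation_d = 1_000.0        # $M, "in excess of $1 billion"

print(f"Ayar funding growth:   {ayar_funding_through_d / ayar_funding_through_c:.1f}X")
print(f"Ayar valuation growth: {ayar_valuation_d / ayar_valuation_c:.1f}X")

# Lightmatter, for comparison: $822M raised in total after a $400M Series D,
# so roughly a 2X jump in cumulative funding against a roughly 4X jump in valuation.
lightmatter_total = 822.0
lightmatter_prior = lightmatter_total - 400.0
print(f"Lightmatter funding growth: {lightmatter_total / lightmatter_prior:.1f}X")
```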

It is tempting to think that the investments by Nvidia, AMD, and Intel portend that these companies are looking to deploy the TeraPHY optical I/O chiplet and its SuperNova light source in some fashion in their compute engines. This may be true, but it is also true that by investing, these companies get an inside look at what Ayar Labs is doing and can move to the front of the line if they do choose to deploy its technology in some fashion. We know that Hewlett Packard Enterprise made a strategic investment and collaboration agreement with Ayar Labs back in February 2022 to add silicon photonics to its “Rosetta” Slingshot interconnect. But don’t jump to any conclusions based on funding.

“They are all investors and companies we are exploring many interesting opportunities with – most of which we can’t talk about yet,” Terry Thorn, vice president of commercial operations at Ayar Labs, teased when The Next Platform asked point blank if they were going to use TeraPHY and SuperNova on their compute engines. We can envision this happening, but there are a number of other approaches to co-packaged optics, and all three of these companies also have a habit of inventing their own stuff.

Thorn is interesting in that from 2007 through 2019 he was in charge of datacenter channel marketing and sales at Intel, specifically selling to the hyperscalers and cloud builders who are also natural customers of the low-level interconnect hardware that Ayar Labs has created for silicon photonics. The point is, Thorn is well aware of the long effort Intel made to commercialize silicon photonics for these tech titans and why it didn’t work, and now he is at Ayar Labs to make sure that this time around, it does.

We happen to think that there is good reason to believe that Ayar as well as other companies like Lightmatter, Celestial AI, and Eliyan all have a chance to make some inroads as silicon photonics bridges between compute engines and interconnects. There are lots of clever things that can be done. That said, what matters is when someone does something, and if it works. And thus far, we have seen very little silicon photonics deployed in testbeds, much less in proofs of concept and certainly not yet in production. The reason, as we discussed in a recent webinar with people from Microsoft Azure, AMD, Cerebras Systems, and Ayar Labs, is that no one yet knows the volume economics of silicon photonics, and companies making compute engines and those buying AI clusters are worried about reliability as well.

These issues will be worked out, because in the near term we have no choice. The 2025 generation of compute engines might not have silicon photonics, but we think the 2026 generation could and the 2027 generation almost certainly will. Copper’s time has run out.


1 Comment

  1. This is where the national labs (Sandia, Livermore, Oak Ridge, etc.) could be at the forefront, testing out and pushing new technology like this into the mainstream. When it’s such a big step change in infrastructure, the big hyperscalers are going to be nervous about going down the wrong route at scale. They just want to take what’s common and drive the price into the ground, with homegrown custom software on top, it seems. Can’t blame them.
