It takes money to make money, and if you want to break into the switch ASIC business in the datacenter, even if you are a low-cost designer of such chips, you had better have some rich friends to help the business take off.
The odds of success for Innovium, one of the few remaining independent upstart switch ASIC vendors, just went up substantially with a $170 million injection of Series F financing, a round led by Premji Invest, DFJ Growth, and BlackRock with contributions from existing investors Greylock, Capricorn, WRVI, Qualcomm Ventures, Redline, S-Cubed Capital, and DAG. With this round of funding and the traction that Innovium has seen with its Teralynx family of switch ASICs, the word on the street is that it has a valuation of around $1.2 billion, giving it unicorn status. To date, Innovium has raised more than $350 million since it was established in December 2014, and Rajiv Khemani, the company’s co-founder and chief executive officer, tells The Next Platform that a chunk of the more than $180 million it raised in the Series A through Series E rounds was still in the bank when the $170 million motherlode came in.
That’s not too shabby for a company that has put three different switch ASICs into the field in a cut-throat merchant silicon market dominated by Broadcom but also with intense competition from Nvidia/Mellanox, Intel/Barefoot Networks, Marvell, and sometimes Cisco Systems, Juniper Networks, Hewlett Packard Enterprise (through its Cray and 3Com acquisitions), and several others.
The 12.8 Tb/sec Teralynx 7 ASIC, which launched in April 2018 and has been in volume production for some time, has 256 SerDes running at 56 Gb/sec, and the lower-end 6.4 Tb/sec Teralynx 5, which has 128 SerDes running at 56 Gb/sec, was launched at our Next IO Platform event last September and has been sampling for a while. The Teralynx 8 chip, which has 56 Gb/sec native signaling with PAM-4 encoding added to drive the effective bandwidth, has 256 SerDes running at 112 Gb/sec, and therefore can deliver 25.6 Tb/sec of aggregate switching bandwidth in a single chip. We detailed the Teralynx 8 back in May when it launched, and it is set to start sampling sometime here in the second half of 2020. It took about $50 million to develop each chip, according to Khemani, and with some sales and marketing costs taken out, you can get a sense of how much money is still in the kitty at Innovium – our guess is somewhere around $150 million – which is enough to do a lot of things going forward.
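Those aggregate bandwidth figures fall out of simple arithmetic on the SerDes counts. As a back-of-the-envelope sketch – assuming, as is typical for these parts though not stated in Innovium’s specs, that a 56 Gb/sec lane carries 50 Gb/sec of usable Ethernet bandwidth after encoding and FEC overhead, and a 112 Gb/sec PAM-4 lane carries 100 Gb/sec:

```python
# Back-of-the-envelope check of the Teralynx aggregate bandwidth figures.
# Assumption (ours, not an Innovium spec): 56 Gb/sec signaling lanes carry
# 50 Gb/sec of usable bandwidth, and 112 Gb/sec PAM-4 lanes carry 100 Gb/sec,
# once encoding and forward error correction overhead are stripped out.

def aggregate_tbps(serdes_count: int, usable_gbps_per_lane: int) -> float:
    """Aggregate switching bandwidth in Tb/sec: lanes times usable lane rate."""
    return serdes_count * usable_gbps_per_lane / 1000

chips = {
    "Teralynx 5": (128, 50),   # 128 lanes, 56 Gb/sec native signaling
    "Teralynx 7": (256, 50),   # 256 lanes, 56 Gb/sec native signaling
    "Teralynx 8": (256, 100),  # 256 lanes, 112 Gb/sec with PAM-4 encoding
}

for name, (lanes, rate) in chips.items():
    print(f"{name}: {aggregate_tbps(lanes, rate):.1f} Tb/sec")
```

Run it and the marketed numbers come back: 6.4 Tb/sec, 12.8 Tb/sec, and 25.6 Tb/sec respectively.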
“To start, this allows us to expand our product line and expand our customer engagements – and do that on a worldwide basis,” explains Khemani. “We want to build a company that will, on a sustained basis, deliver breakthrough network silicon and become a long-term partner to organizations. We started with a clean sheet of paper, and we want to keep bringing a lot of innovation as we go forward. The $170 million investment validates our vision, execution, and momentum in this market and it gives us stability, it gives us a multi-year runway to accelerate our innovation.”
So far, Innovium has been able to deliver a lower cost per bit for both the 12.8 Tb/sec and 25.6 Tb/sec switch ASIC generations – or it looks that way at list price, anyway – compared to Broadcom and its other rivals that play in the hyperscaler and cloud builder arenas. With the coronavirus pandemic in full swing since March, there has never been a better time to have an alternative to Broadcom ASICs. The hyperscalers and cloud builders are seeing huge increases in traffic on their clustered systems as employees work from home and families stay at home, so capex spending on servers, storage, and switching is expected to climb in 2020 and beyond even with economies around the world being adversely – and uncertainly – impacted by the pandemic. Moreover, as we have pointed out in the past, the move to 5G wireless service is going to put a massive load on the network, with device speeds increasing by a factor of ten to 10 Gb/sec and base station density having to increase by at least a factor of ten because 5G signals are high bandwidth but short range. Multiply those together, and there is at least a 100X increase in the amount of networking at the 5G edge, and that will eventually come home to the datacenter networks at the core.
At the moment, Innovium is already positioned to catch the first wave, driven by cloud and hyperscaler expansion, and is getting into position to catch the second wave when it starts. The company has over 200 employees right now, and it won’t double headcount every year as a lot of fast-growing startups do – Khemani and his team are stingy with the budget and want to put as much money into product research and development as they can.
The strategy is working. In the first half of 2020, revenues for Innovium grew by more than 5X compared to the first half of 2019. That growth is very likely off a relatively small base, of course, but it is still pretty significant. Moreover, zoom in on the SerDes count rather than the port count – you can carve a port into any number of different bandwidths, after all – and consider only those chip vendors shipping 56 Gb/sec SerDes at all: Innovium cites statistics from the 650 Group showing that, by SerDes count, it had a 23 percent share of worldwide switch ASIC sales in 2019, compared to 76 percent for Broadcom and only 1 percent for all of the others combined. This is a heroic market share leap, to be sure, but still in a very small part of the market. It is a very important part of the market, however, and it shows that Innovium has what it takes to compete with Broadcom and others in the datacenter. By the way, the company has engagements – proofs of concept as well as paying customers – with the majority of the top 25 hyperscalers and cloud builders. That’s something, too.
This is the kind of price/performance war that AMD waged against Intel in the datacenter in the mid-2000s, when its Opteron chips carved out a similar share of the X86 server chip market, and that it is trying to win once again against Intel here in the 2020s with its Epyc chips.
Just like server buyers in the datacenter want CPU chip diversity to drive innovation and price/performance, datacenter switch buyers want diversity from their switch ASICs as well as the ability to run various network operating systems on those switches – including homegrown ones. These customers buy roadmaps, as the saying goes, but they are also comparing different implementations of Remote Direct Memory Access (RDMA) protocols that are so vital to modern distributed computing and storage as well as the kinds of telemetry for doing root cause analysis that are available out of the box from the ASIC maker. And more than a few of them want a consistent set of silicon that spans from top of rack switches to leaf switches to spine switches – even if they may use different operating systems at different layers in the network. Maybe especially because they want to run different operating systems at different parts of the network – or to at least have that choice.
At the moment, Khemani says that a lot of Innovium customers are moving towards the SONiC network operating system, which was created by Microsoft and set free in the open source community, but that doesn’t mean they will stay there. They might go with ArcOS from Arrcus or a beefed-up Cumulus Linux from Nvidia at some point in the future – or something else that emerges. And that is the point. The customers get to choose, and that choice is really only limited by how fast a NOS can be qualified on a particular ASIC. With its early customers, Arrcus has been able to get through this qualification process with greater speed than rivals and to show better performance running real workloads, so it is still very early days in the War of the NOSes. Don’t think that the open source platform will necessarily win, or that ArcOS will stay closed source for that matter. There are a lot of pieces still on this chess board. In fact, ArcOS is very router-centric, as are hyperscalers and cloud builders, and we would not be surprised to see the Teralynx 9 provide a lot more Layer 3 features and higher-end routing features to reflect this. For instance, support for large external buffers could be coming, we think. Time will tell.