VCs Look Beyond Architecture for Next Chip Success Stories

At The Next AI Platform event in May, we brought together a few leading investors on the deep learning chip front to talk about how they weigh the architecture, software, competitiveness, and market potential of the various approaches to both training and inference.

Key takeaways from the panel include which workloads will drive AI chip startup success, how critical the software integration piece is to the viability of any architectural approach, where the largest chipmakers might face practical threats from smaller companies, and how some VCs evaluate competitive potential.

The full panel can be viewed below. It was filmed in front of a sold-out audience in San Jose on May 9. The discussion is hosted by Next Platform contributor (and long-time semiconductor writer) Stacey Higginbotham and features Vijay Reddy of Intel Capital, Kanu Gulati of Khosla Ventures, and Michael Stewart of Applied Ventures.

“We don’t want to underestimate how strong the incumbents are in [the AI chip] space, especially on the datacenter side. Nvidia will continue to be the dominant force, so as we look at investments, we don’t want companies coming in with theoretical speedups of 5-7X over a GPU. It has to be 50-100X better,” says Kanu Gulati of Khosla Ventures.

Gulati says that next to dramatic performance improvements, the quality of the software is the other big key, a point on which all the panelists agreed. “It is about the quality of the software and a software team that can understand what kinds of neural networks should be used, where they can cut corners on networks, and which kinds of mechanisms they can work with, in addition to the bag of tricks everyone has in terms of pruning, quantization, and leveraging sparsity. It’s about what else they can do, and how quickly, in a space that evolves this fast.”
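For readers who want a concrete sense of the “bag of tricks” Gulati mentions, below is a minimal PyTorch sketch of two of them: magnitude pruning (which creates the sparsity specialized silicon can exploit) and dynamic int8 quantization. The toy model, layer sizes, and pruning ratio are illustrative assumptions, not anything discussed on the panel.

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# Toy two-layer model standing in for a real network (hypothetical sizes).
model = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 10))

# Pruning: zero out the 50% of first-layer weights with the smallest
# magnitude, creating sparsity that specialized hardware can exploit.
prune.l1_unstructured(model[0], name="weight", amount=0.5)
prune.remove(model[0], "weight")  # bake the zeroed weights in permanently

sparsity = (model[0].weight == 0).float().mean().item()
print(f"layer-0 sparsity: {sparsity:.0%}")  # ~50%

# Dynamic quantization: store Linear weights as int8 instead of fp32,
# trading a little accuracy for smaller, cheaper inference.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)
print(quantized(torch.randn(1, 512)).shape)  # torch.Size([1, 10])
```

None of this is specific to any vendor on the panel; it simply illustrates the baseline optimizations that any new inference chip, and its compiler stack, would be expected to match or beat.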

One other thing all the panelists agreed on was that even the best architecture is lost to the market if it is not matched to specific workloads that can take full advantage of it. This is true for training and inference, in the datacenter and at the edge.

“There are over one hundred hardware companies right now,” says Intel Capital’s Vijay Reddy. “There’s going to be consolidation at some point, but there will be pockets of applications and areas where we can see big companies forming. Getting to know where those are, whether in automotive, surveillance, or elsewhere, is key. But right now, there are many competing architectures going after the same socket, and that gets complicated.”

And even with targeted workloads, the right architecture, and a software stack and team that can keep pace with rapid shifts in AI frameworks and use cases, it comes down to practicality. This is the more esoteric piece of the chip startup puzzle: price points, the lead over what the incumbents are already working toward, manufacturability at scale, and the other implementation details that ultimately decide the outcome.

As Reddy explains, “We have talked about analog memory, in-memory, and silicon photonics-based accelerators, but the key question here is: yes, it might be possible to get 2-3X better performance, but there is product risk. If someone can come along with a ‘good enough’ solution at a better price point, and more important, one that is compatible with the existing software ecosystem, that is what counts. We want what is most practical for particular applications, given that the software ecosystem is already up and running.”

In addition to discussing some of the novel technologies for AI hardware in the datacenter and at the edge, Michael Stewart of Applied Ventures pointed to technologies that already exist or are less expensive and less esoteric to develop and program for, including compute-in-memory architectures. He says trends like this portend “a shift in where priorities might lie in terms of technology development, as markets can take advantage of this faster and outcompete those that are more dependent on mainline compute.”

The panel often had to split the conversation between training and inference silicon, and then again between the datacenter and edge devices. But the latter is where a VC firm like Khosla is placing some of its key bets, in part because datacenter training is locked down firmly by Nvidia and datacenter inference still comes with much uncertainty.

“So far, we’ve spent more time on inference at the edge because that market is up for grabs. We’ve looked at a lot of approaches from companies trying to build the next chip for datacenter training and inference, but we have stayed away because of the strength of the incumbents,” says Gulati.

“We are seeking opportunities, but we need a combination of architecture, software, and vertical-focused applications, and everything should sync and present a clear narrative about why this approach can win over the incumbents,” Gulati adds. “On the inference side at the edge, it’s easier to differentiate because we can go after a more aggressive performance-per-watt number than what the incumbents have.”
