It would be interesting to find out how many recent college graduates in electronics engineering, computer science, or related fields expect to roll out their own silicon startup in the next five years compared to similar polls from ten or even twenty years ago. Our guess is that only a select few now would even consider the possibility in the near term.
The complexity of chip designs keeps growing, which drives up design costs and limits the number of startups that can make a foray into the market. Estimates vary, but bringing a new chip to market can cost upwards of $120 million, depending on design, software, and manufacturing choices. These high barriers stifle competition and leave the larger hardware market at the mercy of a cadence set by only a few.
A hardware market that devours rather than encourages competition means it is harder than ever to pull off a successful hardware startup. Taking the (arguable) figures of $50 million for design, $50 million for software, and at least another $20 million for the mask set, the opportunity for emerging semiconductor engineers will be limited to the relative few companies that can afford to make their own chips. This is, of course, a far cry from the roughly $10 million startup cost the industry knew just a couple of decades ago, when the price was low enough to feed a vibrant startup market and provide real opportunities even for aspiring recent EE and computer science grads.
It is this very problem that is the subject of recent funding from DARPA and the Semiconductor Research Corporation, which have kicked in a total of $27.5 million toward a program that aims to make hardware startups more practical in terms of design complexity and cost. One of the efforts included in the funding is the University of Michigan’s new Center for Applications Driving Architectures, or ADA, a $32 million center that seeks to develop a “plug and play ecosystem to encourage a flood of fresh ideas in computing frontiers including autonomous control, robotics and machine learning,” according to the center’s director, U of M computer science and engineering professor Dr. Valeria Bertacco.
Bertacco makes the bold claim that there shouldn’t be a PhD requirement to design new computing systems. “Five years from now, I’d like to see freshly minted college grads doing hardware startups.”
That may seem like a very blue-sky vision for the near future, given a status quo of ever-rising chip design and production costs, but Bertacco says there are workarounds for some of the most expensive elements, as well as ways to counter some of the design complexity. By focusing on the algorithmic requirements of specific applications, she says, it is possible to build algorithmic hardware architectures, or reusable, efficient algorithmic accelerators for the common computational blocks.
“Instead of targeting the application itself, designs will target the underlying algorithms. Special-purpose hardware designs can improve the efficiency-per-operation by several orders of magnitude over a general-purpose chip. Such special-purpose hardware design occurs today, but it can take a decade after a need is identified before mature and efficient solutions are available, and it requires extremely specialized expertise,” Bertacco adds.
In essence, Bertacco says, instead of reinventing the hardware wheel, this approach raises the level of abstraction above some of the deeply technical silicon design issues, including timing and power optimization. “The idea is to take the high level application and use a specialized compiler that maps different parts of the application to different accelerators.” From a hardware point of view, this means compute itself becomes a packaging problem rather than one to be solved from the ground up.
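To make the mapping idea concrete, here is a minimal toy sketch of that compiler step. All names here are hypothetical illustrations, not the ADA toolchain: each computational block of an application is tagged with its underlying algorithmic pattern, and a mapping pass assigns it to a matching reusable accelerator, falling back to general-purpose cores when no accelerator fits.

```python
# Toy sketch of algorithm-to-accelerator mapping (hypothetical names,
# not an actual ADA tool). Blocks are classified by algorithmic pattern,
# then dispatched to a catalog of reusable accelerators.

from dataclasses import dataclass

@dataclass
class Block:
    name: str       # name of the computational block in the application
    algorithm: str  # underlying algorithmic pattern, e.g. "gemm", "fft"

# Assumed catalog of reusable algorithmic accelerators on the package.
ACCELERATORS = {
    "gemm": "matrix-engine",
    "fft": "fft-unit",
    "graph-traversal": "graph-engine",
}

def map_blocks(app):
    """Map each block to an accelerator by algorithm; fall back to CPU."""
    return {b.name: ACCELERATORS.get(b.algorithm, "general-purpose-cpu")
            for b in app}

app = [Block("conv-layer", "gemm"),
       Block("spectrogram", "fft"),
       Block("control-loop", "branchy-scalar")]

print(map_blocks(app))
# {'conv-layer': 'matrix-engine', 'spectrogram': 'fft-unit',
#  'control-loop': 'general-purpose-cpu'}
```

The point of the sketch is only the division of labor: the developer thinks at the application level, while the dispatch table (in a real system, a far more sophisticated compiler analysis) decides which algorithmic accelerator runs each piece.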
Newer developments in semiconductor engineering and production could also help once (and if) they become mainstream, including 2.5D technology, where a silicon interposer is used to mount different chips and package them together. The vision is that silicon companies would produce off-the-shelf processor cores and accelerators, and anyone could buy the interposer, mount their design, and save substantially by harnessing the chipmakers’ economies of scale for production. It is difficult to say how much this could lower costs, but a reasonable estimate is in the hundreds of thousands of dollars, if not close to a million. Again, this is spare change compared to the larger effort, but it does show that paths to cheaper chip production (and ideally a more competitive hardware startup market) are within reach.
What this does not address is the grand challenge. Many have grown accustomed to a world with a limited set of hardware vendors and only a few flourishing up-and-comers. The market share even of the smaller server processor efforts is still quite thin and will likely not top 25 percent in the coming years. However, for domain-specific processing, where FPGAs alone are not the right bet and CPUs leave too much performance on the table, this algorithmic approach of leaving much of the multi-accelerator offload to a tuned compiler makes sense.
“The idea is to blur the hardware and software dividing line,” Bertacco tells The Next Platform. “The goal is to think at the application level and consider how a compiler can automatically take pieces of the application that go to a specific accelerator to get the desired performance.” She says the future is defined by heterogeneous multiprocessors where existing accelerators play well together in a way defined by the application and compiler.
“Let’s stop doing accelerators that target an application; let’s do a machine learning or computer vision accelerator but move into a space where the acceleration is algorithmic.” It is here that Bertacco says the hardware startup opportunity lies even if it is not necessarily the silicon bootstrapping effort that has defined the last several decades.