Stanford Brainstorm Chip Hints at Neuromorphic Computing Future

If the name Kwabena Boahen sounds familiar, you might remember the silicon that emerged in the late 1990s to emulate the human retina.

This retinomorphic vision system, which Boahen developed while at Caltech under VLSI and neuromorphic computing pioneer Carver Mead, introduced ideas that have come back into full view over the last couple of years: computer vision, artificial intelligence, and, of course, brain-inspired architectures that route for efficiency and performance. The rest of his career has focused on bringing bioinspired engineering to a computing industry that will hit a major wall in the coming years, and at a time when there is more data to work with than ever before.

“It’s not about being at the right place at the right time,” Boahen says of a career spent focused on neuromorphic computing that has only recently found commercial promise. “It’s having been in the right place the whole time and staying there.” Traces of his work on the retina and cochlea are clear in the software approaches he and his teams have created to handle whole-brain models, and these were carried forward into Neurogrid, one of the most successful neuromorphic chip research projects and the basis for the next iteration of neuromorphic designs they are bringing to bear.

As the end of Moore’s Law draws near, the emphasis on pushing past CMOS-derived devices is increasing. From next-generation supercomputers that are required to sport a novel architecture to custom ASICs that can tackle an array of emerging workloads, it is still any architecture’s game when it comes to future dominance. Alongside quantum computing, FPGA-based designs, dense GPU-accelerated systems, and completely novel architectures sits neuromorphic computing, something we have covered extensively over the last couple of years. While it did not garner much attention beyond a few research efforts, the last year has brought a fresh wave of interest. Even Intel declared a few months ago that it is kicking its own neuromorphic efforts into high gear.

There are no real commercial neuromorphic offerings yet, but we are getting closer. IBM has had its TrueNorth architecture for many years, and other projects, as described here and here, could be produced at some scale. Whether the manufacturability, reliability, and programmability will be mature enough for production at scale remains to be seen, but researchers at Stanford (where the Neurogrid research effort is rooted) have just taped out a new 28 nanometer device that could showcase the next generation of neuromorphic capabilities.

Boahen has continued his work over the last several years at Stanford as a bioengineering professor, leading teams on new brain-derived computing projects, including Neurogrid. The forthcoming device his group has developed at the Brains in Silicon lab at Stanford, called “Brainstorm,” will be a million-neuron neuromorphic device that can run whole-brain models. The project has been in the works since 2013 and is funded by the Office of Naval Research, which is no surprise since it is expected to serve embedded applications and, once clustered, server workloads as well.

As one might imagine, Boahen cannot say too much about the fab and other design issues at the heart of Brainstorm, but he did tell us that there are important differences between Brainstorm and other neuromorphic designs, and he released one of the most comprehensive papers on neuromorphic device futures this month. In “A Neuromorph’s Prospectus,” Boahen charts the end of Moore’s Law, discusses when those limitations will be keenly felt (and why), and makes an extensive case for neuromorphic chips along architectural, application, and programming routes.

In a conversation with The Next Platform, Boahen pointed to the “primitive and brute force” nature of modern processors as well as the architectures that allow performance, efficiency, and scalability. “Many of the current neuromorphic devices are using routing mechanisms pulled from supercomputers, like meshes. The problem with a mesh architecture is that you can only send a signal from one point to another. If you want to send a signal to many at once, the system will deadlock.” The brain does not operate this way, and unlike modern chip architectures, it is not designed for perfection in routing, but for distributed efficiency. With a more hierarchical, branching model like the brain’s, it is possible to get far more performance for much less energy, something that is very attractive for those looking toward ultra-smart embedded applications (and, eventually, their clustered server-side counterparts).
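To make the routing argument concrete, here is a minimal back-of-the-envelope sketch in Python. The hop-count model (average path length equal to the mesh side, a fan-out-four tree) is our own simplifying assumption, not a description of Brainstorm’s actual router; it simply shows why one-to-many delivery scales so differently on the two topologies.

```python
import math

def mesh_multicast_hops(n_targets: int, avg_path_hops: int) -> int:
    """Point-to-point mesh: every target needs its own copy of the spike,
    each traversing an independent path from the source."""
    return n_targets * avg_path_hops

def tree_multicast_hops(n_targets: int, fanout: int = 4) -> int:
    """Branching tree: the spike is copied at each branch point, so total
    traffic is the number of tree edges touched, not one full path per target."""
    reach, hops = 1, 0
    while reach < n_targets:
        reach *= fanout          # fanout**level branches at this level
        hops += reach
    return hops

if __name__ == "__main__":
    for n in (16, 256, 4096):
        side = math.isqrt(n)     # average path length ~ mesh side
        print(f"{n:5d} targets: mesh ~{mesh_multicast_hops(n, side):7d} hops,"
              f" tree ~{tree_multicast_hops(n):6d} hops")
```

Under these toy assumptions, broadcasting to 4,096 targets costs the mesh roughly fifty times more link traversals than the tree, and, just as important for deadlock avoidance, only a single copy ever leaves the source.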

Boahen says that the Brainstorm chip is the first to implement spiking neural networks that have been synthesized from a high-level description, not unlike how FPGA programmers map a problem onto a device. Now that the chip will be in Stanford’s hands in the near future, the goal is to build the software stack that lets researchers map complex problems to it. “We want to raise the level of abstraction so we can take an application, as with one of my collaborators in brain-machine interfaces, who has algorithms that can record neural spikes that help to infer what a paralyzed person wants to do next. These can be executed with a robot arm.” They are co-developing the hardware and software, writing a high-level description so the synthesis tool can configure the chip.
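The article does not spell out Brainstorm’s toolchain, so the sketch below is purely hypothetical: every class and function name is invented for illustration. It only shows the shape of an FPGA-style flow, where a high-level description of neuron populations is handed to a synthesis step that produces a device configuration, in the spirit of neural compilation frameworks such as Nengo.

```python
from dataclasses import dataclass

@dataclass
class PopulationSpec:
    """High-level description of one group of spiking neurons (hypothetical)."""
    name: str        # label, e.g. the decoder stage of a BMI pipeline
    n_neurons: int   # physical neurons to allocate on the chip
    dimensions: int  # dimensionality of the represented value

@dataclass
class ChipConfig:
    """Stand-in for the synthesis result (placement and routing omitted)."""
    populations: list[PopulationSpec]

def synthesize(specs: list[PopulationSpec]) -> ChipConfig:
    """Toy 'synthesis tool': check the design fits the neuron budget and emit
    a configuration. A real flow would also place populations on cores and
    program routing tables."""
    total = sum(s.n_neurons for s in specs)
    if total > 1_000_000:  # Brainstorm-class budget: one million neurons
        raise ValueError(f"design needs {total} neurons, chip has 1M")
    return ChipConfig(populations=specs)

# Brain-machine-interface-flavored usage: infer intended motion from
# recorded spikes, then drive a robot arm controller.
design = [
    PopulationSpec("intent_decoder", n_neurons=20_000, dimensions=3),
    PopulationSpec("arm_controller", n_neurons=5_000, dimensions=3),
]
config = synthesize(design)
print(f"synthesized {len(config.populations)} populations")
```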

The class of problems such a chip can tackle can be described mathematically by multidimensional nonlinear differential equations, or how things change over time based on a current state and present input. “We are developing a framework now where you write down these nonlinear differential equations for a task and we auto-map that to a network of spiking neurons. There will also be a formal way of encoding and decoding those into the neurons so the system will be, in real time, doing this and other cognitive, goal-driven work in the middle.”
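Here is a minimal sketch of that encode/solve/decode idea in Python, in the spirit of the Neural Engineering Framework. The neuron model, parameters, and the example equation are our own assumptions, not Brainstorm’s actual mapping, and we use the neurons’ steady-state firing rates rather than simulating individual spikes to keep it short.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200                                       # neurons representing one variable x
encoders = rng.choice([-1.0, 1.0], size=N)    # each neuron's preferred direction
gains = rng.uniform(5.0, 15.0, size=N)
biases = rng.uniform(-2.0, 2.0, size=N)

def rates(x):
    """Steady-state LIF firing rates for represented value x (the encoding)."""
    j = gains * encoders * x + biases         # input current per neuron
    tau_ref, tau_rc = 0.002, 0.02
    out = np.zeros(N)
    active = j > 1.0                          # only supra-threshold neurons fire
    out[active] = 1.0 / (tau_ref + tau_rc * np.log1p(1.0 / (j[active] - 1.0)))
    return out

# Decoding: least-squares solve for d so that rates(x) @ d ~ x over [-1, 1].
xs = np.linspace(-1.0, 1.0, 101)
A = np.stack([rates(x) for x in xs])          # activity matrix, one row per x
d = np.linalg.lstsq(A, xs, rcond=None)[0]

# Use the population to integrate a nonlinear ODE, dx/dt = u - x**3,
# reading the state out of neural activity at every Euler step.
dt, x, u = 0.001, 0.0, 1.0
for _ in range(5000):
    x_hat = rates(x) @ d                      # decode current state
    x += dt * (u - x_hat ** 3)
print(f"decoded state after 5 s: {rates(x) @ d:.3f} (analytic fixed point: 1.0)")
```

A production framework would fold the nonlinearity into the decoders themselves and close the loop through synaptic weights, so the dynamics run entirely in the spiking substrate; this sketch keeps the Euler step outside the neurons for brevity.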

Work will continue on Neurogrid, which is used in neuroscience research and forms the basis of the latest Brainstorm chip and software approach, but Brainstorm is a broader, application-based effort. Boahen says his team is not working with partners at Intel or others on this project, although many of the Neurogrid developers had worked on IBM’s TrueNorth neuromorphic architecture. While he admits there are not yet any commercially viable devices, we are just beginning to see that neuromorphic devices could fit the bill for the post-Moore’s Law era.

One should not be fooled by the embedded application focus of Brainstorm, however. Like the brain, the focus is on inherent scalability across devices. While clustering neuromorphic devices together for large-scale supercomputing or other applications is still some ways off, we can expect to see a new wave of research featuring a host of applications once Boahen’s Stanford team has silicon in hand.


1 Comment

  1. BrainChip has been engineering one-to-many architectures in neuromorphic computing since 2004. We have Spiking Neural Networks doing useful work in commercial installations at casinos, airports, aircraft assembly facilities, manufacturing, and more, so I beg to differ: there are commercial installations of Spiking Neural Networks. BrainChip devices are unique in that they use Dynamic Learning, a proprietary implementation of STDP, rather than Deep Learning. Our US patent on this technology dates from 2008.
