The Second Coming of Neuromorphic Computing

Just a few years ago, the promise of ultra-low-power, high-performance computing was tied to the rather futuristic-sounding vision of a “brain chip” or neuromorphic processor, which could mimic the brain’s structure and processing ability in silicon, learning quickly and chewing through data as fast as it could be generated.

In that short span, broader attention has shifted to other devices and software frameworks for achieving the same end: custom ASICs designed to train and execute neural networks, reprogrammable hardware with a low-power hook such as FPGAs, and ARM cores, GPUs, and other non-standard processors. Against that backdrop, neuromorphic approaches are less likely to garner press, even though the work happening in this area is legitimate, fascinating, and directly in line with the wave of new deep learning, machine learning, in-situ analysis, on-sensor processing, and other capabilities that are rising to the fore.

The handful of neuromorphic devices that do exist are based on a widely variable set of architectures and visions, but the goal is the same: to create a chip that operates on the same principles as the brain. The goal that has not been met, however, is the delivery of a revolution in computing. But the full story has not played out quite yet; neuromorphic devices may see a second, perhaps tidal, wave of interest in coming years. To call it a second coming might not be entirely fair, since neuromorphic computing never really died off to begin with. What did dissipate, however, was the focus and wider attention.

There was an initial window of opportunity for neuromorphic computing, which opened as a few major funding initiatives were afoot. While these propelled critical research and the production of actual hardware devices and programming tools, attention cooled as other trends rose to the fore. Still, the research groups dedicated to exploring the range of possible architectures, programming approaches, and use cases have moved ahead, and now might be their time to shine once again.

There have been a couple of noteworthy investments that have fed existing research into neuromorphic architectures. The DARPA SyNAPSE program was one such effort; beginning in 2008, it eventually yielded IBM’s “True North” chip, a device with 4,096 neurosynaptic cores, each implementing 256 programmable “neurons” connected by synapses much as in the brain. The result is a highly energy-efficient architecture that, while fascinating, demands an entire rethink of programming approaches. Since that time, other funding from scientific sources, including the Human Brain Project, has pushed the area further, supporting the development of the SpiNNaker neuromorphic system, although there is still no single architecture that appears best for neuromorphic computing in general.

The problem is really that there is no “general” purpose for such devices as of yet, and no widely accepted device or programmatic approach. Much of this stems from the fact that many of the existing projects are built around specific goals that vary widely. For starters, some broader neuromorphic engineering projects are centered on robotics rather than large-scale computing applications (and vice versa). Among the computing-oriented efforts, Stanford University’s Neurogrid project, which was presented in hardware in 2009 and remains an ongoing research endeavor, set out to simulate the human brain, so both its programming approach and its hardware design are modeled as closely on the brain as possible. Others are oriented more toward computer science challenges around power consumption and computational capability using the same concepts, including a 2011 effort at MIT, work at HP on memristors as a key to building neuromorphic devices, and various smaller projects, including one spin-off of the True North architecture described here.

A New Wave is Forming

What’s interesting about the above-referenced projects is that their heyday appears to have been the 2009 to 2013 period, with a large gap until the present, even though research is still ongoing. Still, one can make the argument that the attention around deep neural networks and other brain-inspired (although not brain-like) programming and algorithmic trends might bring neuromorphic computing back to the fore.

“Neuromorphic computing is still in its beginning stages,” Dr. Catherine Schuman, a researcher working on such architectures at Oak Ridge National Laboratory tells The Next Platform. “We haven’t nailed down a particular architecture that we are going to run with. True North is an important one, but there are other projects looking at different ways to model a neuron or synapse. And there are also a lot of questions about how to actually use these devices as well, so the programming side of things is just as important.”

The programming approach varies from device to device, as Schuman explains. “With True North, for example, the best results come from training a deep learning network offline and moving that program onto the chip. Others that are biologically inspired implementations like Neurogrid, for instance, are based on spike timing dependent plasticity.”
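To make that distinction concrete, here is a minimal sketch of the kind of pair-based spike-timing-dependent plasticity (STDP) update such biologically inspired approaches rely on, written in plain Python. The amplitudes and time constants are illustrative assumptions, not parameters of True North, Neurogrid, or any other chip discussed here.

```python
import math

# Illustrative pair-based STDP rule: the change in a synaptic weight depends on
# the relative timing of a presynaptic and a postsynaptic spike (times in ms).
# Amplitudes and time constants below are assumed values for demonstration only.
A_PLUS, A_MINUS = 0.010, 0.012    # potentiation / depression amplitudes
TAU_PLUS, TAU_MINUS = 20.0, 20.0  # decay constants of the STDP window (ms)

def stdp_delta_w(t_pre, t_post):
    """Weight change for a single pre/post spike pair."""
    dt = t_post - t_pre
    if dt >= 0:   # pre fires before post: strengthen the synapse
        return A_PLUS * math.exp(-dt / TAU_PLUS)
    return -A_MINUS * math.exp(dt / TAU_MINUS)   # pre fires after post: weaken

# A pre-spike 5 ms before a post-spike potentiates; 5 ms after, it depresses.
print(stdp_delta_w(10.0, 15.0))   # ~ +0.0078
print(stdp_delta_w(15.0, 10.0))   # ~ -0.0093
```

The key contrast with the offline-training route is that weights here adapt on the device itself, driven only by spike timing rather than by a pre-trained model loaded onto the chip.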

The approach Schuman’s team is working on at Oak Ridge and the University of Tennessee is based on a neuromorphic architecture called NIDA, short for Neuroscience Inspired Dynamic Architecture, which was implemented on FPGAs in 2014 and now has a full SDK and tooling around it. The hardware implementation, called the Dynamic Adaptive Neural Network Array (DANNA), differs from other approaches to neuromorphic computing in that it allows the structure of the network itself to be programmed, and it is trained using an evolutionary optimization approach, again modeled as closely as possible on what we know (and still don’t know) about the way our brains work.
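The article does not detail DANNA’s toolchain, but the general shape of evolutionary optimization over a network can be sketched as follows: keep a population of candidate network “genomes,” score each against the task, and breed the best performers with mutation. The genome encoding, fitness function, and mutation scheme below are placeholders for illustration, not the ones used at Oak Ridge.

```python
import random

# Toy evolutionary loop over network "genomes" (here, simply lists of
# connection weights). The fitness function is a stand-in for task performance.
POP_SIZE, GENOME_LEN, GENERATIONS = 20, 16, 50

def fitness(genome):
    # Placeholder objective: prefer weights close to an arbitrary target pattern.
    target = [0.5] * GENOME_LEN
    return -sum((g - t) ** 2 for g, t in zip(genome, target))

def mutate(genome, rate=0.1, scale=0.2):
    # Perturb a fraction of the genes with small Gaussian noise.
    return [g + random.gauss(0, scale) if random.random() < rate else g
            for g in genome]

def crossover(a, b):
    # Single-point crossover between two parent genomes.
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]

population = [[random.uniform(-1, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for gen in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[:POP_SIZE // 4]          # keep the top quarter
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

print("best fitness:", fitness(population[0]))
```

In a real neuromorphic workflow the genome would also encode network structure (which neurons connect to which), not just weights, which is what makes the approach attractive for programmable-structure devices like DANNA.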

Schuman stresses the exploratory nature of existing neuromorphic computing efforts, including those at the lab, but does see a host of new opportunities for them on the horizon, presuming the programming models can be developed to suit both domain scientists and computer scientists. There are, she notes, two routes for neuromorphic devices over the next several years. The first is as embedded processors on sensors and other devices, given their low power consumption and high-performance processing. The second, and perhaps more important for a research center like Oak Ridge National Lab, is that neuromorphic devices could act “as co-processors on large-scale supercomputers like Titan today where the neuromorphic processor would sit alongside the traditional CPUs and GPU accelerators.” Where they tend to shine most, and where her team is focusing its effort, is the role they might play in real-time data analysis.

“For large simulations where there might be petabytes of data being created, normally that would all be spun off to tape. But neuromorphic devices can be intelligent processors handling data as it’s being created to guide scientists more quickly.”
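As a rough illustration of that in-situ pattern (and not a description of any existing Titan workflow), the sketch below shows a co-processor-style filter: a scoring function, standing in for inference offloaded to a neuromorphic device, decides which simulation snapshots are worth keeping rather than archiving everything to tape.

```python
import numpy as np

def anomaly_score(snapshot):
    """Stand-in for an inference call offloaded to a neuromorphic co-processor.

    Here we simply flag snapshots whose variance departs from a running
    baseline; a real deployment would run a trained spiking network instead.
    """
    return float(np.var(snapshot))

def in_situ_filter(snapshots, threshold=1.5):
    """Yield only the 'interesting' snapshots instead of archiving everything."""
    baseline = None
    for step, snap in enumerate(snapshots):
        score = anomaly_score(snap)
        baseline = score if baseline is None else 0.9 * baseline + 0.1 * score
        if score > threshold * baseline:
            yield step, snap   # keep this one for the scientist to inspect

# Example with synthetic data: mostly quiet fields, with an occasional burst.
rng = np.random.default_rng(0)
stream = (rng.normal(0, 3 if step % 100 == 0 else 1, size=(64, 64))
          for step in range(1000))
kept = [step for step, _ in in_situ_filter(stream)]
print(f"kept {len(kept)} of 1000 snapshots")
```

The point of the sketch is the data flow, not the scoring rule: the expensive simulation keeps running while a low-power device screens its output and only a small fraction of the data ever touches the filesystem.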

What is really needed for these potential use cases, beyond research like Schuman’s and the efforts underway at IBM, HP, and elsewhere, is the development of a richer programming and vendor landscape. One promising effort, from the Qualcomm-backed Brain Corporation, appears to be gaining some traction, even if it came to the neuromorphic device game slightly later than its competitors. Although its focus is more on robotics and sensors than on larger-scale computing and co-processing (the latter being encapsulated by Qualcomm’s coming Zeroth platform for machine learning, which is based on neuromorphic approaches), the team there is reported to have developed a neuromorphic device in silicon along with a companion software environment that serves as an interface for programmers.

Although the concept has been floating around since the 1980s and has been implemented in hardware across a number of projects, including some not mentioned here, the future of neuromorphic computing is still somewhat uncertain, even as an exploding range of applications puts it back in the spotlight. The small set of existing physical devices and an evolving collection of programming approaches are meeting a growing set of problems in research and enterprise, and this could very well be the year neuromorphic computing breaks into the mainstream.


5 Comments

  1. Brainchip Inc in California have developed true AI on a FPGA board with their IP. With hardware only, no software. It’s thousands of times faster than CPUs or GPUs or VPUs. It uses 1/1000 of the power too. Have a chat to Peter van der Made and update your article.

    • None of Brainchip’s claims have been independently validated.

      Neuromorphic circuits have been running on FPGAs for over 10 years. It is well within the capability of any competent electrical engineer.

  2. Wonderful article Nicole, as usual. It’s unfortunate you only mention HP in regards to memristors and neuromorphic computing when Knowm Inc actually sells them, has a complete framework for achieving memory, logic and machine learning with them, and has also been featured on the nextplatform.com. Other than my obvious disappointment that you failed to mention Knowm…keep up the good work! I certainly agree that a new wave is forming.

  3. Major Technological Advancement of BrainChip Artificial Intelligence – Autonomous Feature Extraction (AFE) System Developed
    In a major advancement to its existing and patented SNAP (Spiking Neuron Adaptive Processor) technology, the research and development team at BrainChip have completed development of a unique Autonomous Feature Extraction (“AFE”) system. Utilising the hyper-speed SNAP neural processor, the AFE system is able to process and learn complex and overlapping real-world digital features, and has been used on a range of input patterns and shapes.

    Highlights:
    • The AFE system is digital and hardware based (not a software program);
    • Using SNAP technology, it is able to process 100 million input events per second, which are distributed to thousands of dynamic synapses;
    • The hardware is commercially ready and demonstrable to potential licensees;
    • It has been established in a completely configurable design for use as a commercial product with other neural network designs;
    • The AFE system has substantial application opportunities in easily applied markets.

    CEO and inventor of BrainChip’s SNAP technology, Peter van der Made said “This is a major breakthrough in the high speed application of Artificial Intelligence systems and presents a world-first in cutting edge technology. Our AFE system is possibly the fastest autonomous learning network invented so far. The potential applications of the AFE system are considerable, and we look forward to integrating this exciting technology into a number of applications as the year advances”.
    Recent events and media coverage have highlighted areas of technological demand that BrainChip believes can be met by its AFE system, examples of which are:

    Financial Data Analytics
    Global hedge funds and brokerage firms, including JP Morgan Chase, have highlighted the opportunity that predictive analytics presents.
    http://www.wired.com/2016/01/the-rise-of-the-artificially-intelligent-hedge-fund/

    Sports Performance Analytics
    Sporting teams are embracing artificial intelligence to help analyze individual player performance and to predict in-game play scenarios.
    http://www.wired.com/2016/01/football-coaches-are-turning-to-ai-for-help-calling-plays/

    Border Protection Systems
    A system designed with multiple sensors, including a camera (for facial recognition), an artificial retina (for shape recognition) and an e-nose (for scent detection), would be deployable to detect persons of interest and contraband.

    Driver Experience Enhancement
    Car manufacturers, including Tesla, have highlighted their desire to improve the driving experience with autopilot devices and other significant improvements focused on bettering the driver experience from a number of angles, including safety.
    http://fortune.com/2015/10/16/how-tesla-autopilot-learns/

    Drone Safety
    The need for safer use of commercial drones in public airspace has driven the likes of Amazon and the US Government to investigate how AI can enhance the control of these vehicles deployed over flight paths.
    http://mashable.com/2015/11/30/amazon-prime-air-reality/#Zble1x1olkqH

    BrainChip’s AFE system can be configured to provide solutions not only to those mentioned as examples above, but across a significant number of sectors.
    About the Autonomous Feature Extraction system (AFE)

    The AFE system is comprised of a SNAP spiking neural network that learns from input patterns and performs autonomous feature extraction and data labeling for pattern recognition. The system can be used to process a wide range of real-world events and digital data from multiple sensors. The system can learn from both recorded and real-time input data. The learned features can be stored in a knowledge library. This library of learned behavior can be loaded onto further systems in order to instantaneously assimilate the learned features.

    The system is composed of three SNAP units. The first unit consists of sensory neurons that map input data to spike patterns that are distributed spatially and over time. Similar to a biological brain, these sensory neurons will fire spikes at different times depending on the input they receive. The outputs of the sensory neurons are then transmitted at a rate of up to 100 million events per second to the next unit, which performs autonomous feature extraction, also called unsupervised feature learning. These 100 million events are distributed to thousands of synapses in parallel, which are updated one million times per second. This unit learns the main features that characterize a given set of data. For example, it learns the features of letters and digits when the input data consists of handwritten characters.

    The autonomous feature extraction unit is composed of a spiking neural network that utilizes an unsupervised learning rule (e.g. Spike Timing Dependent Plasticity: STDP) and lateral inhibition, so that different neurons in the network learn different features. Lateral inhibition means that the first neuron to recognize a specific input pattern inhibits all others in the same layer. The learning process that occurs in this unit can run continuously or over a specific period of time. The SNAP learning method has proven to be very fast.

    The output of the autonomous feature extraction unit is forwarded to the labeling unit. This unit consists of a spiking neural network that is trained in a supervised way to map learned features to output labels. For the handwritten characters example, the output labels could be letters and digits, or complete words that the system has learned to recognize. The output labels can be used in an external device like a Central Processing Unit (CPU) for post-processing.
     
    [Figure 1: Schematic diagram of BrainChip’s Autonomous Feature Extraction System]
    The system is implemented in digital hardware using BrainChip’s unique parallel technology. All neurons and synapses are updated at a rate of one million per second. BrainChip’s spiking neural network connectivity can be externally configured. Synapses are dynamic, that is, the properties of synapses change over time using the STDP learning rule. The output of many synapses is integrated by dendrites and a soma. The output of a soma is fed to a neuron’s axon that emits one or more output spikes when a previously learned pattern is recognized. The spike from an axon is transmitted to the connected synapses using a proprietary fast communication protocol.
     
    [Figure 2: Method of input processing at 100 Million events per second]
    Peter van der Made continued, “BrainChip is very pleased and excited to unveil this major technological breakthrough in Artificial Intelligence technology, as a significant advancement of its patented neural processor, SNAP. We will be providing the market with a visual demonstration of our AFE system, alongside Milestone 3 later in the quarter. We look forward to providing further updates on the application of this exciting new technology through the year.”
