Neuromorphic Processors Leading a New Double Life

Not so many years ago, neuromorphic processors were being touted as the key to ultra-efficient computing, especially following work between DARPA and IBM on new chips that emulated the synaptic system of the human brain.

The product behind this effort was Big Blue’s “True North” architecture which, viewed as an actual brain on a chip, sported one million “neurons” and 256 million programmable synaptic units that relay data among them. The final product, though it never attracted much attention beyond its role as a brilliant IBM research endeavor, drew only around 70 milliwatts while performing the heavy processor lifting of approximately 45 billion synaptic operations per second. All of this is impressive, but as one might imagine, the chip had the critical flaw of being applicable to only a select number of real-world applications.
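To put those figures in perspective, a quick back-of-envelope calculation (our own arithmetic from the numbers above, not an IBM benchmark) shows just how far into picojoule-per-operation territory that power envelope reaches:

```python
# Efficiency figures derived from the numbers quoted above.
power_watts = 0.070          # ~70 milliwatts
synaptic_ops_per_sec = 45e9  # ~45 billion synaptic operations per second

ops_per_watt = synaptic_ops_per_sec / power_watts
joules_per_op = power_watts / synaptic_ops_per_sec

print(f"~{ops_per_watt:.2e} synaptic ops per watt")  # ~6.43e+11
print(f"~{joules_per_op:.2e} joules per operation")  # ~1.56e-12, picojoule range
```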

The technology is coming to life again in some interesting places, driven in part by trends in the general computing world that are perfectly aligned with what True North does well—image and pattern recognition. Further, select applications in that wider class that are fed by sensor data are an even better fit because of the way the processor addresses—and then learns from—patterns in those meshed data streams.

On that note, the stunted adoption of True North might have been a case of a technology arriving too early for a wider market: it preceded the new generation of sensor-fed massive data streams where large-scale pattern recognition is critical. The ability to use a low-power processor to quickly comb through those streams and detect matches now has a potential user base, so why did True North seem to go south?

For one thing, this isn’t the only architecture in recent memory to make similar claims for the same types of applications. And for that matter, several new processing technologies are on the horizon or in various stages of development. From Micron’s unique Automata processor to the optical processors being deployed as prototypes at the Genome Analysis Centre, there is a growing cadre of low-power processors that tear down von Neumann architectural barriers to target select applications.

The other answer is that a lot of that work is being done in memory and in software, but that is a separate conversation. The real question now is how IBM and DARPA’s amazing little postage-stamp-sized brain on a chip might find new life doing the most meta of all possible things: serving as a small supercomputer that runs simulations of other, much larger supercomputers in order to make those massive machines more efficient and reliable.

Even for those who do not follow Department of Energy news around large supercomputing efforts, it should be clear that predicting and eliminating the impact of component failures is important for future exascale systems, in part because there are simply so many potential points of failure that can bring an application running across a massive cluster to a standstill. The wasted energy and cycles can add up to hundreds of thousands of dollars over the course of a year on existing systems, a problem that compounds as the breadth of applications running on ever-larger core counts grows.
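To make that figure concrete, here is a rough, purely illustrative sketch of how such waste accrues; the power draw, utility rate, and lost hours below are assumptions made for the sake of the arithmetic, not measurements from any DOE system:

```python
# Illustrative (assumed) figures for failure-induced waste on a large cluster;
# none of these numbers come from a specific machine.
system_power_mw = 10          # megawatts drawn by a leadership-class system
electricity_usd_per_mwh = 60  # assumed utility rate
lost_hours_per_year = 300     # assumed hours of wasted or rerun computation

wasted_usd = system_power_mw * electricity_usd_per_mwh * lost_hours_per_year
print(f"~${wasted_usd:,.0f} per year in wasted energy alone")  # ~$180,000
```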

To address this, and to work toward better supercomputer designs for the future, a team of researchers at Rensselaer Polytechnic Institute led by Christopher Carothers, director of the institute’s Center for Computational Innovations, described for The Next Platform how True North is finding a new life as a lightweight snap-in on each node. The chip can take in sensor data from the many components prone to failure inside, say, a dense 50,000-node supercomputer (like the one coming online in 2018 at Argonne National Lab) and alert administrators (and the scheduler) to potential failures. This can minimize downtime and, more important, allow the scheduler to route around the likely failures, shutting down only part of a system rather than an entire rack.
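As a rough conceptual sketch, here is what such a per-node snap-in might look like written as a plain polling loop; the sensor names, thresholds, and notify_scheduler hook are all hypothetical, and in the actual design the scoring would be done by the spiking network on the neuromorphic chip rather than host-side code:

```python
import time

# Hypothetical per-node health monitor: poll component sensors, score the
# readings, and warn the scheduler before a component actually fails.
SENSORS = ["cpu_temp_c", "dimm_ce_count", "fan_rpm", "nic_crc_errors"]

def read_sensors():
    # Placeholder: a real snap-in would pull these from BMC/IPMI telemetry.
    return {name: 0.0 for name in SENSORS}

def failure_risk(readings):
    # Placeholder for the learned model; illustrative thresholds only.
    risk = 0.0
    if readings["cpu_temp_c"] > 90:
        risk += 0.4
    if readings["dimm_ce_count"] > 100:  # correctable-error storm
        risk += 0.4
    if readings["nic_crc_errors"] > 50:
        risk += 0.2
    return risk

def notify_scheduler(node_id, risk):
    print(f"node {node_id}: predicted failure risk {risk:.2f}; "
          f"advise routing work off this node")

while True:
    risk = failure_risk(read_sensors())
    if risk > 0.5:
        notify_scheduler("node-0042", risk)
    time.sleep(10)  # poll interval
```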

A $1.3 million grant from the Air Force Research Laboratory will allow Carothers and team to use True North as the basis for a neuromorphic processor that will be used to test large-scale cluster configurations and designs for future exascale-class systems, as well as to test how a neuromorphic processor would perform at that scale as a co-processor managing a number of system elements, including component failure prediction. Central to this research is the addition of new machine learning algorithms that will help neuromorphic processor-equipped systems not only track potential component problems via the vast array of device sensors, but learn how these failures occur (for instance, by tracking “chatter” in these devices and recognizing that uptick in activity as a sign that certain elements are weakening).
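One simple way to picture the “chatter” idea is baseline-relative anomaly detection: flag a component when its event rate climbs well above its own recent history. The z-score detector below is a hypothetical stand-in for whatever the trained model actually learns, not the team’s algorithm:

```python
from collections import deque
from statistics import mean, stdev

WINDOW = 60  # number of recent samples forming the baseline

def make_detector(threshold=3.0):
    history = deque(maxlen=WINDOW)
    def observe(events_per_sec):
        # Flag a reading that sits far above the component's own baseline.
        anomalous = False
        if len(history) >= 10:
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and (events_per_sec - mu) / sigma > threshold:
                anomalous = True  # chatter uptick: component may be weakening
        history.append(events_per_sec)
        return anomalous
    return observe

detect = make_detector()
for rate in [2, 3, 2, 2, 3, 2, 3, 2, 2, 3, 2, 40]:  # sudden burst at the end
    if detect(rate):
        print(f"chatter spike: {rate} events/sec vs recent baseline")
```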

These and other current and future processors are being simulated on an existing large-scale BlueGene supercomputer at the center. This means the team can not only get a reasonable approximation of the performance of future neuromorphic machines, but also swap in different configurations of GPUs, CPUs, and processors that have not yet hit production to see how they might perform at different scales. The fun part here is that we know of a system and companion simulation software capable of modeling some complex configurations of the upcoming supercomputers that will make up the first series of pre-exascale machines. These feature a known set of GPU, CPU, memory, and latency characteristics that can be tuned and optimized, and thus modeled at scale in the RPI simulation if desired.
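Conceptually, that kind of design-space exploration looks like a parameter sweep over node configurations. The sketch below is a toy analytic stand-in, with made-up parameter names and a made-up cost model; the actual RPI work runs full discrete-event simulations rather than a closed-form formula:

```python
from itertools import product

# Hypothetical sweep over node configurations for a simulated machine.
gpus_per_node   = [4, 6]
link_latency_us = [0.5, 1.0]
node_counts     = [25_000, 50_000]

def estimate(cfg):
    # Toy model: more GPUs raise peak throughput, while network latency and
    # scale erode parallel efficiency. A real simulator replays traffic.
    peak = cfg["nodes"] * cfg["gpus"] * 10.0  # arbitrary teraflops per GPU
    efficiency = 1.0 / (1.0 + cfg["latency"] * (cfg["nodes"] / 10_000))
    return peak * efficiency

for g, lat, n in product(gpus_per_node, link_latency_us, node_counts):
    cfg = {"gpus": g, "latency": lat, "nodes": n}
    print(cfg, f"-> ~{estimate(cfg):,.0f} sustained TF (toy model)")
```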

We will follow up with Carothers at another point to dig into some of their findings for ideal supercomputer configurations, but for now, it is at least noteworthy that the potential performance of neuromorphic processors is not based on mere “pie in the sky” projections. With a known set of variables, tunables, and specs, the team can gauge what processor technologies are worth a second look—and which might not stack up to expectations.
