On the Fringes of Useful Neuromorphic Scalability

When it comes to novel computing architectures, whether quantum, deep learning, or neuromorphic, it can be tricky to get a handle on how incremental increases in processor counts translate into real-world improvements, since these bumps in element counts often have no clean parallel to CPUs or even GPUs.

We can all understand adding cores and upping the transistor tally, but what does such a jump mean for, say, a quantum annealing machine? It’s complicated because the added complexity lands in different parts of the hardware and software stack, so while there might be more raw capability, it does not come without tradeoffs, even if only in terms of programmability.

And there’s another issue to consider when we talk about adding increments of computing capability, one that is even more difficult to quantify: how well does the addition of those compute elements translate into practical, real-world value for actual applications, especially when, for many novel devices, the count of production-level applications is still slim? That is the question, because without clarity here soon, it will be increasingly difficult to see a future for the neuromorphic efforts that exist now. There is some product on the market, but not much, and from our view, Intel is the only one with a credible offering at scale. So, is there a potential market?

One of the “cleanest” stories in scalability for unique architectures comes from the neuromorphic world, in that adding more neurons and computational capability does not mean a complete rewrite of algorithms or a total change in how one thinks about using such devices. This is according to Brad Aimone, Principal Member of the Technical Staff in the Cognitive and Emerging Computing Group at Sandia National Laboratories, who helps lead the Neural Exploration Research Lab, focused on neural algorithms and architectures.

A close-up shot of an Intel Nahuku board, each of which contains 8 to 32 Intel Loihi neuromorphic chips. Intel’s latest neuromorphic system, Pohoiki Beach, is made up of multiple Nahuku boards and contains 64 Loihi chips. Pohoiki Beach was introduced in July 2019. (Credit: Tim Herman/Intel Corporation)

Aimone and his team at the lab entered into a three-year agreement with Intel to explore the potential of neural algorithms on scaled-up neuromorphic architectures, focusing on the Nahuku boards, which contain between 8 and 32 Loihi neuromorphic chips inside the Pohoiki Beach systems (more on the boards and machines here). While a fully loaded Pohoiki system can provide 100 million spiking neurons, Sandia went with the lower chip count (8) and has a total of 50 million neurons at the ready, something that is new for the team, which has investigated a number of neuromorphic devices over the last several years, including chips designed for AI as well as the expected research-focused devices (SpiNNaker and BrainScaleS, most notably) and IBM’s TrueNorth architecture.

Aimone says that scalability from an applications and programmability standpoint is not beholden to some of the same rules that govern other devices, including GPUs. “I anticipate it’s going to scale pretty easily since with neural algorithms we basically lay out a circuit, kind of like programming a big FPGA. Since we’re laying that algorithm as a circuit over the chips, adding more chips means we can put a bigger circuit on. Some of the challenges you’d see with something like an HPC system with MPI, for example, we don’t have to deal with.” He adds that there are already several codes ready to go as the team begins exploring what is possible with added scale.
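To make the FPGA analogy concrete, here is a minimal, hypothetical Python sketch of what “laying a circuit over the chips” can look like. It does not use Intel’s actual SDK; the per-chip neuron budget, the `Circuit` type, and the `place` function are all illustrative assumptions. The point is that the algorithm is described once as a circuit, and adding chips simply enlarges the pool it can be placed on.

```python
# Hypothetical sketch (not Intel's SDK): statically mapping a fixed spiking
# circuit onto a pool of neuromorphic chips, FPGA-style.

from dataclasses import dataclass

@dataclass
class Circuit:
    n_neurons: int            # total neurons the algorithm needs
    synapses_per_neuron: int  # average fan-out (shown for context, unused here)

NEURONS_PER_CHIP = 130_000    # assumed per-chip budget, roughly Loihi-class

def place(circuit: Circuit, n_chips: int) -> dict[int, range]:
    """Block-partition the circuit's neurons across chips, or fail if it does
    not fit. No MPI-style communication logic is needed: the mapping is a
    static placement, like laying logic out on an FPGA."""
    capacity = n_chips * NEURONS_PER_CHIP
    if circuit.n_neurons > capacity:
        raise ValueError(f"circuit needs {circuit.n_neurons:,} neurons, "
                         f"but {n_chips} chips only offer {capacity:,}")
    per_chip = -(-circuit.n_neurons // n_chips)  # ceiling division
    return {chip: range(chip * per_chip,
                        min((chip + 1) * per_chip, circuit.n_neurons))
            for chip in range(n_chips)}

# The same circuit, unchanged, simply spreads over more chips as they are added.
small = place(Circuit(n_neurons=1_000_000, synapses_per_neuron=100), n_chips=8)
large = place(Circuit(n_neurons=40_000_000, synapses_per_neuron=100), n_chips=512)
```

Only the placement changes with chip count; the circuit description, and therefore the algorithm, does not.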

Looking ahead at increased capability, Aimone points to two things his team is looking forward to. “When it comes to neural algorithms, it is important they exist at scale. You can take a small-scale neural algorithm and make essentially a mini-app and put it on the smaller platforms we have but ultimately, for real-world impact we need a lot of neurons—bigger is fundamentally better here.” He says aside from the application angle, seeing what’s possible from the hardware with the shift to millions of neurons is another exciting prospect. “Whether it’s AI or numerical applications, a few hundred thousand neurons is a toy example. In the many millions we expect to see a real impact and we’re exploring that now.”

But is 50 million or even 100 million spiking neurons enough to do anything truly revolutionary that could displace even some traditional computation?

Aimone says it’s still an open question whether the problem sets that can be tackled even with tens of millions of neurons (50-100 million, roughly the computational capability of a mouse, although that is a far more nuanced comparison than it sounds) will cast a sufficiently wide and high-value application net. Considering that the R&D investment needed to keep scaling will be hefty (as it is in quantum, for that matter), the team will be seeing which high-impact applications could benefit, even if just from an efficiency perspective. This matters in scientific computing because the exascale era will mean massive power consumption and a need to look to more efficient, capable architectures. But for now, the exploration of where the applications are that could make neuromorphic a viable business for Intel is still ongoing.

“We are seeing that certain numerical tasks (linear algebra operations, for example), when performed at a large enough scale of neurons, start doing better, at least in theory, than traditional systems in terms of efficiency. There are definitely some use cases out there, more at the formal computing level, and also in AI.” He adds that the bulk of brain-inspired algorithms only make an impact at large scale and with things akin to episodic memory (content-addressable memory, much like the brain’s).
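To give a rough sense of the kind of numerical task he is describing, the sketch below shows a rate-coded spiking approximation of a matrix-vector product. This is a common textbook framing rather than Sandia’s or Intel’s actual implementation, and the function name and parameters are illustrative: input values are encoded as spike probabilities, the synaptic weight matrix accumulates the weighted spike events, and the totals are decoded back into values.

```python
# Illustrative only: a rate-coded spiking approximation of y = W @ x.
import numpy as np

rng = np.random.default_rng(0)

def spiking_matvec(W: np.ndarray, x: np.ndarray,
                   timesteps: int = 20_000, max_rate: float = 0.5) -> np.ndarray:
    rates = np.clip(x, 0.0, 1.0) * max_rate               # spike probability per step
    spikes = rng.random((timesteps, x.size)) < rates       # stochastic spike trains
    weighted = spikes @ W.T                                # synaptic accumulation per step
    return weighted.sum(axis=0) / (timesteps * max_rate)   # decode spike counts to values

W = rng.normal(size=(4, 6))
x = rng.random(6)
print(spiking_matvec(W, x))
print(W @ x)  # the spiking estimate should track the exact product closely
```

The efficiency argument is that spikes only move (and only cost energy) where there is activity, so sparse or event-driven inputs cost correspondingly less, something a dense matrix-vector product on a conventional machine cannot exploit as directly.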

For high-value physics simulations, increased scalability is key. “Imagine building a graph with a million mesh points for some numerical simulation with 20-50 neurons per mesh point. A million-point simulation can give reasonable resolution, not incredible, but workable. But it is really more like a billion to reach the level we would want for high-resolution physics simulations.”
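Running those numbers gives a sense of the gap between today’s systems and what high-resolution physics would demand; the short snippet below just does the arithmetic implied by the quote.

```python
# Back-of-envelope neuron budgets implied by the quote above: 20-50 neurons
# per mesh point, for a million-point mesh versus a billion-point mesh.
for mesh_points in (1_000_000, 1_000_000_000):
    for neurons_per_point in (20, 50):
        total = mesh_points * neurons_per_point
        print(f"{mesh_points:>13,} points x {neurons_per_point:>2} neurons/point "
              f"= {total:,} neurons")
```

A million points at 20 to 50 neurons each lands at 20 to 50 million neurons, roughly the scale of the system Sandia has in hand, while a billion-point mesh pushes the budget into the tens of billions.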

Aimone and team have worked with TrueNorth, SpiNNaker, BrainScaleS, and other chips, but he says the difference with the Loihi-based architecture is that it started a bit later, with scalability in mind, so features like adding more neurons and chips while keeping learning on the chip itself in a dense architecture make Intel’s offering stand out. “There’s something useful about having your hands on the ability to have the whole system right there, where you’re not dealing with scheduling and so forth. It’s the only system we’ll be able to use at real scale and while the others are scalable, there are differences in how they scale and their communication architectures.”

If neuromorphic computing has a commercial future for large-scale datacenter use cases (the edge/AI side of this story is entirely different), massive scalability will be needed. This might sound like an obvious statement since it is generally true across computing, but the difference here is that scalability has not been the utmost focus of any of the hardware devices or research efforts we have seen to date, and where it has been, the emphasis has leaned heavily toward research and application/neural algorithm development. It could be that there is a real opportunity for neuromorphic architectures in the datacenter, and if so, Intel has pulled ahead of anything else out there with its Loihi-based scalable architecture.


3 Comments

  1. Scaling it up vertically with multiple stacks of thousands of layers will make it really “clean” and more brain-like. Luckily, NAND is already going in that direction, so 3D neuromorphic designs could build on that work. Maybe 100 billion “neurons” in a less than 1 liter pile of cubes is possible.

    Getting a single “neuron” right could be the real challenge, and the key to creating “strong AI” rather than just a brain-inspired algorithm runner:

    https://spectrum.ieee.org/nanoclast/semiconductors/devices/memristor-first-single-device-to-act-like-a-neuron

  2. What about BrainChip’s contribution to the neuromorphic effort, and does it project its algorithms to reach as far as Intel’s?

    • Last I understood they were tackling edge stuff. One of many AI chip startups that decided there was no more room in the datacenter and they’d hedge with automotive and edge. More opportunity? Maybe. Lower margins. Oh goodness, yes.
