MPI on Neuromorphic Hardware Shows Greater Promise

While there are not many neuromorphic hardware makers, those on the market (or in the research device sphere) are still looking for ways to run more mainstream workloads. The architecture lends itself to low power consumption and high computational efficiency, but mapping generalizable, high-value, graph-driven problems onto these devices is still an evolving art.

Among the limited cadre of neuromorphic devices is the SpiNNaker platform, a massively parallel Arm-based architecture designed with spiking neural networks in mind. The project got its start at the University of Manchester and has since handled specific offload for the European Human Brain Project, among other scientific computing endeavors.

Over time, the neuromorphic platform’s software stack has been enriched, most recently with MPI hooks meant to prove the architecture’s mettle on more generalizable workloads, with PageRank as the test case. The MPI piece of this is the most interesting because it opens up the range of applications the neuromorphic platform can tackle, especially in the scientific computing realm, one that is well-aligned with the research-oriented nature of the SpiNNaker effort.

Collaborators from Politecnico di Torino implemented the SpinMPI library, which provides both synchronization and core primitives for message passing. The library “allows users to easily port any MPI algorithm implemented for standard computers to the SpiNNaker neuromorphic platform, effectively acting as an interface between any C language, MPI-compliant program, and the native SpiNNaker communication framework, without the need to modify the original program.”
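To make that concrete, the kind of program SpinMPI is described as accepting is simply a standard C program written against the MPI interface. The sketch below is a generic, minimal example of that style, not code from the SpinMPI project itself; it uses only common MPI primitives that such a port would presumably rely on.

```c
/* Minimal, generic MPI-C sketch: the style of program SpinMPI is
 * described as accepting unmodified. Illustrative only; it uses
 * standard MPI calls, not any SpiNNaker-specific API. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;
    double seed = 0.0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Root produces a value; every core receives it via broadcast. */
    if (rank == 0)
        seed = 3.14;
    MPI_Bcast(&seed, 1, MPI_DOUBLE, 0, MPI_COMM_WORLD);

    printf("core %d of %d received %f\n", rank, size, seed);

    MPI_Finalize();
    return 0;
}
```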

“The highly efficient interconnection architecture in the SpiNNaker platform, besides being well-suited for SNN applications, shows promise for the low-power parallel execution of tasks in the edge computing domain. On the other hand, being a collection of massively parallel computation elements immersed in a distributed-memory environment with linearly scaling inter-core communication, the SpiNNaker architecture may in fact be the ideal silicon implementation for the MPI paradigm, to the extent that it might be worth consideration even for the realization of systems on a higher scale.”

While the PageRank benchmarking effort using the MPI library hit a few snags in terms of scalability, which we’ll get to in a moment, the point is that with functional MPI, neuromorphic devices based on the spiking neuron architecture can find a much wider set of applications to tackle. This would also include Intel’s Loihi neuromorphic architecture, which has been in the research phase for a number of years without significant hope of big commercial opportunity, at least not yet.

Scaling neuromorphic devices like SpiNNaker or Loihi to datacenter-class use cases will still take some footwork. Even with robust, optimized MPI for the SpiNNaker device, there were some scalability limitations. Specifically, the benchmarking team notes that the SpinMPI framework itself does not scale indefinitely, and the irregular computation times for PageRank compound as core counts rise.

Although neuromorphic devices might be well-aligned with MPI in principle, in practice there were some scalability gaps.

For instance, the team found limitations in balancing the computation time saved through parallelization against the cost of MPI broadcast communication, a trade-off that worsens at larger scale. They also found that variability in data placement and data size caused memory contention and uneven access times.
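To illustrate that trade-off, consider a distributed PageRank loop in which each core updates its own slice of vertices and the full rank vector is then re-shared before the next pass. The sketch below is a generic illustration, not the team’s implementation: it assumes a simple block decomposition and a toy ring-shaped graph purely to keep the code self-contained. The per-iteration collective at the bottom is the communication cost that grows with both graph size and core count, while the uneven in-degrees of a real graph would make the compute step irregular across cores.

```c
/* Sketch of a distributed PageRank loop under assumed block
 * decomposition. The graph here is a toy ring (each vertex links to
 * the next), chosen only so the example is complete and runnable. */
#include <mpi.h>
#include <stdlib.h>

#define N       1024    /* total vertices (illustrative) */
#define DAMPING 0.85
#define ITERS   20

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int chunk = N / size;               /* assumes N divisible by size */
    int lo = rank * chunk;              /* first vertex owned by this core */

    double *pr    = malloc(N * sizeof(double));      /* full rank vector */
    double *local = malloc(chunk * sizeof(double));  /* owned slice      */
    for (int i = 0; i < N; i++)
        pr[i] = 1.0 / N;

    for (int iter = 0; iter < ITERS; iter++) {
        /* Local compute: on a real graph the in-degree varies per vertex,
         * which is what makes this step uneven across cores. */
        for (int i = 0; i < chunk; i++) {
            int v  = lo + i;
            int in = (v - 1 + N) % N;   /* ring graph: single in-link, out-degree 1 */
            local[i] = (1.0 - DAMPING) / N + DAMPING * pr[in];
        }

        /* Every core needs the updated full vector for the next pass;
         * this collective exchange grows with N and with core count. */
        MPI_Allgather(local, chunk, MPI_DOUBLE,
                      pr, chunk, MPI_DOUBLE, MPI_COMM_WORLD);
    }

    free(local);
    free(pr);
    MPI_Finalize();
    return 0;
}
```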

None of these are fatal flaws. The team thinks it can keep plugging away at the SpinMPI library, especially by adding multicast (rather than broadcast) communication, to improve the outcomes for large graph problems like PageRank. With these tweaks and optimizations, we might start to see spiking neural network hardware find a wider set of use cases at larger scale, perhaps beyond the edge.
