An OS for Neuromorphic Computing on Von Neumann Devices

Ziyang Xu from Peking University in Beijing sees several similarities between the human brain and Von Neumann computing devices.

While he believes there is value in neuromorphic, or brain-inspired, chips, he argues that with the right operating system, standard processors can mimic some of the efficiencies of the brain and achieve similar performance on certain tasks.

In short, even though our brains lack the high-speed, high-frequency capabilities of modern chips, the way information is routed and addressed is the key. At the core of this efficiency is something akin to a policy engine governing how information is compressed, stored, and retrieved.

To achieve brain-like performance, Xu says the cognitive kernel and the storage system are the two areas that hold the most promise. “Separating the cognitive kernel from the processing unit, together with parallel processing units, makes information processing and calculation more like the human brain, which may improve the efficiency of the system.” Even with limited “storage” in our brains, we are able to hold onto and transmit vast amounts of information. For Xu’s brain-inspired operating system, a storage system with lossy compression would allow more data to be held in the system.

In his vision of an OS for a Von Neumann-based neuromorphic computing approach, Xu says storage policies are a key piece. He likens data that enters the storage system and is never used again to “sinking.” “When one piece of information reaches the bottom of the storage system, it may lose all the connections with upper layers and cannot be retrieved in normal ways by the OS,” he explains. “But it has not completely disappeared; an event (like a strong request) may cause another, new connection to be established so the information can be used again.”
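To make the mechanics concrete, here is a minimal Python sketch of such a layered store, under the assumption of a fixed number of layers and a severed-link bottom; the `SinkingStore` class, its method names, and the re-linking `strong_request` are hypothetical illustrations rather than details of Xu's actual design.

```python
from dataclasses import dataclass

@dataclass
class Item:
    key: str
    payload: bytes
    layer: int = 0            # 0 = top (hot); higher = deeper (colder)

class SinkingStore:
    BOTTOM = 3                # depth at which links to the index are severed

    def __init__(self):
        self.index = {}       # key -> Item: the "normal" retrieval path
        self.sunk = []        # unlinked items, invisible to normal lookups

    def put(self, key: str, payload: bytes) -> None:
        self.index[key] = Item(key, payload)

    def get(self, key: str):
        item = self.index.get(key)
        if item is None:
            return None       # sunk items cannot be reached this way
        item.layer = 0        # any access floats the item back to the top
        return item.payload

    def sink_step(self) -> None:
        """Push every item one layer down; sever links at the bottom."""
        for key, item in list(self.index.items()):
            item.layer += 1
            if item.layer >= self.BOTTOM:
                self.sunk.append(self.index.pop(key))

    def strong_request(self, key: str):
        """An out-of-band search that can re-link a sunk item."""
        for item in self.sunk:
            if item.key == key:
                self.sunk.remove(item)
                item.layer = 0
                self.index[key] = item   # a new connection is established
                return item.payload
        return None
```

In this sketch, any normal access floats an item back to the top layer, mirroring the idea that used information stays reachable while neglected information drifts out of the OS's normal retrieval paths.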

In software, the OS will be in charge of pushing information down to “sunk” status via strict, rich policies. Something like this already happens on devices to some extent. The difference with this neuromorphic operating system is that instead of a daemon scanning all the data in the buffer and updating timestamps (a slow process handled serially), each part of the buffer is self-regulating: it checks its own status at regular intervals and pushes unused information down. The interesting part is that resolution would be gradually lost along the way, so the storage space required shrinks.
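A hedged sketch of that self-regulating buffer follows, with Python threads standing in for whatever parallel mechanism a real OS would use; the `BufferSegment` class, the two-tick idle threshold, and the byte-halving "resolution loss" rule are invented for illustration.

```python
import threading

class BufferSegment:
    """One self-regulating slice of the buffer; many run side by side."""
    def __init__(self):
        self.items = {}                  # key -> [payload, idle_ticks]
        self.lock = threading.Lock()

    def put(self, key, payload):
        with self.lock:
            self.items[key] = [payload, 0]

    def get(self, key):
        with self.lock:
            entry = self.items.get(key)
            if entry is None:
                return None
            entry[1] = 0                 # touching an item resets its idle count
            return entry[0]

    def tick(self):
        """Periodic self-check: age every item, degrade the long-idle ones."""
        with self.lock:
            for entry in self.items.values():
                entry[1] += 1
                if entry[1] > 2:         # long idle: halve the stored resolution
                    entry[0] = entry[0][: max(1, len(entry[0]) // 2)]

# Each segment ticks on its own, in parallel, rather than one daemon
# scanning the whole buffer serially and updating timestamps:
segments = [BufferSegment() for _ in range(4)]
for seg in segments:
    threading.Thread(target=seg.tick).start()
```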

Xu says a neuromorphic approach to an operating system for traditional chips would also require complex compression and restoration capabilities organized by data type (image, voice, documents, etc.). “We can set basic policies, or complex policies built on basic ones, for the OS, or even have the OS try to generate its own policies,” but he admits that compressing information without breaking key components is very difficult. “The OS must have the ability to understand and extract important information; otherwise the lossy compression may be useless.”
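A small sketch of what type-keyed policies might look like, using Python's zlib as a stand-in: real lossy handling of images or voice would need content-aware codecs (JPEG, Opus, and the like), so the byte-dropping "lossy" policies below are purely illustrative.

```python
import zlib

# Basic policies, one per data type; complex policies could be composed
# from these, or generated by the OS itself, as Xu suggests.
POLICIES = {
    "document": lambda b: zlib.compress(b, level=9),  # lossless for text
    "image":    lambda b: zlib.compress(b[::2]),      # crude lossy stand-in
    "voice":    lambda b: zlib.compress(b[::4]),      # heavier loss for audio
}

def compress(kind: str, payload: bytes) -> bytes:
    policy = POLICIES.get(kind, POLICIES["document"])
    return policy(payload)
```

The hard part Xu points to is exactly what this sketch dodges: deciding which bytes are the key components that must survive compression.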

In terms of how such an OS would handle the input and output of information, pre-processing would be emphasized. Information would be passed to the cognitive kernel only if the policies deemed it important enough; most information could simply be discarded. Output information, he explains, is mainly sent by the cognitive kernel.
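One way to picture that input path is a simple gate, sketched below; the `CognitiveKernel` stub, the `importance` heuristic, and the threshold value are all assumptions made up for illustration.

```python
class CognitiveKernel:
    """Stub for the kernel that would do the real processing."""
    def process(self, payload: bytes) -> None:
        print(f"kernel accepted {len(payload)} bytes")

def importance(payload: bytes) -> float:
    """Placeholder policy: more varied input scores as more important."""
    return len(set(payload)) / 256

def ingest(kernel: CognitiveKernel, payload: bytes, threshold: float = 0.3) -> None:
    if importance(payload) >= threshold:
        kernel.process(payload)   # only important input reaches the kernel
    # everything else is discarded during pre-processing

ingest(CognitiveKernel(), bytes(range(200)))  # varied input passes the gate
ingest(CognitiveKernel(), b"\x00" * 200)      # uniform input is discarded
```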

As we have covered here extensively over the last few years, there are already a number of neuromorphic computing devices and projects. These include academic efforts like Neurogrid from Stanford, SpiNNaker from the University of Manchester, and the EU-funded BrainScaleS project, as well as commercial chips like the TrueNorth architecture from IBM and similar efforts from Qualcomm. While the software and programming interfaces for these remain an ongoing challenge, requiring specialization and custom-written code, Xu thinks that with the right operating system, standard programmatic interfaces could eventually make traditional chips do the same efficient work as the efforts above.

Part of what makes this concept plausible (it is in development as an idea at Peking University and is certainly not fully cooked) is the arrival of more sophisticated processors with integrated deep learning elements on the market over the next several years. If standard chips ship with small GPUs that help such an OS run short training/inference passes to decide what information “sinks” and what “floats,” the idea might not be so far-fetched for production pattern-matching workloads.


