Bioinspired computing is nothing new, but with the rise in mainstream interest in machine learning, these architectures and software frameworks are seeing fresh light. A new wave of young companies is cropping up to provide hardware, software, and management tools, which in turn has spurred a new era of thinking about AI problems.
We most often think of these innovations happening at the server and datacenter level, but algorithmic work is also being done to suit embedded hardware: deploying comprehensive models on mobile devices that allow for long-term learning, such as continuous object recognition on a single device, without a major battery or memory hit.
With this in mind, on this audio interview episode of "The Interview" with The Next Platform, we talk with Heather Ames, a co-founder at bioinspired AI startup Neurala, about the co-evolution of neurology and computing across a range of real-world use cases for AI. Her company is trying to blend the worlds of large-scale deep learning and mobile processing to allow for on-the-go neural network training, suiting a wide range of use cases. One example is analysis for police body cameras, which can be trained on the fly to look for a particular object of interest and detect that subject in real time.
Ames has a PhD in Cognitive and Neural Systems from Boston University and a BS in Cognitive Neuroscience from UC Berkeley. She and a small team founded Neurala after hitting the limitations of neural networks running on workstations, which pushed them into GPU acceleration with an emphasis on small devices.
Ames talks about the early days with the DARPA SyNAPSE program and the use of new approaches like memristors to enable AI computations efficiently. She also addresses other emerging hardware, including MPUs and other novel architectures that seek to make memory the site of more compute than we traditionally see. This emphasis on efficiency for AI computations is what pushed Neurala toward on-device GPU computing for machine learning, a markedly different approach from the emphasis on very big, beefy GPUs running training inside large datacenters.