Bioinspired computing is nothing new, but with the rise in mainstream interest in machine learning, these architectures and software frameworks are seeing fresh light. This is prompting a new wave of young companies cropping up to provide hardware, software, and management tools, and has spurred a new era of thinking about AI problems.
We most often think of these innovations happening at the server and datacenter level, but more algorithmic work is being done, tailored to embedded hardware, to deploy comprehensive models on mobile devices. These allow for long-term learning (on single instances of object recognition, as one example) on the mobile device itself without a major battery or memory hit.
With this in mind, on this audio interview episode of “The Interview” with The Next Platform, we talk with Heather Ames, a co-founder of bioinspired AI startup Neurala, about the co-evolution of neurology and computing into a range of real-world use cases for AI. Her company is trying to blend the world of large-scale deep learning with mobile processing to allow for on-the-go neural network training suited to a wide range of use cases, including, for instance, analysis for police body cameras that can be trained on the fly to look for a particular object of interest, with real-time detection of the subject.
Ames has a PhD in Cognitive and Neural Systems from Boston University and a BS in Cognitive Neuroscience from UC Berkeley. She and a small team founded Neurala after hitting limitations with neural networks running on workstations, pushing them into GPU acceleration territory with an emphasis on small devices.
Ames talks about the early days with the DARPA Synapse program and the use of new approaches like memristors to efficiently enable AI computations. She also addresses other emerging hardware devices like MPUs and other novel architectures that seek to make memory the platform for more compute than we traditionally see. This emphasis on efficiency for AI computations is what pushed Neurala toward on-device GPU computing for machine learning, something quite different from the emphasis on very big, beefy GPUs running training inside large datacenters.
More on Neurala can be found here and on Heather Ames here.
“on-the-go neural network training to suit a wide range of use cases including for instance as analysis for police body cameras that can be trained to look for a particular object of interest on the fly with real-time detection of the subject.”
Could we be getting any closer to that Terry Gilliam sort of dystopian society, and those sorts of AI drones with Hellfire/sniper sorts of capabilities, where there is no human in the loop to maybe put a check on an AI algorithm that is most certainly amoral?
Who can be held legally responsible for properly vetting any AI that does not have any sort of morals? And looking at that DARPA funding does not give one much assurance that even Asimov’s rules can always be adhered to.
I’m fine with AIs used to maybe vet some CPU/processor logic more efficiently against side-channel vulnerabilities and other logic bugs in the processor’s hardware, or even AIs that can speed up ray tracing intersection calculations and other such things.
I do not trust any modern digital processor, or the complex state machines that processors are based on, until those millennium math problems related to proving and vetting a processor’s design allow it to be shown error-free to a much higher degree than is currently possible. So until that part of any processor’s logic is vettable, I’ll not likely trust any AI that’s armed to such a degree that I’ll very likely not survive any AI error in judgment, and that includes any automotive AI that could very well send me over the guardrail at the highest point of Devil’s Slide on that scenic California route, or many similar routes around the world.