Whether in the brain or in code, neural networks are shaping up to be one of the most critical areas of research in both neuroscience and computer science. An increasing amount of attention, funding, and development has been pushed toward technologies that mimic the brain in both hardware and software to create more efficient, high-performance systems capable of advanced, fast learning.
One aspect of the push toward more scalable, efficient, and practical neural networks and deep learning frameworks we have been tracking here at The Next Platform is how such systems might be implemented in research and enterprise over the next ten years. Based on the conversations that make their way into various pieces here, one of the missing elements for eventual end users is a simpler training process for neural networks, one that makes them practically useful without all of the computational overhead and specialized systems training requires now. Crucial, then, is a whittling down of how neural networks are trained and implemented. Not surprisingly, the key answers lie in the brain, specifically in how it "trains" its own network, a process that is still not completely understood, even by top neuroscientists.
In many senses, neural networks, cognitive hardware and software, and advances in new chip architectures are shaping up to be the next important platform. But there are still fundamental gaps between what we know about our own brains and what has been developed in software to mimic them, and those gaps are holding research back. Accordingly, the Intelligence Advanced Research Projects Activity (IARPA) in the U.S. is getting behind an effort spearheaded by Tai Sing Lee, a computer science professor at Carnegie Mellon University's Center for the Neural Basis of Cognition, and researchers at Johns Hopkins University, among others, to make new connections between the brain's neural function and how those same processes might map to neural networks and other computational frameworks. The project is called Machine Intelligence from Cortical Networks (MICRONS).
While neural networks are certainly brain-inspired, there is no question that our own brains are far more efficient at processing, collating, and understanding information in order to learn. Improvements may be on the way, however, as researchers consider why convolutional neural networks, for example, lag behind real brains in one key respect. The answer might lie in the synapses: the connections that, in basic terms, let each neuron talk and listen to its neighbors or bosses.
According to Lee, “What is missing is that, if you look at the input of a particular neuron in the brain, only 5% to 10% of the input is actually coming from the previous layers. On the other hand, a neural network is almost 100% learning from a previous layer. And further, in a real neuron, 90-95% of the activity is listening to neighbor or boss neurons from downstream. So the circuit has a lot of loops and is highly interactive—more like a social network.”
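Lee's point about input proportions can be made concrete with a small sketch. The code below, which is illustrative and not from the MICRONS project, contrasts a conventional neuron driven almost entirely by the previous layer with one whose drive matches the mix Lee describes: roughly 10% feedforward, with the rest coming from neighboring and downstream "boss" neurons. The weights and the sigmoid activation are hypothetical placeholders.

```python
import math

def sigmoid(x):
    """Standard logistic activation, used here purely as a placeholder."""
    return 1.0 / (1.0 + math.exp(-x))

def feedforward_update(ff_input, w_ff):
    # Conventional artificial neuron: ~100% of its drive comes
    # from the previous layer's output.
    return sigmoid(w_ff * ff_input)

def recurrent_update(ff_input, lateral_inputs, feedback_inputs,
                     w_ff=0.10, w_lat=0.45, w_fb=0.45):
    # Brain-like mix per Lee's description: only ~10% of the drive is
    # feedforward; ~90% comes from lateral neighbors and top-down
    # feedback, making the circuit loopy and interactive.
    lateral = sum(lateral_inputs) / len(lateral_inputs)
    feedback = sum(feedback_inputs) / len(feedback_inputs)
    drive = w_ff * ff_input + w_lat * lateral + w_fb * feedback
    return sigmoid(drive)
```

In the recurrent version, changing the feedforward input alone barely moves the neuron's output; its state is dominated by what the rest of the network is doing, which is the "social network" character Lee describes.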
Taking that idea one step further: unlike existing neural network approaches, the brain builds models. It extracts features, makes inferences from those features, and propagates that information back. "The brain is like a scientist in that sense," Lee explains. "It makes observations, comes up with a hypothesis and tests it based on what it expects to see—that feedback path and hypothesis testing is based on those predictions." It is exactly this predictive ability, relying on that 5-10% of feedforward input rather than on all trained information, that is one important element missing from current neural networks and that Lee's IARPA-funded team is seeking to recreate.
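The "scientist" loop Lee describes can be sketched in a few lines. This is a hypothetical toy, not the project's algorithm: the network holds an internal estimate, predicts what it expects to observe, and corrects the estimate using the prediction error fed back down. The function name and learning rate are illustrative.

```python
def predictive_update(estimate, observation, lr=0.5):
    prediction = estimate              # top-down: what the model expects to see
    error = observation - prediction   # bottom-up: the surprise signal
    return estimate + lr * error       # feedback corrects the hypothesis

# Repeated observation drives the internal estimate toward the true value,
# with each step transmitting only the error, not the full input.
estimate = 0.0
for _ in range(20):
    estimate = predictive_update(estimate, observation=1.0)
```

After a few iterations the estimate converges on the observed value; only the mismatch between prediction and observation does any work, which is one intuition for why a predictive circuit might need far fewer training examples.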
“Over the last forty years, neural networks have remained mostly the same. The major difference is the computing power available, but the algorithms haven’t changed much. What this project, which is larger than just what we are focused on, seeks to do is use that computing power and build up neural networks without the same reliance on training models as we know it. To reduce the number of examples and make the network free from such extensive training—unsupervised training.”
These ideas are not necessarily new, but the funding effort will allow for a more extended scope of research. In the early days of neural networks, for example, there were network concepts that targeted cognitive principles, like the interactive activation model, and in many senses this gap was what the Boltzmann machine concept targeted. The problem is that there were no suitable algorithms that mapped well to these neural network approaches, Lee says. The goal of the IARPA project, then, is also to look at these and other existing models, mesh them with existing neuroscience research, and find the secret algorithm that, in essence, makes the brain capable of “training” with far less input using a predictive model.
The research goes beyond selecting and analyzing frameworks and mapping them to concepts in neuroscience. Which computational methods and frameworks to use, beyond the core algorithms, is also a question for Lee and his team, though their focus is on how individual processing units operate within the entire network rather than on a complete system for computing networks. The simulations they expect to run will not be based on high performance computing; Lee says GPU computing is essential for the early-stage research for now, even as they watch how specialized hardware, including neuromorphic chips, will fit into the larger picture.
At its core, MICRONS and the associated research aim to create a predictive capability inherent to the overall neural network in software, a grand challenge that the $12 million research grant will target. “Extracting the brain’s secret algorithms in learning and inference from this massive amount of data to advance machine learning is extremely ambitious, and might be the most uncertain part of the project,” notes Andrew Moore, head of the School of Computer Science at Carnegie Mellon University. “It’s the equivalent of a moonshot, but we have a very strong tradition and community in artificial intelligence.”