The emerging field of neuromorphic processing isn’t an easy one to navigate. There are major players in the field that are leveraging their size and ample resources – the highest profile being Intel with its Loihi processors and IBM’s TrueNorth initiative – and a growing list of startups that include the likes of SynSense, Innatera Nanosystems and GrAI Matter Labs.
Included in that latter list is BrainChip, a company that has been developing its Akida chip – Akida is Greek for “spike” – and accompanying IP for more than a decade. We’ve followed BrainChip over the past few years, speaking with them in 2018 and then again two years later, and the company has proven to be adaptable in a rapidly evolving space. The initial plan was to get the commercial SoC into the market by 2019, but BrainChip extended the deadline to add the capability to run convolutional neural networks (CNNs) along with spiking neural networks (SNNs).
In January, the company announced the full commercialization of its AKD1000 platform, which includes its Mini PCIe board that leverages the Akida neural network processor. It’s a key part of BrainChip’s strategy of offering the technology in reference designs as it pursues partnerships with hardware and chip vendors that will incorporate it in their own products.
“Looking at our fundamental business model, is it a chip or IP or both?” Jerome Nadel, BrainChip’s chief marketing officer, tells The Next Platform. “It’s an IP license model. We have reference chips, but our go-to-market is definitely to work with ecosystem partners, especially who would take a license, like a chip vendor or an ASIC designer and tier one OEMs. … If we’re connected with a reference design to sensors for various sensor modalities or to an application software development, when somebody puts together AI enablement, they want to run it on our hardware and there’s already interoperability. You’ll see a lot of these building blocks as we’re trying to penetrate the ecosystem, because ultimately when you look at the categoric growth in edge AI, it’s really going to come from basic devices that leverage intelligent sensors.”
BrainChip is aiming its technology at the edge, where more data is expected to be generated in the coming years. Pointing to IDC and McKinsey research, BrainChip expects the market for edge-based devices needing AI to grow from $44 billion this year to $70 billion by 2025. In addition, at last week’s Dell Technologies World event, CEO Michael Dell reiterated his belief that while 10 percent of data now is generated at the edge, that will shift to 75 percent by 2025. Where data is created, AI will follow. BrainChip has designed Akida for high-performance, low-power edge environments, running AI analytics workloads – particularly inference – on the chip to lessen the flow of data to and from the cloud and thus reduce the latency in generating results.
Neuromorphic chips are designed to mimic the brain through the use of SNNs. BrainChip broadened the workloads Akida can run by adding support for CNNs as well, which are useful in edge environments for tasks such as embedded vision, embedded audio, LiDAR and radar sensing for automated driving, and industrial IoT. The company is looking at sectors such as autonomous driving, smart health and smart cities as growth areas.
BrainChip already is seeing some success. Its AKD1000 platform is being used in Mercedes-Benz’s Vision EQXX concept car for in-cabin AI, including driver and voice authentication, keyword spotting and contextual understanding.
The vendor sees partnerships as an avenue for increasing its presence in the neuromorphic chip field.
“If we look at a five-year strategic plan, our outer three years probably look different than our inner two,” Nadel says. “In the inner two we’re still going to focus on chip vendors and designers and tier-one OEMs. But the outer three, if you look at categories, it’s really going to come from basic devices, be they in-car or in-cabin, be they in consumer electronics that are looking for this AI enablement. We need to be in the ecosystem. Our IP is de facto and the business model wraps around that.”
The company has announced a number of partnerships, including with nViso, an AI analytics company. The collaboration targets battery-powered applications in the robotics and automotive sectors, running nViso’s AI technology for social robots and in-cabin monitoring systems on Akida chips. BrainChip also is working with SiFive to integrate the Akida technology with SiFive’s RISC-V processors for edge AI computing workloads, and with MosChip, running its Akida IP with that vendor’s ASIC platform for smart edge devices. BrainChip also is working with Arm.
To accelerate the strategy, the company this week rolled out its AI Enablement Program to offer vendors working prototypes of BrainChip IP atop Akida hardware to demonstrate the platform’s capabilities for running AI inference and learning on-chip and in a device. The vendor also is offering support for identifying use cases for sensor and model integration.
The program includes three levels – Basic and Advanced prototypes and a Functioning Solution – with the number of AKD1000 chips scaling up to 100, custom models for some users, 40 to 160 hours with machine learning experts and two to ten development systems. The prototypes will enable BrainChip to get its commercial products into users’ hands at a time when competitors are still developing their own technologies in the relatively nascent market.
“There’s a step of being clear about the use cases and perhaps a road map of more sensory integration and sensor fusion,” Nadel says. “This is not how we make a living as a business model. The intent is to demonstrate real, tangible working systems out of our technology. The thinking was, we could get these into the hands of people and they could see what we do.”
BrainChip’s Akida IP supports up to 1,024 neural processing units (NPUs), organized into two to 256 nodes connected over a mesh network, with each node comprising four NPUs. Each NPU includes configurable SRAM, can be configured to run CNNs if needed, and operates on events, or spikes, exploiting sparsity in data, activations and weights to reduce the number of operations by at least two-fold. The Akida neural SoC can be used standalone or integrated as a co-processor for a range of use cases, and provides 1.2 million neurons and 10 billion synapses.
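As a rough illustration of why event-based sparsity matters, here is a small back-of-the-envelope sketch in Python – not BrainChip code – comparing the multiply-accumulate count of a dense layer against one that skips zero activations and zero weights. The 50 percent sparsity levels are assumptions chosen for illustration, not Akida measurements.

```python
import numpy as np

def mac_counts(activations, weights):
    """Compare dense vs. event-based MAC counts for one layer y = W @ x."""
    dense = weights.size                              # every weight is used once
    # Event-based processing only does work where BOTH the input spike
    # (non-zero activation) and the weight are non-zero.
    sparse = int(np.count_nonzero(weights * (activations != 0)))
    return dense, sparse

rng = np.random.default_rng(0)
# Assumed ~50% zero activations and ~50% zero weights, for illustration only.
x = rng.random(1024) * (rng.random(1024) > 0.5)
W = rng.random((256, 1024)) * (rng.random((256, 1024)) > 0.5)

dense, sparse = mac_counts(x, W)
print(f"dense MACs:  {dense:,}")
print(f"sparse MACs: {sparse:,}  (~{dense / sparse:.1f}x fewer)")
```

With these assumed sparsity levels the operation count drops by roughly a factor of four, comfortably above the two-fold reduction the company cites.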
The offering also includes the MetaTF machine learning framework for developing neural networks for edge applications, as well as three reference development systems based on PCIe, PC shuttle and Raspberry Pi hardware.
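For a sense of the developer workflow, the sketch below follows the pattern BrainChip describes for MetaTF: build a standard Keras CNN, quantize it, and convert it into an event-based model for Akida. It assumes MetaTF’s cnn2snn package and its convert() entry point; exact function names, arguments and quantization steps may differ between releases, so treat this as an outline rather than a verified recipe.

```python
# Illustrative MetaTF-style workflow (assumes BrainChip's cnn2snn package;
# API details may vary by release).
import numpy as np
import tensorflow as tf
from cnn2snn import convert   # assumed import path

# 1. A small Keras CNN, e.g. for keyword spotting or embedded vision.
keras_model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, strides=2, activation="relu",
                           input_shape=(32, 32, 1)),
    tf.keras.layers.Conv2D(32, 3, strides=2, activation="relu"),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),
])
# ... train and quantize the model here; MetaTF expects quantized weights
# before conversion ...

# 2. Convert the quantized CNN into an event-based model for Akida.
akida_model = convert(keras_model)

# 3. Run inference much as you would with the Keras model
#    (method names may differ between MetaTF releases).
dummy_input = np.zeros((1, 32, 32, 1), dtype=np.uint8)
predictions = akida_model.predict(dummy_input)
```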
The platform can be used for one-shot, on-chip learning – using the trained model as a feature extractor and adding new classes on top of it – or in multi-pass processing, which runs the network through the available NPUs in multiple passes to reduce the number of NPUs needed.
[Figure: the one-shot learning flow]
[Figure: the multi-pass processing flow]
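Conceptually, the one-shot flow amounts to a frozen feature extractor plus a lightweight classifier to which new classes can be appended without retraining the backbone. The following is a minimal, framework-agnostic Python sketch of that idea; the random-projection backbone and nearest-prototype classifier are illustrative assumptions, not a description of how Akida implements learning in hardware.

```python
import numpy as np

class OneShotClassifier:
    """Toy nearest-prototype classifier: a frozen backbone produces
    embeddings, and each new class is learned from a single example
    by storing that example's embedding as a prototype."""

    def __init__(self, backbone):
        self.backbone = backbone          # frozen feature extractor
        self.prototypes = {}              # class name -> embedding

    def add_class(self, name, example):
        """One-shot learning: remember the embedding of one example."""
        self.prototypes[name] = self.backbone(example)

    def predict(self, example):
        """Classify by the closest stored prototype."""
        emb = self.backbone(example)
        return min(self.prototypes,
                   key=lambda c: np.linalg.norm(self.prototypes[c] - emb))

# Stand-in backbone: a fixed random projection to a 64-d embedding (assumed).
rng = np.random.default_rng(0)
projection = rng.standard_normal((64, 784))
backbone = lambda x: projection @ x.ravel()

clf = OneShotClassifier(backbone)
clf.add_class("cat", rng.random((28, 28)))   # one example per new class
clf.add_class("dog", rng.random((28, 28)))
print(clf.predict(rng.random((28, 28))))
```

The key property, as BrainChip describes it, is that adding a class touches only the new prototype; everything already learned stays put.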
“The idea of our accelerator being close to the sensor means that you’re not sending sensor data, you’re sending inference data,” Nadel said. “It’s really a systems architectural play that we envision our micro hardware is buddied up with sensors. The sensor captures data, it’s pre-processed. We do the inference off of that and the learning at the center, but especially the inference. Like an in-car Advanced Driver Assistance System, you’re not tasking the server box loaded with GPUs with all of the data computation and inference. You’re getting the inference data, the metadata, and your load is going to be lighter.”
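The bandwidth argument is easy to put numbers on. The sketch below compares one uncompressed 1080p camera frame with a compact inference message of the kind Nadel describes; the resolution and the message fields are illustrative assumptions, not figures from BrainChip or Mercedes-Benz.

```python
import json

# Raw sensor payload: one uncompressed 1080p RGB frame (assumed resolution).
frame_bytes = 1920 * 1080 * 3

# Inference payload: a small metadata message describing what was detected
# (hypothetical fields for illustration).
inference_msg = json.dumps({
    "event": "keyword_detected",
    "label": "wake_word",
    "confidence": 0.97,
    "timestamp_ms": 1_651_000_000,
}).encode()

print(f"raw frame:      {frame_bytes / 1e6:.1f} MB")
print(f"inference data: {len(inference_msg)} bytes")
print(f"reduction:      ~{frame_bytes / len(inference_msg):,.0f}x")
```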
The on-chip data processing is part of BrainChip’s belief that for much of edge AI, the future will not require clouds. Rather than send all the data to the cloud – bringing in the higher latency and costs – the key will be doing it all on the chip itself. Nadel says it’s a “bit of a provocation to the semiconductor industry talking about cloud independence. It’s not anti-cloud, but the idea is that hyperscale down to the edge is probably the wrong approach. You have to go sensor up.”
Going back to the cloud also means having to retrain the model if there is a change in object classification, Anil Mankar, co-founder and chief development officer, tells The Next Platform. Adding more classes means changing the weights in the classification layer.
“On-chip learning,” Mankar says. “It’s called incremental learning or continuous learning, and that is only possible because … we are working with spikes and we actually copy similarly how our brain learns faces and objects and things like that. People don’t want to do transfer learning – go back to the cloud, get new weights. Now you can classify more objects. Once you have an activity on the device, you don’t need cloud, you don’t need to go backwards. Whatever you learn, you learn” and that doesn’t change when something new is added.