Intel Gets Serious About Neuromorphic, Cognitive Computing Future

Like all hardware device makers eager to meet the newest market opportunity, Intel is placing multiple bets on the future of machine learning hardware. The chipmaker has already cast its Xeon Phi and future integrated Nervana Systems chips into the deep learning pool while touting regular Xeons to do the heavy lifting on the inference side.

However, a recent conversation we had with Intel turned up a surprising new addition to the machine learning conversation—an emphasis on neuromorphic devices and what Intel is openly calling “cognitive computing” (a term used primarily—and heavily—for IBM’s Watson-driven AI technologies). This is the first time we’ve heard the company make any definitive claims about where neuromorphic chips might fit into a strategy to capture machine learning, and it marks a bold grab for “cognitive computing,” which has long served as an umbrella term for Big Blue’s AI business.

Intel has been developing neuromorphic devices for some time; one of its first prototypes became well known in 2012. At the same time, IBM was still building out its own “TrueNorth” neuromorphic architecture, which we do not generally hear much about outside of its role as a reference point for the new neuro-inspired devices we’ve watched roll out over the last couple of years. Some might suggest that Intel’s renewed interest in neuromorphic computing aligns with the DoE’s assertion that at least one of the forthcoming exascale machines must utilize a novel architecture (although just what qualifies as “novel” is still up for debate), and some believe that neuromorphic is a strong contender. The problem is that if neuromorphic is one of the stronger bets, there are some big challenges ahead. Near term, no neuromorphic devices are being produced at sufficient scale to warrant an already-risky DoE investment; longer term, programming such devices, even to handle offload workloads for existing large-scale scientific simulations, is a tall order.

Leaving aside exascale, however, there are many emerging use cases that could benefit from a powerful pattern-matching device like a neuromorphic chip. These have far less to do with supercomputing and much more to do with self-driving cars and real-time sensor-fed networks. Either way, Intel is getting serious about neuromorphic chips again, and it is backing that with a lot of talk about what’s next for “cognitive computing.”

Mark Seager, Intel Fellow and CTO for the HPC ecosystem in the Scalable Datacenter Solutions Group, has informed insight into the coming course of computing. Before joining Intel six years ago, he led advanced supercomputing platform efforts at Lawrence Livermore National Laboratory for 28 years, overseeing the introduction of top supercomputers. Seager sees a merging of traditional HPC and machine learning in the years ahead and said different architectural approaches can provide the kind of efficiency required at extreme scale. “Several new and emerging use cases for computational neural networks and machine learning are being integrated with high performance computing simulation applications; that’s extremely exciting and a harbinger of an architectural convergence in the future.”

When asked what type of device looks like an early winner for the high end of machine learning—a difficult bet to place when the algorithms are changing so quickly that getting something to market in time to meet demand efficiently is a challenge—Seager pointed to neuromorphic as a big opportunity. “A model of the brain in silicon; if we can get anywhere close to the neuromorphic capabilities of the human brain at 15 watts, and we have heard before that the brain is equivalent to an exascale computer, it would be a major victory, especially since an exascale computer in the mid-2020s is targeting 20 megawatts.”

“Somehow, the human brain—our own biology—has figured out how to be one million times more efficient in terms of delivered AI ops than anything we can build into a traditional supercomputer. Neuromorphic is an opportunity to try to come up with a CMOS-based architecture that mimics the brain and maintains that energy efficiency and cost performance benefit you get from a model of a human brain.”
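The arithmetic behind that “one million times” figure is easy to sanity-check against the numbers Seager cites: roughly 15 watts for the brain versus a 20 megawatt target for a mid-2020s exascale machine.

```python
# Back-of-the-envelope check of the efficiency gap Seager describes,
# using only the figures quoted in this article.
exascale_power_watts = 20e6   # 20 megawatt exascale target, mid-2020s
brain_power_watts = 15        # ~15 watts for the human brain

efficiency_ratio = exascale_power_watts / brain_power_watts
print(f"Brain is roughly {efficiency_ratio:,.0f}x more power-efficient")
# ~1.3 million, consistent with the "one million times" figure quoted above
```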

If there is a real opportunity for neuromorphic devices, then why is there a lack of product momentum, even from companies like IBM that have invested heavily in the technology without rolling something into the market? Is it just still such early days, or is the promise far larger than the potential application set? Seager said Intel is not hedging bets, as we suggested, and was careful to note that this is not yet a product development effort in earnest. “We are working on it at Intel Labs—as a research project—and we don’t know when the product intercept will happen.” Having said that, Seager explains that “the things that are possible in this space are pretty amazing; the IBM effort has been going on for a while with a substantial amount of funding and yes, they are making progress, but so are we.”

“Machine learning and deep learning are just one part of overall AI. They are important and have been successful in creating a lot of market opportunities, as well as demand for HPC platforms, but that is not all of AI,” Seager tells The Next Platform. “At Intel, we are serious about other aspects of AI like cognitive computing and neuromorphic computing…our way of thinking about AI is more broad than just machine learning and deep learning, but having said that, the question is how the technologies required for these workloads are converging with HPC.”

The goal, Seager suggests, is to focus on architectures that have a large number of floating point vector units, a high degree of parallelism, and the ability to deal with deep memory hierarchies in a fairly uniform way. “One of the things we are doing research on is how to parallelize a machine learning workload over an interconnect, in this case Omni-Path, to solve bigger, more complex neural network problems across multiple nodes to scale out. At the moment, that scalability is limited to tens or hundreds of nodes, but we think that as computational neural network algorithms and models advance, that scalability can increase substantially.”
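As a rough illustration (not Intel’s implementation), the scale-out pattern Seager describes—each node computing gradients on its own data shard, with an interconnect all-reduce averaging them before every node applies the same update—can be sketched in plain Python. The model, node count, and data below are purely illustrative; a real system would run the all-reduce over an interconnect such as Omni-Path via MPI or similar.

```python
# Minimal sketch of data-parallel training: each simulated "node" computes
# a gradient on its shard, gradients are averaged (an all-reduce stand-in),
# and all nodes apply the identical update.

def local_gradient(shard, w):
    # Toy least-squares gradient for a 1-D model y = w * x.
    return sum(2 * (w * x - y) * x for x, y in shard) / len(shard)

def all_reduce_mean(values):
    # Stand-in for an interconnect all-reduce: average across nodes.
    return sum(values) / len(values)

# Four simulated nodes, each holding a shard of data generated by y = 3x.
shards = [[(1.0, 3.0), (2.0, 6.0)],
          [(3.0, 9.0), (4.0, 12.0)],
          [(5.0, 15.0), (6.0, 18.0)],
          [(7.0, 21.0), (8.0, 24.0)]]

w, lr = 0.0, 0.01
for _ in range(200):
    grads = [local_gradient(s, w) for s in shards]
    w -= lr * all_reduce_mean(grads)

print(round(w, 3))  # converges toward 3.0
```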

Back to the idea of hedging bets for the future of machine learning hardware: Intel has put a number of chips on the table and laid down some sizable investments to back each. The most significant example is last year’s acquisition of FPGA maker Altera, whose devices the company expects to integrate and outfit in cloud datacenters en masse over the next several years. The Nervana Systems acquisition, while not high-priced in relative terms, offers Intel a novel software stack for deep learning, as well as a largely memory-driven hardware platform to tie to a Xeon sometime in the next few years. Of course, it’s hard to say how far the rapidly evolving ecosystem will have moved, algorithm-wise, by then, and whether such a chip will be outdated on arrival. “We are moving towards workload optimization in the datacenter. Our roadmap does support that. We see that to a large extent with Nervana for machine learning training and inference and FPGAs for scoring in that market and beyond. It’s what we’re calling scale-out AI,” adds Barry Davis, GM of Intel’s Technical Computing team.

Even though neuromorphic devices and cognitive computing (the software piece of Intel’s AI puzzle) are expected to be important areas, the FPGA story remains strong, especially with the Stratix 10 coming from the Altera acquisition, which offers 10 teraflops of single precision performance and a terabyte per second of on-package memory bandwidth. For machine learning and general datacenter application acceleration workloads, this is a strong story. Where neuromorphic chips land, and what happens when IBM gets wind that its “cognitive computing” term has been snatched, both remain to be seen.



  1. Well, IBM is probably out of the game anyway. The “TrueNorth” chip didn’t really excite anybody in the community and didn’t receive much praise from Yann LeCun, for example. So I would say “TrueNorth” was nothing more than a publicity stunt.
    There is not even a road map for it. Having lost most of its chip making capabilities as well, it is hard to see how IBM comes back into the game. Sure, they can go to pure-play foundries, but they will be far back in the pecking order there, as they simply lack the volume to compete with Qualcomm, Apple, Nvidia, AMD, MediaTek, or other mobile SoC companies.
