IBM Watson CTO on What’s Ahead for Cognitive Computing

After close to twenty years at IBM, where he began as an IBM Fellow and Chief Architect for the SOA Foundation, Rob High has developed a number of core technologies that back Big Blue’s enterprise systems, including the suite of tools behind IBM WebSphere, and more recently, those that support the wide-ranging ambitions of the Watson cognitive computing platform.

Although High gave the second-day keynote this afternoon at the GPU Technology Conference, there was no mention of accelerated computing. Interestingly, while the talk was about software, specifically the machine learning behind Watson, there was also very little about the software underpinnings. Disappointing as this might have been for the hardware-oriented folks in the crowd hoping to understand how OpenPower Foundation-spurred efforts using GPU-backed, Power-based systems make Watson's gears turn (we can fairly assume that is the case), High did provide a summary of Watson's evolution since 2011 as well as a look ahead at what the Watson research teams are working toward next.

High says he is frequently asked about the differences between AI and cognitive computing, noting that while they aren't much different conceptually, the goal of the Watson team is far more about making humans better at what they do than about recreating the human brain in machine form. To that end, he outlined the progression of Watson since the Jeopardy! appearance, beginning with the question of what can make a computer an "expert" in any one field when the truths that define expertise are not always grounded in fact, and even when they are, they are clouded by human influences, including perception, assumptions, and subtle differences.


The "factoid pipeline" that is a key feature of Watson was best showcased on the quiz show, where the system took a question posed in human language and scanned over 200 million pages of literature to answer not just fact-based questions, but more nuanced queries that required contextual understanding of the text. We are all well aware of the Jeopardy feat, but that same technology, as High explains, was quickly rolled into the commercial sector, starting in earnest in 2012 with the "discovery advisor," which took the same factoid pipeline and wrapped around it the ability to enhance human analysis by surfacing questions or solutions that human analysts might not have considered. This is the true value of Watson, High says, "not replacing human intelligence, but augmenting it, making humans better at what they do."

"This notion of creating ideas and inspiring new thoughts and new ways of asking questions is critical to so many things people do in the professional world with this. We got exposed to a lot of demand in healthcare in particular, especially around treatments for things like cancer." For a complex disease like cancer, where the literature base is of staggering volume, there is no way for healthcare providers to keep pace with the latest research; by High's estimate, doctors would need to spend 160 hours each week reading just to stay current with what is newly published. It is here that Watson shines, High says. Ultimately, Watson's role in this field is to find new patterns, solutions, and treatments, and to serve as an engine that lets doctors take a more case-by-case approach to individual patients based on the most current literature.

"In 2013, as we were developing these cognitive computing capabilities, it was just before the uptick in deep learning and we could see this was big. This was going to change a lot of lives, businesses, and opportunities; we knew we couldn't hold this as a proprietary thing, so we took a platform approach."

2013 marks the year that Watson was rolled out to a much wider base. With the Jeopardy publicity providing a healthy kickstart to mainstream recognition of IBM's cognitive computing capabilities, the Watson teams pushed for greater engagement capabilities inside the platform. The goal was to let everyday people interact with Watson by asking questions that went beyond fact-based answers: going to a retailer site (the example he cited was North Face) and describing the outdoor conditions, for instance, to get an answer about what gear might be best for a mountain, land, or sea trek. From this point, the Watson team started to see how rich the opportunities were, but with so many domain experts and specific industries able to make use of the various blocks of technology inside the Watson software range, they decided to open several of the services (beginning with just the engagement pipeline piece) as APIs inside the BlueMix cloud for developers to tailor, along the lines sketched below.
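
To make the developer angle concrete, here is a minimal sketch of how an application might consume a Watson-style engagement service exposed through BlueMix. The endpoint URL, credential handling, and response fields are assumptions for illustration only, not the actual IBM service contract.

```python
import requests

# Hypothetical placeholders; a real deployment would use the endpoint and
# credentials provisioned from the BlueMix service catalog.
BLUEMIX_ENDPOINT = "https://example-bluemix-host/watson/engagement/v1/ask"
API_KEY = "YOUR_API_KEY"

def ask_watson(question: str) -> list:
    """Send a natural-language question and return candidate answers."""
    response = requests.post(
        BLUEMIX_ENDPOINT,
        json={"question": question},
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=30,
    )
    response.raise_for_status()
    # Assumes the service returns candidate answers ranked by confidence.
    return response.json().get("answers", [])

if __name__ == "__main__":
    for answer in ask_watson("What jacket should I pack for a wet, windy ridge hike?"):
        print(answer.get("text"), answer.get("confidence"))
```

The point of exposing the pipeline this way is that the developer never touches the underlying models; the natural-language question goes in, ranked candidates come back, and the tailoring happens in the application around the call.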


And so begins the next phase of Watson's evolution, one that is accessible to developers and that is steadily adding features destined for further APIs. For instance, High says the focus for the Watson team over the next year will be on expanding the engagement capabilities to create a system that understands not just mere sentiment, but personality based on text, one that can intuit tone from small samples of text and then apply that knowledge to create tuned services, as the sketch below suggests.
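
As a rough illustration of the kind of signal such a tone service would surface, here is a toy keyword-lexicon scorer. It is purely a stand-in under assumed categories and cue words; the actual capability High describes would rest on trained models, not a hand-built lexicon.

```python
import re
from collections import Counter

# Hypothetical tone categories and cue words, chosen only for illustration.
TONE_LEXICON = {
    "frustrated": {"annoyed", "waiting", "again", "unacceptable", "slow"},
    "satisfied": {"thanks", "great", "perfect", "helpful", "love"},
    "anxious": {"worried", "urgent", "asap", "deadline", "risk"},
}

def score_tones(text: str) -> dict:
    """Return a rough per-tone score from word overlap with the lexicon."""
    words = Counter(re.findall(r"[a-z']+", text.lower()))
    total = max(sum(words.values()), 1)
    return {
        tone: sum(words[w] for w in cues) / total
        for tone, cues in TONE_LEXICON.items()
    }

print(score_tones("Thanks, that was great and really helpful!"))
```

Even this crude version shows why small samples of text are enough to start tuning a response: a handful of cue words shifts the scores, and a downstream service can adjust its phrasing accordingly.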

This next stage, which he terms "emotional analysis," will add the critical human factor to AI and will power one of IBM's research efforts, Watson Robotics. Partnering with others on the robotics hardware side, IBM's goal is to create robots that are tuned to the humans they are speaking with, that pick up on their emotions, and that draw on the vast knowledge base to provide personalized answers. In short, making Watson more human.

While the hardware side of Watson was overlooked entirely in High's talk, there is little doubt that the capabilities being refined within the OpenPower Foundation and the Power architecture will be of increasing value as Watson systems require more (and more balanced) compute capability. For these services to be offered in the cloud to a greater extent, IBM's own cloud will need to be beefed up with accelerated compute, something Nvidia's GTC event has showcased this week.
