GPUs Speak Volumes in Semantic AI Platforms

No matter how inadvertently, DARPA has helped spawn a number of new companies and mainstream technologies over the years, including recognizable mainstays like the Siri speech recognition engine, which evolved from CALO (Cognitive Assistant that Learns and Organizes), a five-year, $200 million artificial intelligence effort backed by the agency.

DARPA is still exploring how deep learning, particularly with a semantic and natural language processing angle, can find a fit for government use, including more recently with its Deep Exploration and Filtering of Text (DEFT) effort, which analyzes massive volumes of text to build information around concepts or topics: essentially putting together the pieces of a puzzle based on language, word frequency, and largely unsupervised collections that yield their own learned topics of interest. More companies will pour out of this effort over the next few years, especially as businesses start to look at ways conversational information can be autonomously gathered, collated, and spit back out as a coherent narrative or list of main ideas.
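To make the idea concrete, here is a minimal sketch of that kind of unsupervised topic learning, using scikit-learn's latent Dirichlet allocation over an invented toy corpus; DEFT's actual methods are not public, so this only illustrates the general technique:

```python
# A minimal sketch of unsupervised topic learning: latent Dirichlet
# allocation over a toy corpus. No labels are supplied anywhere; the
# topics "emerge" from word co-occurrence, as described for DEFT.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "troops moved supply convoys along the northern border road",
    "the convoy route crossed the border near the supply depot",
    "quarterly revenue grew as the telco added wireless subscribers",
    "subscriber growth lifted revenue across the wireless carriers",
]

# Word-frequency matrix: documents x vocabulary terms.
vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(docs)

# Fit two latent topics over the corpus.
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

# Print the top terms the model discovered for each topic.
terms = vectorizer.get_feature_names_out()
for i, weights in enumerate(lda.components_):
    top = [terms[j] for j in weights.argsort()[::-1][:4]]
    print(f"topic {i}: {top}")
```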

What is interesting is that among all of the new companies sure to keep rolling out of DARPA work, there is an increasingly pressing problem, even in this era of on-demand hardware in the cloud. Specialized systems are required to handle this type of deep semantic learning at scale, and that is expertise many companies don't have in house, particularly when it comes to throwing the massive parallelism of GPUs into the mix.

There are a few companies trying to mesh together a hardware and software story around the value of semantic understanding at scale by offering the only thing that makes sense: an appliance. Pre-configured, pre-wired with the right deep learning algorithms, and ready to chew through petabytes of unstructured text data without (much) human intervention. But as one might imagine, having all of that compute in house is going to cost you.

Yet another offshoot from the CALO program has made just that sort of effort. San Francisco-based startup Loop AI Labs has found its way into the enterprise mix with a complex natural language processing and deep learning technology that is backed by GPUs, both for model training and advanced queries. According to Patrick Ehlen, Chief Scientist at Loop AI Labs, there is a growing market for deep semantic learning and, accordingly, for hardware and software systems that can keep pace with expanding, complex deep learning models. "While it's one thing to have technology that lets people talk to a computer, creating a system that can understand and extract information about the relevant topics and issues from people talking to each other is a completely different thing."

There is an equally expanding base of potential users for far more complex models, which is part of what pushed Ehlen and Loop AI Labs CTO Bart Peintner to start a company, especially since they saw a shortfall of other startups working on large-scale semantic applications. Their Loop Cognitive Computing platform is an unsupervised semantic engine that processes unstructured text data from domains with specialized lingo to help companies build a vocabulary; from there, without intervention, the engine picks out the important points and topics and provides summaries and other information in a way similar to how people hear and learn.
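Loop's engine itself is proprietary, but the flavor of unsupervised vocabulary learning it describes can be sketched with off-the-shelf word embeddings; the toy corpus and the choice of gensim's Word2Vec below are assumptions for illustration, not the company's method:

```python
# Illustrative sketch only: learning a domain vocabulary from raw text
# with word embeddings (gensim Word2Vec). Terms used in similar contexts
# end up with similar vectors, with no labels or supervision involved.
from gensim.models import Word2Vec

# Pre-tokenized sentences from an imagined telco support corpus.
sentences = [
    ["customer", "reported", "dropped", "calls", "on", "lte"],
    ["lte", "coverage", "gap", "caused", "dropped", "calls"],
    ["billing", "dispute", "over", "roaming", "charges"],
    ["customer", "disputed", "roaming", "charges", "on", "the", "bill"],
]

# Train small embeddings; min_count=1 only because the corpus is tiny.
model = Word2Vec(sentences, vector_size=32, window=3, min_count=1,
                 epochs=200, seed=0)

# Surface terms the model places near "calls" in the learned space,
# one crude way an engine can expose a domain's related concepts.
print(model.wv.most_similar("calls", topn=3))
```

In a real deployment the same idea runs over millions of documents rather than four sentences, which is where the hardware question below comes in.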

Ehlen and Peintner have been developing their feel for the software and systems required for deep semantic learning since their research careers at Stanford, where both focused on artificial intelligence and semantic learning platforms under the DARPA-funded CALO initiative. The goal then was to work toward more advanced speech recognition capabilities for military and government use, but the business applications are clear in healthcare, telco, and beyond. As deep learning and neural network approaches evolve to meet those demands, so too do the systems required to push such capabilities to wider market segments.

Ehlen says that during their time at Stanford (and later in his experience at AT&T Labs) they were able to confine natural language processing models to laptops and workstations before moving out to the Amazon cloud to tap EC2 nodes. With the addition of GPU capabilities on AWS, the opportunity to train larger, more intensive networks helped AI and natural language processing researchers step up the number of concepts that could be processed, and the demand for parallel compute keeps growing. Central to this parallelism is the GPU, which has become the basis of the company's appliance, outfitted with two Nvidia Tesla K80 GPUs alongside 12-core Xeon host processors.

"When we were first starting and everyone was talking about huge speedups with GPUs, we just weren't seeing it, but that is because at smaller scale, they were not making much of a difference. However, when you start to scale up the problems and the networks get bigger, those 8,000 cores go a long way in allowing us to keep adding more concepts and building complexity." Ehlen says the company's software is also available on the Amazon cloud, again using GPUs, but the performance is nowhere near what users get with bare metal; further, AWS has thus far not updated its GPU lineup to include the K80, so only the K40 is available.
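Ehlen's point about scale is straightforward to reproduce. The sketch below, using PyTorch and invented matrix sizes rather than anything from Loop's stack, times the dense matrix multiplies at the heart of neural network training on CPU and GPU; the GPU's advantage only appears once the problem is big enough to keep its thousands of cores busy:

```python
# Illustrative benchmark, not Loop's workload: dense matrix multiplies
# (the core operation in neural network training) timed on CPU vs GPU.
import time
import torch

def time_matmul(n, device):
    """Time a single n x n matrix multiply on the given device."""
    a = torch.randn(n, n, device=device)
    b = torch.randn(n, n, device=device)
    if device == "cuda":
        torch.cuda.synchronize()  # make sure setup is done before timing
    start = time.perf_counter()
    _ = a @ b
    if device == "cuda":
        torch.cuda.synchronize()  # wait for the GPU kernel to finish
    return time.perf_counter() - start

if torch.cuda.is_available():
    time_matmul(64, "cuda")  # warm-up: absorb one-time CUDA init cost

for n in (256, 8192):  # a "small" and a "large" problem size
    cpu_t = time_matmul(n, "cpu")
    if torch.cuda.is_available():
        gpu_t = time_matmul(n, "cuda")
        print(f"n={n}: cpu {cpu_t:.4f}s  gpu {gpu_t:.4f}s  "
              f"speedup {cpu_t / gpu_t:.1f}x")
    else:
        print(f"n={n}: cpu {cpu_t:.4f}s (no GPU available)")
```

On the small multiply the GPU often loses to the CPU once kernel launch overhead is counted; at the larger size the parallel hardware pulls far ahead, which is exactly the effect Ehlen describes.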

The single-node appliance, with its two K80s and host processors along with the Loop AI software stack, runs around $13,000. But for companies that need to better understand how large volumes of textual data fit together and to get a bigger picture around a particular topic, it is a small price to pay, especially since there are only a few other offerings powerful enough to sift through petabytes of information, says Loop AI Labs CEO Gianmauro Calafiore. Although he was unable to name customers, he said the company has users in healthcare as well as some large telco customers in Asia.

The company believes that semantic deep learning networks are the next unexplored frontier for large companies that need to collate and understand big semantic data. By solving the hardware piece of the puzzle, Loop AI Labs has an advantage over competitors and a way to carve out a slice of that potential market by putting deep learning at scale within reach.
