OpenAI, Hyperscalers See GPU Accelerated Future for Deep Learning

As a former research scientist at Google, Ian Goodfellow has had a direct hand in some of the more complex, promising frameworks set to power the future of deep learning in coming years.

He spent his first years at the search giant chipping away at TensorFlow and adding new capabilities, including a new element of the deep learning stack: generative adversarial networks. And as part of the Google Brain team, he furthered this work and continued to optimize machine learning algorithms used by Google and, now, the wider world. Goodfellow has since moved on to the non-profit OpenAI, where he is further refining what might be possible with generative adversarial networks.

The mission of OpenAI is to develop open source tools to further many of the application areas showcased this week at the GPU Technology Conference in San Jose, where the emphasis was placed squarely on the future of deep learning and, of course, the role that Nvidia’s accelerators will play in the training and execution of neural networks and other machine learning workloads. There has been a fair bit of talk about VR and gaming as well, but for a company that is placing its bets on where the big money for its graphics chips will be in the next decade, the focus is likely not misplaced.

What is interesting about the GTC event is that nearly every major company with deep roots in deep learning is here, talking publicly about why acceleration is critical for such workloads: Facebook, Baidu, Alibaba, Twitter, and of course OpenAI, which develops open software but keeps a keen eye on what shape the hardware needs to take. What is also notable about Goodfellow’s work (and that at the aforementioned companies) is that without acceleration, the training and execution would be far less efficient, and perhaps even impossible at scale. And if there is anything that is important to the companies present, ultimately, it is scale.

As CEO Jen-Hsun Huang noted in his keynote, this is a foundational year for AI’s propulsion into the mainstream, and there has been no shortage of examples of what the latest GPU-accelerated systems mean for a rapidly changing base of applications that go far beyond mere image and speech recognition. Goodfellow’s work at OpenAI on generative adversarial networks highlighted how a new generation of hardware, matched with fresh algorithmic capabilities like the approach he and his teams have developed, will keep pushing deep learning forward. And as Nvidia and other hardware makers pivot to meet the swell of deep learning interest, the application base will extend in the coming couple of years into the enterprise. The data is there. The hardware is there. The algorithms are there, and so too are the frameworks, including those developed by Goodfellow and comrades.

What the work at OpenAI, as well as on frameworks like TensorFlow and many other efforts, signifies, at least in 2016, is that we are at the edge of a revolution in computing. In his keynote yesterday, Huang introduced a “new model of computing” wherein AI is the new platform for both hardware and software. He said 2015 was a transformative year for this revolution, but as we listen to sessions at GTC that highlight the convergence of the right hardware, algorithms, and development tools with a rising tide of new demand for applications, now that the data is finally there, accessible, and fit for deep analysis (automatic, deep analysis, no less), it is clear we are at the edge of something much more profound than images painting themselves based on samples (not to diminish the creativity and brilliance of the developers there).

Given such convergence, and the fact that Nvidia, a prime player in the acceleration of training and inference of deep learning workloads, is arming a new generation with tools to build more advanced, smarter applications on the backs of OpenAI and TensorFlow efforts, in the next couple of years we’ll see a shift. That shift will be from consumer and “wow factor” examples of GPU-accelerated neural network-driven applications to practical business and research applications. Things that could potentially make a difference in intelligence, medicine, finance, and general enterprise.

Although there is an Nvidia bent to this piece, since we are here at GTC this week to understand what is happening on the ground in GPU computing, the mainstream arrival of the deep learning technologies on display is still some time off. While the OpenAI work on creating smarter neural networks by pitting two networks against one another so that each pushes the other to improve (hence the “adversarial” nature of Goodfellow’s work, sketched in the code below), or the work on TensorFlow, Caffe, and other frameworks, is important, it has yet to hit wider commercial appeal beyond the fringe (but promising) cases here. For instance, in addition to the OpenAI work on display, iQiyi is talking about using CNNs to power video recognition to deliver more context-aware ads, Nvidia architects are showing what’s possible using deep learning across large aerial image datasets to understand features for safety and productivity, and a number of earth systems researchers from around the world are showing how machine learning, particularly with GPU acceleration, is powering new approaches to climate, atmospheric, and geologic research.
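To make that adversarial framing concrete, here is a minimal, hypothetical sketch of the two-network game in PyTorch. This is not OpenAI’s or Goodfellow’s actual code; the network sizes, learning rates, and toy data are stand-ins rather than anything used in practice. The idea is that a generator learns to produce samples a discriminator can no longer distinguish from the real training set, while the discriminator keeps learning to tell them apart.

```python
import torch
import torch.nn as nn

# Toy dimensions for illustration only; real image GANs use convolutional nets.
NOISE_DIM, DATA_DIM = 16, 2

# Generator: maps random noise to a fake "sample".
G = nn.Sequential(nn.Linear(NOISE_DIM, 64), nn.ReLU(), nn.Linear(64, DATA_DIM))
# Discriminator: scores how likely a sample is to be real.
D = nn.Sequential(nn.Linear(DATA_DIM, 64), nn.ReLU(), nn.Linear(64, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()


def real_batch(n=64):
    # Stand-in for real training data (e.g. pixels of Romantic-era paintings).
    return torch.randn(n, DATA_DIM) * 0.5 + 2.0


for step in range(1000):
    real = real_batch()
    noise = torch.randn(real.size(0), NOISE_DIM)
    fake = G(noise)

    # 1) Train the discriminator to tell real samples from generated ones.
    d_loss = (bce(D(real), torch.ones(real.size(0), 1)) +
              bce(D(fake.detach()), torch.zeros(real.size(0), 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # 2) Train the generator to fool the (just-updated) discriminator.
    g_loss = bce(D(fake), torch.ones(real.size(0), 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```

In practice the same loop runs over image pixels with convolutional generators and discriminators, which is exactly where GPU acceleration earns its keep.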

Beyond the launch of new libraries to support deep learning, including GIE (more about that here), the announcement of the deep learning (not to mention HPC and hyperscale) oriented Pascal architecture, and a new deep learning appliance to help new users onboard their ideas, the GTC event was rife with examples of how the hardware is being spun into action to take deep learning to new levels. That includes the training of generative adversarial networks, which in yesterday’s keynote were showcased for their ability to take training samples of Romantic-era paintings and, guided by text descriptors, “paint” an original image based on terms like “pastoral scene,” “beach,” or “landscape with forest.” While this is bad news for fleshbag painters, it signifies a widening scope of potential new, creative uses of GPU-backed systems to create realistic renderings, narratives, and other multimedia experiences simply by learning from human-produced artifacts (paintings, images, audio, and more).
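The text-to-painting demo described above amounts to conditioning the generator on a description of what to paint. A rough, hypothetical sketch of that extra step is below, again in PyTorch and again with illustrative names and sizes rather than the actual models shown in the keynote; it assumes the text descriptor has already been turned into an embedding vector.

```python
import torch
import torch.nn as nn

# Illustrative sizes; a real text-conditioned image GAN is far larger.
NOISE_DIM, TEXT_DIM, IMG_PIXELS = 16, 8, 64 * 64


class ConditionalGenerator(nn.Module):
    """Generator that sees random noise *and* an embedded text prompt,
    so the same noise can be steered toward "pastoral scene" or "beach"."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(NOISE_DIM + TEXT_DIM, 256), nn.ReLU(),
            nn.Linear(256, IMG_PIXELS), nn.Tanh())

    def forward(self, noise, text_embedding):
        # Concatenate noise with the text embedding before generating pixels.
        return self.net(torch.cat([noise, text_embedding], dim=1))


G = ConditionalGenerator()
prompt = torch.randn(1, TEXT_DIM)              # stand-in for an embedded descriptor
image = G(torch.randn(1, NOISE_DIM), prompt)   # one 64x64 "painting", flattened
```

Training such a conditional generator follows the same adversarial loop sketched earlier, with the discriminator also given the text embedding so it can penalize images that do not match their descriptions.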

As we should note, Nvidia is not the only hardware maker watching this space closely. From FPGAs to smarter software that can make far more efficient use of more vanilla, non-accelerated hardware, the offerings to speed training and execution will keep mounting and diversifying as the real promise of practical commercial applications for deep learning appears. This is still some time off, at least for the general enterprise (beyond top-tier, research-centric companies), but Nvidia has a clear lead and has pushed a very large stake into the ground, something that will be aided as its hardware partners, including Cray, HP, and others, start pushing systems like this one designed to tackle this emerging market.


1 Comment

  1. Well, GANs could be seen as a cut-down flavour of LSTM or a special form of RNN where the gating is controlled via certain previous states. Not really revolutionary and hardly used.

    I don’t buy this whole nVidia hype, to be honest. The data might be there, but without huge labeled training data, of which there is plenty in speech, text, and certain areas of image recognition, all Deep Learning algorithms are dead in the water and a total non-starter, as they are 95% supervised learning.
