An Evolving View of AI Infrastructure

By all accounts, artificial intelligence is still in its early days. And a select group of people working in the field today have witnessed its evolution from an academic subject into a set of commercial applications and technologies that is slowly transforming every sector of the economy.

We are gathering some of these pioneers in AI infrastructure next week at The Next AI Platform event. Among those joining us for the series of live, on-stage interviews and panels is Clément Farabet, formerly the AI mastermind at Twitter.

Farabet has the distinction of being one of those people who has been involved in AI and machine learning almost from its inception, or at least before GPUs transformed it into the market force it is today. His early work at the University of New South Wales and New York University used FPGAs as the platform for his research in analyzing video and images with convolutional neural networks.

In 2013, Farabet co-founded MadBits, a startup that developed image and video classification technology based on deep learning vision techniques. When Twitter acquired the company in 2014, he built up his new employer’s AI capability, founding and managing a group that provided its machine learning technology infrastructure.

In 2017, Farabet joined Nvidia, where he now heads up the company’s AI Infrastructure/Platform team. The group’s application focus is self-driving cars, but it has a wider mandate to develop Nvidia’s machine learning and data science capabilities. When we spoke with him recently, he talked about how the field has changed over such a short timeframe and how his move to Nvidia gave him a broader appreciation of where the technology is going.

“What we’ve really seen in the past six or seven years is an explosion of deep learning, which is a specific subset of machine learning,” explained Farabet. That explosion has fueled the ability of social media companies like Twitter to connect with their user base in a much more intimate manner than was ever possible in the past.

Like most social media companies, Twitter had a tremendous amount of raw user data to draw on for all sorts of deep learning use cases. In this case, the collected data was used to drive Twitter’s recommendation systems for advertisements and other user content, as well as for optimizing the platform’s search and ranking capabilities. While working there, Farabet founded an internal group known as Cortex Core, which is charged with developing the deep learning platform that powers all of Twitter’s products and services.

When he moved to Nvidia, Farabet’s application focus shifted to autonomous vehicles, which had a very different data profile than that of social media. In this case, there was no large dataset to draw upon; Nvidia had to figure out how to find and collect the data from fleets of cars in the field. That required a different type of storage architecture and different kinds of data management.

All of this brought home the fact that deep learning is, at its core, a subset of data science, a discipline that goes back more than 20 years. Today data science is at the heart of most applications in the datacenter, deep learning or otherwise. Nvidia’s Saturn V system represents a sort of microcosm of how architectural thinking changed as data-fueled AI applications hit the mainstream and became larger and more diverse.

According to Farabet, even though Saturn V was cast as an AI supercomputer, the original machine launched in 2016 was very much a high performance computing architecture, suitable for traditional scientific workloads. In particular, that meant there was a big focus on the interconnect between compute nodes (the Saturn V nodes used EDR InfiniBand). But as neural networks grew in size and complexity, the emphasis changed from compute intensity to data intensity.

“That really shifted the way we thought about Saturn V,” said Farabet. “Instead of focusing on high performance compute and high performance interconnect between nodes, we started shifting our attention to the interconnect between storage and compute, and the cloud and compute.”
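
To make that shift concrete, here is a minimal, hypothetical sketch, written in PyTorch rather than drawn from Nvidia’s actual stack, of what a data-centric training loop looks like: background worker processes pull samples over the storage-to-compute path while the GPU trains on the batch already in memory, so throughput is governed by how fast data can be fed rather than by the node-to-node interconnect. The RemoteFrameDataset class and its simulated fetch latency are placeholders for a real object-store or fleet-ingest read.

import time
import torch
from torch.utils.data import Dataset, DataLoader

class RemoteFrameDataset(Dataset):
    # Stand-in for camera frames that live in remote storage; _fetch() is a
    # hypothetical placeholder for the actual object-store read.
    def __len__(self):
        return 64

    def _fetch(self, idx):
        time.sleep(0.01)  # simulate storage/network latency
        return torch.randn(3, 224, 224), idx % 10

    def __getitem__(self, idx):
        return self._fetch(idx)

def main():
    loader = DataLoader(
        RemoteFrameDataset(),
        batch_size=8,
        num_workers=2,     # worker processes prefetch batches in parallel
        pin_memory=True,   # page-locked buffers speed host-to-GPU copies
    )
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = torch.nn.Sequential(
        torch.nn.Flatten(), torch.nn.Linear(3 * 224 * 224, 10)
    ).to(device)
    opt = torch.optim.SGD(model.parameters(), lr=0.01)
    for images, labels in loader:
        logits = model(images.to(device, non_blocking=True))
        loss = torch.nn.functional.cross_entropy(logits, labels.to(device))
        opt.zero_grad()
        loss.backward()
        opt.step()

if __name__ == "__main__":
    main()

The point of the sketch is simply that once the feed from storage lags, faster compute or a faster interconnect between nodes buys nothing, which is the rebalancing Farabet describes.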

What followed was an expanding ecosystem of software that would support such a data-centric environment and its associated workloads. That brought in a whole new stack of tools and applications, including containers, lifecycle management packages, and software-defined storage. Needless to say, that evolution is far from over.

We will delve into all of these topics in much more depth on May 9 in San Jose. We hope you can join us; only a few passes are left, so register now.
