The New Intelligence Economy, And How We Get There
October 28, 2016 Mark Hammond
Earlier this month, Samsung acquired Viv, the AI platform built by the creators of Siri that seeks to “open up the world of AI assistants to all developers.” The acquisition was largely overshadowed by the more high-profile news of Samsung’s struggles with its Galaxy Note smartphone, but make no mistake, this was a bold and impactful move by Samsung that aggressively launches the company into the future of smart, AI-enabled devices.
Viv co-founder Dag Kittlaus makes a compelling argument for why Samsung’s ecosystem serves as an invaluable launching pad for Viv’s goal of ubiquity – the electronics giant’s reach extending beyond just smartphones to encompass an entire range of appliances and, increasingly, embedded software. Kittlaus believes we are experiencing a new paradigm of the Internet, one that will be defined by connected AI and the emergence of a new Intelligence Economy.
Many in the industry have begun to draw the same conclusions, claiming that a new economy will be born from these massive advances in machine learning. A recent study by Bank of America Merrill Lynch projects the AI market, valued at $2 billion in 2015, to grow to $36 billion by 2020 and to $127 billion by 2025. That is a staggering amount of growth, but it’s not surprising.
Look at what Google, Facebook, IBM, and Microsoft are doing in AI. They are pouring billions of research dollars into their initiatives. And we can expect to continue to see M&A consolidation of the market, not just from these major players, but also companies like Samsung realizing the opportunity to enter and gain a foothold in the emerging Intelligence Economy.
Perhaps more important to achieving this new economy, however, is that many of these companies are beginning to open source some of their tools to the benefit of the broader developer ecosystem. These tools are helping to accelerate the pace of AI innovation at rates never before seen, and it’s a necessary step to achieve the kind of “AI everywhere” vision that many are anticipating.
Merely opening up AI platforms is not enough, though. Just because something is open source does not mean it’s accessible. The low-level technologies and resources Google and others have made available (TensorFlow, Torch, and so forth) are a boon for expert users, but they don’t get uninitiated developers off the sidelines and into the game. Despite all of the investment and growth, accessibility remains the single most pressing problem in AI.
We are nearing the end of an important phase in the development of AI, one dominated by a statistical, brute-force approach to programming AI that has relied on dramatic advances in compute. We’ve come a long way in the past few years, but we’re finally realizing that the statistical approach to AI is simply too complex to scale.
The next phase of AI will be dominated by efforts to make the technology widely accessible to developers and to distribute it across more of our devices. We simply face too large a talent gap in qualified AI experts and data scientists to meet the demands of the new economy.
Bridging The Talent Gap
Everyone wants to build smarter applications faster, but it is incredibly difficult for most organizations to hire the data scientists and artificial intelligence experts they need. This shortage is the most important problem we face today.
In 2012, Gartner said there would be a shortage of 100,000 data scientists in the United States by 2020. In 2014, Accenture found that more than 90 percent of its clients planned to hire people with data science expertise, but more than 40 percent cited a lack of talent as their number one problem. The demand isn’t letting up. In its 2016 Data Science Report, CrowdFlower found that 83 percent of respondents said there were not enough data scientists to go around.
Some larger players have begun a cycle of funding AI and data science programs in academia and then recruiting talent from those departments. We certainly need to work with universities to produce more data scientists and AI researchers, but this approach is a stop-gap measure for those with the deepest pockets, not a systematic solution.
We will always need more developers and data scientists. But does every developer also need to be a data scientist?
Rather than waiting for colleges and universities to grow a new talent pool from scratch, we need to find a way to deliver AI to the existing workforce of developers. And that shift is not without precedent.
Do As Databases Did
Before databases were so common that people took them for granted, people had to work at a really low level of detail if they were building systems that worked with data in sophisticated ways. If you wanted to build an inventory management system for a retail chain, you first had to worry about what tree structure you were going to use to store the data, when to rebalance it, and how you were going to store the data across multiple hard drives once it exceeded the capacity of one disk.
Databases helped to solve this problem, but they didn’t solve it by giving you a toolkit that collected every tree algorithm or striping algorithm you might possibly want to use. That sounds ridiculous, but that is where AI is today: it is a collection of algorithms. We are giving developers a list full of ingredients they don’t recognize and can’t put to work.
So what did databases do? They gave you a high level of abstraction: a server process that automated all that low-level machinery so you didn’t have to think about it. Instead, you could focus on the details pertinent to your business.
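The contrast is easy to see in code. Using Python’s built-in sqlite3 module as a stand-in for any modern database, the developer below never chooses a tree structure, rebalances an index, or thinks about disk layout – they declare what they want and the engine handles the rest. (The inventory table and values here are illustrative, not from any real system.)

```python
import sqlite3

# In-memory database: the engine, not the application developer,
# decides how rows are indexed, paged, and laid out in storage.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE inventory (sku TEXT, store TEXT, qty INTEGER)")
conn.executemany(
    "INSERT INTO inventory VALUES (?, ?, ?)",
    [("A1", "north", 12), ("A1", "south", 3), ("B2", "north", 7)],
)
# "Use a tree for lookups" is one declarative line; the balancing
# strategy is the engine's problem.
conn.execute("CREATE INDEX idx_sku ON inventory (sku)")

# The developer states *what* they want, not *how* to retrieve it.
total = conn.execute(
    "SELECT SUM(qty) FROM inventory WHERE sku = 'A1'"
).fetchone()[0]
print(total)  # 15
```

The SQL query is the whole point: it expresses business intent ("total stock of SKU A1") while the tree algorithms and storage striping from the retail example above stay invisible.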
Abstraction is the driving force behind technological progress. Developers have known this since time immemorial. Few developers want to reinvent the wheel, and we all benefit by building on the shoulders of giants. The opportunity cost is otherwise quite painful.
We should take a similar approach to AI. We should abstract away all the low level details and mechanics of AI, so more people can focus on the tasks at hand.
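As a sketch of what that abstraction might feel like, consider a hypothetical high-level interface where a developer supplies labeled examples and never touches the learning algorithm. The `Trainer` class and its method names below are invented for illustration – they are not a real library – and the trivial nearest-neighbor rule inside merely stands in for whatever machinery a real platform would hide.

```python
# Hypothetical API: the developer teaches by example; the choice of
# algorithm and its tuning stay behind the abstraction.
class Trainer:
    def __init__(self):
        self._examples = []

    def teach(self, features, label):
        """Record one labeled example of the desired behavior."""
        self._examples.append((features, label))

    def predict(self, features):
        """Internal detail the caller never sees: here, a trivial
        nearest-neighbor lookup stands in for real learning machinery."""
        def dist(example):
            return sum((a - b) ** 2 for a, b in zip(example[0], features))
        return min(self._examples, key=dist)[1]

# Domain knowledge is expressed as examples, not algorithms.
t = Trainer()
t.teach([0.9, 0.1], "spam")
t.teach([0.1, 0.9], "ham")
print(t.predict([0.8, 0.2]))  # spam
```

The design point is the interface, not the internals: like the database server process, a good AI abstraction lets developers pour in what they know and leaves the low-level mechanics to the platform.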
Intelligence At Scale
There are more than 1,000 developers for every data scientist in the world. (Recent research from Evans Data puts the number of developers worldwide at 21 million and the number of data scientists worldwide at about 18,000.)
These developers have expertise particular to their organizations. They know what they want to teach their systems and applications, but they do not have the low-level artificial intelligence knowledge to get the job done. Handing them a bunch of machine-learning algorithms is asking them to master the skills to create better learning systems, which is to say better students. What we really want to do is let them use their existing skills and become effective teachers.
Abstracting away these low-level mechanics unlocks AI for developers, enabling them to program their expertise into any application or system. If you employ any number of developers, you know that their value goes beyond their ability to code. Their expertise and creativity are the “intelligence” we should be trying to scale.
The intelligence economy is coming, and the question that remains is just how quickly, and effectively, we can get there. That race will be determined less by technological advancements than by accessibility. Once the average software developer can quickly, and competently, imbue applications with intelligence, the real intelligence economy will have arrived.
Mark Hammond is the founder and CEO at Bonsai AI, a machine learning platform designed to abstract away all of the underpinnings for algorithm training and inference. Before founding Bonsai AI, Hammond did stints at Microsoft, Numenta, and the Yale Neuroscience Department.