Microsoft is investing $1 billion in AI research company OpenAI to build a set of technologies that can deliver artificial general intelligence (AGI). The multiyear joint effort aims to construct a supercomputing capability in Azure that provides developers a platform for building AGI applications and delivering them to the public.
AGI, sometimes known as “strong AI,” has been the holy grail of the artificial intelligence community for over 50 years. According to Sam Altman, chief executive officer at OpenAI, the goal is to develop systems that display generalized intelligence, able to perform many different tasks at a superhuman level. That’s unlike today’s version of AI, sometimes referred to as “narrow AI,” in which applications are trained to solve particular problems like image recognition, language translation, and robotic control.
However, AGI is more than just the sum of its parts. The idea is that a general AI capability will be able to draw on learned skills and combine them in the way that humans would do, or in Altman’s telling, the way that superhumans would do. So for example, an autonomous truck driving through Europe would not only be able to navigate across multiple countries, but would also be able to develop optimal routes using traffic and weather intelligence, converse with clients about their deliveries in the appropriate local language, and coordinate with autonomous warehouses for unloading and loading merchandise.
“I think this will be the most important development in human history,” said Altman. “When we have computers that can really think and learn, that’s going to be transformative.”
Founded in 2015, OpenAI already has some deep-pocketed backers, including Elon Musk, Peter Thiel and Amazon Web Services, among other Silicon Valley notables. In aggregate, OpenAI’s funders have committed $1 billion to the organization, so Microsoft’s matching investment looms large. In fact, large enough that OpenAI agreed to port its services to Azure. In turn, Microsoft will become OpenAI’s preferred partner for commercializing new AI technologies.
How exactly that fits with OpenAI’s nonprofit status remains to be seen, but the relationship between the two entities will probably not be too different from that of Google and DeepMind. Unlike OpenAI, however, DeepMind is wholly owned by Google (both are now under Alphabet), which bought it in 2014 for $525 million. Why Microsoft didn’t do the same with OpenAI is not clear, although it’s conceivable that OpenAI’s other investors wanted to keep their options open.
Nevertheless, the matchup is a natural one, based on a relationship that began in 2016, when OpenAI selected Azure as its primary cloud platform. Today both organizations have ongoing efforts in AGI. In Microsoft’s case, the most visible example is Project Malmo, an AI research project that is developing technologies for self-learning in complex environments.
In OpenAI’s case, just about all of the research it does is aimed at artificial general intelligence, since developing AGI is the organization’s central mission. Its most notable project in this regard is Universe, which it describes as “a software platform for measuring and training an AI’s general intelligence across the world’s supply of games, websites and other applications.” Universe enables an AI to interact with a computer by looking at the pixels on a screen and, based on its interpretation of those pixels, operating the computer using a virtual keyboard and mouse.
Universe uses reinforcement learning to train an agent on a particular task and transfer learning to extend its acquired intelligence to new tasks. Transfer learning would seem to be a natural approach for AGI, since it carries skills learned on one task over to new ones. While humans are usually very adept at doing this, even the best transfer learning techniques currently employed fall short of their biological competition.
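The core appeal of transfer learning can be seen in a toy sketch (purely illustrative, with made-up data, and in no way OpenAI’s actual code): if a feature extractor has already been learned on an earlier task, a new task with only a handful of examples needs just a small “head” fitted on top, where a from-scratch model would be underdetermined.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: assume a feature extractor mapping 5-D inputs
# to 2-D features was already learned on a previous task; we take it
# as given here rather than training it.
W_feat = rng.normal(size=(5, 2))

# New task: only 4 labeled examples, fewer than the 5 input dims,
# but the targets depend on the same shared features.
X_new = rng.normal(size=(4, 5))
y_new = X_new @ W_feat @ np.array([0.5, 2.0])

# From scratch: 5 unknowns from 4 equations -> underdetermined fit.
w_scratch, *_ = np.linalg.lstsq(X_new, y_new, rcond=None)

# Transfer: freeze the extractor, fit only a 2-parameter head.
head, *_ = np.linalg.lstsq(X_new @ W_feat, y_new, rcond=None)

# Held-out data from the same new task.
X_test = rng.normal(size=(100, 5))
y_test = X_test @ W_feat @ np.array([0.5, 2.0])

err_scratch = np.abs(X_test @ w_scratch - y_test).max()
err_transfer = np.abs(X_test @ W_feat @ head - y_test).max()
print(err_transfer < err_scratch)  # transferred features generalize better
```

The gap between the two errors is the whole argument: the transferred features reduce the number of parameters the scarce new-task data must pin down.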
In general, Altman thinks AGI will be made possible by the development of much larger neural network models, and he is optimistic that the industry can build them, noting that AI has enjoyed an 8x annual performance increase for the last several years. Based on what he has seen from the AI chip developers and from the Azure roadmap, Altman is confident that the 8x pace can be maintained.
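Taken at face value, an 8x annual gain compounds dramatically. A quick back-of-the-envelope calculation (mine, not Altman’s) shows the cumulative speedup over a few years:

```python
# If AI performance improves 8x per year, the cumulative speedup
# after n years is 8**n.
for years in (1, 3, 5):
    print(years, 8 ** years)  # 1 -> 8, 3 -> 512, 5 -> 32768
```

At that rate, five more years of sustained 8x growth would mean hardware roughly 32,000 times faster than today’s, which gives a sense of why the claim hinges on both chip roadmaps and algorithmic improvements holding up.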
That suggests Microsoft is planning to build a more customized platform for AGI than the one it has now in Azure based on CPUs, GPUs, and FPGAs. Given the company’s enthusiasm for FPGA-powered AI, and the more powerful offerings of this technology from both Intel and Xilinx, it’s conceivable that Microsoft will base its AGI hardware on platforms like Intel’s Agilex or Xilinx’s ACAP, but there’s no guarantee of that. There is a flotilla of custom artificial intelligence processors heading toward the AI market. In Microsoft’s official press release on the partnership, there was no talk of preferred hardware or even architectures.
Microsoft and OpenAI are not the only ones taking aim at artificial general intelligence. As of 2017, a research survey found 45 AGI projects spread across 30 countries. The three largest were being undertaken by DeepMind, the European Human Brain Project (HBP), and OpenAI. Since Alphabet spends a few hundred million dollars per year on DeepMind, and the HBP is expected to cost over a billion dollars over its 10-year span, the $1 billion influx from Microsoft, plus the billion already committed by OpenAI’s other investors, should certainly stand the collaboration in good stead.
A specific timeline for delivering AGI technologies through the partnership was not offered.