Google’s Cloud Platform is the relative newcomer on the public cloud block, and has a way to go before it is in the same competitive sphere as Amazon Web Services and Microsoft Azure, both of which deliver a broader and deeper range of offerings and larger infrastructures.
Over the past year, Google has promised to rapidly grow the platform’s capabilities and datacenters and has hired a number of executives in hopes of enticing enterprises to bring more of their corporate workloads and data to the cloud.
One area Google is hoping to leverage is the decade-plus of work and research the search giant has put into machine learning and artificial intelligence (AI), both of which enterprises will need going forward if they want to take advantage of the massive amounts of data being generated and to develop more services for their customers. Last fall, Google created a machine learning group within its Cloud Platform business and brought on Fei-Fei Li – director of Stanford University’s Artificial Intelligence Lab and the Stanford Vision Lab, and a longtime AI and machine learning researcher – to head the unit. At the same time, Google introduced a series of application programming interfaces (APIs) for such tasks as natural language processing, language translation, and vision recognition to enable customers to leverage machine learning on the Google Cloud Platform.
The work Google has done around AI and machine learning is clearly an advantage officials want to run with as they look to better compete with AWS and Microsoft, as well as other vendors, such as IBM and Oracle, that are making a play for the cloud. Officials pressed that advantage again during the opening day of the Google Next show, which runs this week in San Francisco. During the lengthy keynote headlined by Diane Greene, senior vice president of Google’s cloud efforts, company officials outlined new moves to make machine learning capabilities more widely available on the Google Cloud Platform.
Those moves include making the Google Machine Learning Engine available on the company’s cloud platform, introducing a new API that will leverage machine learning capabilities to more easily search video content, and buying Kaggle, a seven-year-old online community of data scientists and developers that runs contests around machine learning and enables members to share code. The acquisition will enable community members to more easily access the machine learning tools on the Google Cloud Platform.
At the same time, some Google Cloud Platform customers talked about using the cloud’s AI capabilities with their own services. For example, eBay chief product officer R.J. Pittman announced that the company’s new chatbot – called Shopbot – can be accessed through Google Home, understands natural language, and learns the shopping habits of the user, eventually becoming able to offer such services as recommendations.
All of the announcements are part of an effort by the company to make machine learning and AI more widely available to people, Li said during the keynote address, adding that “Google Cloud is democratizing AI.”
Google had been offering the Machine Learning Engine on the cloud platform in beta, but now it is generally available, according to Li. The technology is a managed service that customers can use to build machine learning models that – with the help of Google’s infrastructure – can handle datasets of any size. The Machine Learning Engine is based on the search giant’s TensorFlow framework and can access data from such Google services as BigQuery and Google Cloud Storage. Cloud Datalab, a tool for analyzing and visualizing data and building machine learning models on the cloud platform, also became generally available. It runs atop the Google Compute Engine.
In addition, Google’s new Video Intelligence API enables developers to create tools that can more easily search and find information in videos stored on Google Cloud Storage. The API can identify objects within a video, detect scene changes, and enable users to search for entities throughout their video library. According to Li, such search and identification of video have been a challenge for developers: most image recognition and tagging capabilities have been available only for still images, and what video search existed relied on manual tagging. With the new Video Intelligence API, “we are beginning to shine light on the dark matter of the digital universe,” she said. In a demo, software leveraging the API identified multiple objects in a video commercial, including a dachshund. In another demo, the presenter searched for “beach” and was presented with multiple videos that included beach scenes.
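The workflow shown in the demos – tag each video with entity labels, then search the library by entity – can be sketched with a small, self-contained toy in plain Python. This is an illustration of the idea only, not Google’s actual Video Intelligence client library, and the catalog of labels below is invented sample data:

```python
# Toy sketch of label-based video search: each video carries a list of
# entity labels (the kind of output a video-annotation API might return),
# and a query such as "beach" finds every video whose labels match.
# The catalog is invented sample data, not real API output.

video_labels = {
    "commercial_01.mp4": ["dachshund", "dog", "car", "city"],
    "vacation_02.mp4": ["beach", "ocean", "sunset"],
    "surf_highlights.mp4": ["beach", "surfboard", "wave"],
}

def search_videos(catalog, query):
    """Return the sorted names of videos whose labels contain the query entity."""
    query = query.lower()
    return sorted(
        name
        for name, labels in catalog.items()
        if query in (label.lower() for label in labels)
    )

print(search_videos(video_labels, "beach"))
# ['surf_highlights.mp4', 'vacation_02.mp4']
```

The point of the sketch is the inversion it enables: once labels exist per video, entity search across a whole library reduces to a lookup, which is what previously required manual tagging.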
Google also announced the opening of its Advanced Solutions Lab in Mountain View, in which customers can work with machine learning experts at Google to develop solutions that can address their specific needs. They also can get training on machine learning development and how to use the Machine Learning Engine.
Machine learning and AI will be a key tool in Google’s toolbox as it looks to grow its cloud capabilities and differentiate itself from its larger competitors. They are part of a larger effort by Google to accelerate the growth of its cloud. Eric Schmidt, executive chairman of Google’s parent company Alphabet, said the company has spent $30 billion on its cloud strategy, and urged enterprises and developers to come aboard. “We’re here for real,” Schmidt said. “This is an incredibly serious mission. The company has got the money, the means, and the commitment to pull off a new platform of computation globally for everyone who needs it. Please don’t attempt to duplicate this. You have better uses of your money.”