IBM has spent the past several years putting a laser focus on what it calls cognitive computing, using its Watson platform as the foundation for its efforts in such emerging fields as artificial intelligence (AI) and its successful spinoff, deep learning. Big Blue has leaned on Watson technology, its traditional Power systems, and increasingly powerful GPUs from Nvidia to drive its efforts not only to bring AI and deep learning into the cloud, but also to push AI into the enterprise.
The technologies are part of a larger push in the industry to help enterprises transform their businesses to take advantage of such trends as the rise of the cloud, the increasing use of mobile technologies and the skyrocketing growth of data that these companies are generating and need to process and analyze. Much of the work with AI, deep learning and analytics has been done in the cloud, driven by hyperscale cloud providers like Amazon Web Services (AWS), Microsoft Azure and Google Cloud. IBM also has put many of its capabilities into its own cloud.
However, there is a push among vendors like IBM, Microsoft and SAP to help enterprises adopt AI and deep learning in their own environments. IBM over the past couple of years has rolled out products aimed at enterprises, including PowerAI and the Data Science Experience. PowerAI is a platform that includes a package of common deep learning frameworks optimized to run on IBM’s Power architecture. The Data Science Experience is an interactive, collaborative cloud-based environment designed to be a place where data scientists can use such tools as RStudio, Jupyter, Python, Scala, Spark and IBM’s Watson Machine Learning technology to derive insights from their data and turn them into information useful to their businesses. It was rolled out last year, first for the public cloud, and later was optimized for private clouds.
Both are designed to ease the path for enterprises that want to start using advanced AI technologies, according to IBM officials. The company offers an enterprise-scale version of Data Science Experience alongside a free desktop version.
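To give a sense of the kind of notebook work the Data Science Experience is built around, here is a minimal, hypothetical PySpark snippet of the sort a data scientist might run in a Jupyter notebook there. The file name and column names are placeholders for illustration, not taken from IBM’s documentation.

```python
# Hypothetical example: exploring transaction data with PySpark in a notebook.
# The CSV path and column names are placeholders, not from IBM's documentation.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dsx-exploration").getOrCreate()

# Load a CSV of transactions into a Spark DataFrame, inferring column types.
transactions = (spark.read
                .option("header", "true")
                .option("inferSchema", "true")
                .csv("transactions.csv"))

# Summarize spend per customer, sorted by total amount.
summary = (transactions
           .groupBy("customer_id")
           .agg(F.sum("amount").alias("total_spend"),
                F.count("*").alias("num_transactions"))
           .orderBy(F.desc("total_spend")))

summary.show(10)
```

The same notebook could just as easily be written in Scala or R, which is part of the appeal of a multi-language workspace.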
To help with this enterprise push, IBM is bringing the two together by integrating the PowerAI deep learning enterprise software distribution into the Data Science Experience. In a post on the company blog this week, Sumit Gupta, vice president of HPC, AI and analytics for IBM Systems, explained that with this integration, data scientists will be able to use the IBM tools to “develop AI models with the leading open source deep learning frameworks, like TensorFlow to unlock new analytical insights.”
“The Data Science Experience is a collaborative workspace designed for data scientists to develop machine learning models and manage their data and trained models,” Gupta wrote. “PowerAI adds to it a plethora of deep learning libraries, algorithms and capabilities from popular open-source frameworks. The deep-learning frameworks sort through all types of data — sound, text or visual — to create and improve learning models on the Data Science Experience.”
The capabilities provided by the integration of PowerAI and the Data Science Experience will help enterprises across a range of industries. Banks, for example, will be able to better detect credit card fraud or offer new products that clients find valuable, while manufacturers will be able to better predict machine failures before they happen.
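As a rough illustration of what such a model might look like, the sketch below defines a small TensorFlow (Keras) binary classifier of the kind a bank might train on transaction features to flag likely fraud. The feature count, layer sizes and random training data are placeholders, not IBM’s or any bank’s actual configuration.

```python
# Hypothetical sketch: a small fraud-detection classifier in TensorFlow/Keras.
# Feature count, layer sizes and the random training data are placeholders.
import numpy as np
import tensorflow as tf

NUM_FEATURES = 30  # e.g., engineered transaction features (illustrative)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(NUM_FEATURES,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # probability of fraud
])

model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy"])

# Stand-in data; in practice this would come from labeled transaction history.
X = np.random.rand(1000, NUM_FEATURES).astype("float32")
y = (np.random.rand(1000) > 0.95).astype("float32")  # roughly 5% fraud labels

model.fit(X, y, epochs=5, batch_size=64, validation_split=0.2)
```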
The latest effort comes at a time when new “AI technologies like machine learning and deep learning are fitting ever more snugly into the shifting enterprise landscape,” Gupta wrote. It also comes soon after IBM Research unveiled the Distributed Deep Learning library in PowerAI, which officials said cuts deep learning training times from weeks to hours and will further drive accelerated deep learning into the Data Science Experience environment.
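IBM’s Distributed Deep Learning library has its own integration with the supported frameworks, and its API is not reproduced here. As a generic illustration of the data-parallel idea behind such libraries, the sketch below uses TensorFlow’s built-in tf.distribute.MirroredStrategy to spread training of a small model across whatever GPUs are visible on one machine; it is a stand-in, not IBM’s method.

```python
# Generic data-parallel training sketch using TensorFlow's MirroredStrategy.
# This illustrates the idea behind distributed training; it does not use
# IBM's Distributed Deep Learning (DDL) API.
import numpy as np
import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()  # replicates the model on each local GPU
print("Replicas in sync:", strategy.num_replicas_in_sync)

with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(30,)),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy")

# Placeholder data; gradients from each replica are averaged every step.
X = np.random.rand(4096, 30).astype("float32")
y = (np.random.rand(4096) > 0.9).astype("float32")
model.fit(X, y, epochs=3, batch_size=256)
```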
The trend toward AI and machine learning for driving data analytics and bringing greater intelligence to software will continue accelerating. In a report earlier this year, Gartner analysts said that by 2020, AI technologies will be in almost every new software product and that AI will be a top-five investment priority for more than 30 percent of CIOs. At the same time, they warned that the widespread use of the term “AI” by vendors in promoting their products is sowing confusion among end users and obscuring the benefits of the technology.
A foundational element of IBM’s AI and deep learning push has been the use of Nvidia’s Tesla GPUs with its Power servers. The GPUs offer highly parallel computing capabilities through their thousands of cores, which is critical when trying to process and analyze the massive amounts of data being generated. IBM optimizes the deep learning frameworks in PowerAI – such as TensorFlow, an open source machine learning framework originally developed by Google – for the Power servers and leverages such technologies as Nvidia’s NVLink, a high-speed interconnect between CPUs and GPUs that can deliver speeds twice as fast as PCI-Express 3.0 links.
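As a small, framework-level illustration (not specific to Power systems or NVLink), the snippet below shows how a TensorFlow user can confirm that GPUs are visible and pin a computation to one of them; the actual data movement over NVLink or PCIe happens beneath this API.

```python
# Check which GPUs TensorFlow can see and run a computation on the first one.
# Illustrative only; the interconnect (NVLink vs. PCIe) is invisible at this level.
import tensorflow as tf

gpus = tf.config.list_physical_devices("GPU")
print("Visible GPUs:", gpus)

if gpus:
    with tf.device("/GPU:0"):
        a = tf.random.uniform((2048, 2048))
        b = tf.random.uniform((2048, 2048))
        c = tf.matmul(a, b)  # executed on the GPU's parallel cores
    print("Result computed on:", c.device)
else:
    print("No GPU found; the computation would fall back to the CPU.")
```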
But as the AI market grows, so will competition. A number of smaller companies are developing silicon specifically targeting AI and machine learning workloads. For example, Graphcore is developing its IPUs (intelligent processing units) and Poplar software framework, which are aimed at AI workloads in both the cloud and the datacenter. Officials have said the IPUs can deliver 10 times the performance of GPUs. Another company, Edico Genome, has developed the Dragen bio-IT processor, which leverages a field-programmable gate array (FPGA) and is aimed at genome sequencing workloads. The chip, which can be used in the cloud or on-premises, enables a system to analyze an entire genome in 20 minutes, a process that with traditional technologies can take three to five days, according to officials.