A New Frontier of AI and Deep Learning Capabilities

In today’s digital climate, organizations of every size and industry are collecting and generating enormous amounts of data that can potentially be used to solve the world’s greatest problems, from national security and fraud detection to scientific breakthroughs and technological advancement. However, traditional analysis techniques cannot deliver automated, real-time insights from these rising data volumes fast enough, and artificial intelligence (AI) is becoming vital to extracting the full value of scientific and business data.

But is traditional AI enough?

The evolution of Big Data is driving a major paradigm shift in AI, increasing the need for high performance computing (HPC) technologies that can support high performance data analytics (HPDA). According to an IDC report, the HPDA server market is projected to grow at a 26% CAGR through 2020, adding $3.9 billion in revenue by 2018.

Thanks to robust HPC systems, compute capacity and data handling have become powerful and affordable enough that many organizations are beginning to invest in a new frontier of AI and deep learning capabilities. HPC solutions coupled with advanced data infrastructures are eliminating the need for costly, time-consuming manual calculations and laying the groundwork for the next generation of AI that can rapidly automate and accelerate data analysis.

Deep learning (training and inference modeling) is a form of AI-based analytics that uses pattern-matching techniques to analyze vast quantities of unstructured data. Much like the neural pathways of the human brain, these networks of hardware and software rely on training, generic code, and pattern recognition to analyze video, text, image, and audio files in real time. Deep learning systems then observe, test, and refine information from core data centers to the intelligent edge, converging datasets into concise, actionable insight. The problem is that learning takes time.
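To make the training and inference distinction concrete, here is a minimal sketch using PyTorch (the framework, model size, and data are illustrative assumptions, not HPE’s stack): training repeatedly adjusts a network’s weights against labeled examples, while inference applies the finished network to new data.

```python
# Minimal sketch (illustrative only): training vs. inference on a tiny classifier.
import torch
import torch.nn as nn

# A small network standing in for a production deep learning model.
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 128),
    nn.ReLU(),
    nn.Linear(128, 10),
)
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# Training: adjust weights so predictions match labeled examples.
def train_step(images, labels):
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()   # backpropagate the error through the layers
    optimizer.step()  # update the weights
    return loss.item()

# Inference: apply the trained network to new, unlabeled data.
@torch.no_grad()
def predict(images):
    return model(images).argmax(dim=1)

# Dummy batch to show the shapes involved (28x28 grayscale images).
images = torch.randn(32, 1, 28, 28)
labels = torch.randint(0, 10, (32,))
print(train_step(images, labels))
print(predict(images[:4]))
```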

Dr. Goh of HPE offers this example in his interview on the trends of Big Data and deep learning: Google conducted an experiment to build a large-scale deep learning software system using cat videos. The team began by taking millions of pictures of cats and breaking them down into hierarchical inputs (e.g., a pixel of fur, a whisker, or a paw). Using complex machine learning algorithms, the machines analyzed multiple layers of inputs over the course of days, weeks, and even months, until they could effectively make decisions on their own.
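As a rough illustration of that hierarchy (a toy sketch in PyTorch, not Google’s actual system), stacked convolutional layers build increasingly abstract features, from edges and fur textures up to whole object parts, before a final layer makes the cat-or-not decision:

```python
# Toy sketch of hierarchical feature learning (illustrative, not Google's system).
import torch
import torch.nn as nn

hierarchical_cat_detector = nn.Sequential(
    # Layer 1: low-level features such as edges and texture fragments.
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    # Layer 2: combinations of low-level features (whisker-like strokes, fur patches).
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    # Layer 3: larger object parts (ears, paws) assembled from earlier layers.
    nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    # Final decision: "cat" vs. "not cat".
    nn.Linear(64, 2),
)

scores = hierarchical_cat_detector(torch.randn(1, 3, 64, 64))
print(scores.shape)  # torch.Size([1, 2])
```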

For today’s developers, the objective is to enhance deep learning capabilities in order to extract insight as quickly and accurately as possible. Dr. Goh explains, “Enterprises want to learn fast. If you don’t want to take weeks or months to do learning because of the massive amount of data you have to ingest, you must scale your machine. This is where we come in. You have to scale the machine because you can’t scale humans.”
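What “scaling the machine” can look like in code, as a hedged sketch assuming a multi-GPU PyTorch node rather than any specific HPE configuration: the same training job is spread across every available GPU so a large batch is processed in far less wall-clock time.

```python
# Illustrative sketch of scaling a single training job across available GPUs.
# Large-scale systems typically use multi-node distributed training instead.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(1024, 256), nn.ReLU(), nn.Linear(256, 10))

if torch.cuda.device_count() > 1:
    # Replicate the model on each GPU; each replica processes a slice of the batch.
    model = nn.DataParallel(model)

device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)

batch = torch.randn(512, 1024, device=device)   # larger batches amortize overhead
labels = torch.randint(0, 10, (512,), device=device)

loss = nn.CrossEntropyLoss()(model(batch), labels)
loss.backward()
print(loss.item())
```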

Some HPC systems are already making huge strides in deep learning. Libratus, an AI powered by the Pittsburgh Supercomputing Center’s Bridges computer, recently took on four professional poker players in a “Brains vs. AI” competition. Over 20 days, the machine used strategic reasoning to perform risk assessments, run lightning-fast data analytics, and optimize its decision-making processes. At the end of the project, Libratus had bested its human opponents by more than $1.7 million, and each human finished with a negative chip count.

Deep learning systems require an order-of-magnitude increase in floating point performance compared to traditional HPC workloads, and delivering ever-increasing GPU capacity is critical to the massively parallel processing performance and scalability needed for success. HPE’s deep learning platforms feature NVIDIA Tesla GPUs, which are well suited to deep learning because of their high single-precision floating point performance. This is critical for deep neural network performance, particularly during training. Pairing deep neural networks with cost-effective compute platforms for inference helps promote data fusion, reduce training time, and enable ultra-scale real-time data analytics. Investing in a powerful deep learning infrastructure is key to improving time-to-insight and accelerating discovery across multiple sectors, including technology, life sciences, economics, government, and more.
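A hedged sketch of that training/inference split, assuming PyTorch and generic hardware rather than a specific HPE reference design: train on whatever GPU is available for floating point throughput, then move the finished model to cheaper CPU-class compute and shrink it for deployment.

```python
# Illustrative split: GPU for training throughput, cheaper CPU compute for inference.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(512, 128), nn.ReLU(), nn.Linear(128, 10))

# Training side: place the model and data on a GPU if one is available.
train_device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(train_device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
features = torch.randn(256, 512, device=train_device)
targets = torch.randint(0, 10, (256,), device=train_device)
loss = nn.CrossEntropyLoss()(model(features), targets)
loss.backward()
optimizer.step()

# Inference side: move the trained weights to CPU and quantize them
# to shrink the model for lower-cost deployment.
cpu_model = model.to("cpu").eval()
quantized = torch.quantization.quantize_dynamic(
    cpu_model, {nn.Linear}, dtype=torch.qint8
)
with torch.no_grad():
    print(quantized(torch.randn(1, 512)).argmax(dim=1))
```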

Follow us on Twitter at @HPE_HPC for the latest news and updates.
