Growing HPC and AI Convergence is Transforming Data Analytics

Data analytics and insights are fueling innovation across scientific research, product and service design, customer experience management, and process optimization. Real-time analytics with in-memory computing, big data analytics, insights from simulation and modeling fueled by high performance computing (HPC), and predictive analytics with artificial intelligence (AI) are core capabilities required by data-driven organizations looking to gain competitive advantage with their digital transformation initiatives.

HPC enables complex modeling and simulation to accelerate innovation in diverse areas—ranging from molecular chemistry to genome sequencing, energy exploration, and financial trading. AI is the foundation for cognitive computing, an approach that enables machines to mimic the neural pathways of the human brain to analyze vast datasets, make decisions in real time, and even predict future outcomes. To succeed, organizations need advanced technology solutions that support their HPC, accelerated analytics, and AI applications, enabling them to execute increasingly difficult tasks, forecast evolving trends, and take on some of the world’s biggest scientific, engineering, and technological problems.

HPC is a driving force behind business growth and innovation, empowering users to execute compute- and data-intensive workloads quickly and accurately. Purpose-built HPC solutions are accelerating performance and increasing operational efficiency like never before, allowing organizations to continuously scale and adopt cutting-edge tools to take on the next great challenge. HPC environments are also increasingly recognized as a strong foundation for AI and deep learning, because they provide the extreme scalability, performance, and efficiency these complex applications require. With compute innovations such as these, IT departments can confidently implement and accelerate both HPC and AI applications and put their explosive data volumes to work.


HPC and deep learning are starting to converge as organizations seek a comprehensive infrastructure solution to address evolving industry demands. Deep learning, a form of AI, uses layered neural-network models to perform supervised or unsupervised tasks (that is, learning to map inputs to known target outputs, or discovering structure in unlabeled data). This technique uses training algorithms and pattern recognition to process video, text, image, and audio files—and it requires HPC levels of performance and efficiency to do it. Deep learning systems then observe, test, and refine their models, turning raw data into actionable intelligence. According to a research report by Markets and Markets, the deep learning sector is expected to reach $1,772.9 million by 2022, rising at a CAGR of 65.2% as enterprises invest heavily in AI capabilities.
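To make the supervised case above concrete, here is a minimal sketch of what "learning to map inputs to known target outputs" means in practice: a tiny two-layer neural network trained by gradient descent on the XOR problem. Everything here (network size, learning rate, the XOR task itself) is an illustrative assumption, not something from the article; production deep learning runs the same loop at vastly larger scale, which is why it demands HPC-class hardware.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy supervised dataset: XOR inputs and their target outputs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Two-layer network: 2 inputs -> 8 hidden units -> 1 output.
W1 = rng.normal(size=(2, 8))
b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0       # learning rate (illustrative choice)
losses = []
for step in range(2000):
    # Forward pass: compute predictions and mean-squared-error loss.
    h = sigmoid(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    losses.append(np.mean((p - y) ** 2))

    # Backward pass: chain rule through both layers.
    dp = 2 * (p - y) / y.size * p * (1 - p)
    dW2 = h.T @ dp
    db2 = dp.sum(axis=0)
    dh = dp @ W2.T * h * (1 - h)
    dW1 = X.T @ dh
    db1 = dh.sum(axis=0)

    # Gradient-descent update.
    W1 -= lr * dW1
    b1 -= lr * db1
    W2 -= lr * dW2
    b2 -= lr * db2
```

After training, `losses` decreases from its initial value as the network fits the targets; real workloads repeat this same forward/backward/update cycle over millions of parameters and examples.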

Hewlett Packard Enterprise (HPE) is helping organizations make the most of their data with AI-driven analytics, providing optimal infrastructure platforms designed to harness deep insights with superhuman speed and precision. HPE has developed purpose-built HPC platforms designed to scale to support a variety of complex workloads. And with an expanded partner ecosystem, we are collaborating with industry experts like NVIDIA to bring deep learning capabilities from the core data center to the intelligent edge for all organizations. NVIDIA delivers best-in-class GPU acceleration, optimized for deep learning and accelerated analytics applications, to rapidly and efficiently process massive data volumes. These solutions deliver the ultimate performance for deep learning and analytics, and the highest versatility for all workloads, equipping organizations to operate as quickly and intelligently as possible.

One stellar example of this collaboration is a new supercomputer at the Tokyo Institute of Technology (TITech). Based on the HPE SGI 8600 and the NVIDIA Tesla P100, TSUBAME 3.0 is a converged HPC and deep learning platform that utilizes GPU accelerators to achieve optimal performance, efficiency, and accuracy. Satoshi Matsuoka, Professor and TSUBAME Leader, reports that TITech’s relationship with HPE will fuel a number of critical research projects and future workloads in HPC and deep learning, including the pursuit of the first exascale system.


To promote further innovation and partnership in the HPC community, Hyperion Research launched the HPC User Forum, a unique market intelligence service that brings together leaders from government, industry, and academic organizations around the globe to discuss the latest developments in HPC. Recently, Hyperion has broadened this event beyond classic HPC, adding a major AI component. Now, users can explore in depth the convergence of HPC and AI-driven analytics to learn how HPC is promoting a new era of insight.

This week, the HPC User Forum returns to Milwaukee, Wisconsin on September 5th–7th, where technology experts will meet with HPC users to discuss next-generation IT solutions. At 9:45 a.m. on September 7th, I will present the HPE Vendor Technology Update, discussing the exciting developments on our journey to next-generation HPC and AI innovation. Attendees will have the opportunity to learn about the newly announced HPE Apollo and HPE SGI portfolio as well as HPE’s efforts to simplify deep learning and analytics for all organizations, accelerate the mission to Mars, advance exascale computing, and much more.

Then at 3:45 p.m. on September 7th, HPE joins a session geared toward machine learning, deep learning, and early AI. Natalia Vassilieva of Hewlett Packard Labs will present HPE’s new Deep Learning Cookbook in her talk “Characterization and Benchmarking of Deep Learning.” The Deep Learning Cookbook is based on a massive collection of performance results for deep learning workloads using different hardware and software. This guide is designed to help customers streamline deep learning adoption for real-world applications.
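Benchmarking deep learning workloads across hardware and software stacks, as described above, boils down to timing representative kernels and reporting achieved throughput. The sketch below is a hypothetical illustration of that methodology (it is not taken from the Deep Learning Cookbook): it times a dense matrix multiply, the dominant operation in deep learning training, and converts the best wall-clock time into GFLOP/s. All sizes and names are assumptions for illustration.

```python
import time
import numpy as np

def benchmark_gemm(n=256, repeats=5):
    """Time an n x n matrix multiply and return achieved GFLOP/s."""
    a = np.random.rand(n, n)
    b = np.random.rand(n, n)
    a @ b  # warm-up run, excluded from timing

    times = []
    for _ in range(repeats):
        t0 = time.perf_counter()
        a @ b
        times.append(time.perf_counter() - t0)

    flops = 2 * n ** 3            # multiply-adds in an n x n GEMM
    return flops / min(times) / 1e9  # best-case throughput in GFLOP/s
```

Running `benchmark_gemm()` on different machines (or different BLAS/GPU backends) yields directly comparable throughput numbers; a real benchmarking guide would sweep many kernel shapes, frameworks, and batch sizes in the same spirit.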

The goal for HPE and NVIDIA is to help customers find new ways to harness massive amounts of data with the most powerful solutions for deep learning. As the demand for deep insight increases, we will strive to deliver dense and highly scalable solutions to accommodate these workloads, and explore the convergence of HPC and AI workloads onto a single set of infrastructure. To learn more about HPC and AI innovation, I invite you to follow me on Twitter at @VineethRam. And for more on how AI-based insights are expanding the scope of human knowledge, visit @HPE_HPC and @NvidiaAI.
