Pankaj Goyal, VP, AI Business & Data Center Strategy, Hewlett Packard Enterprise


Deep learning is rapidly becoming one of the most sought-after fields in computer science. High performance computing (HPC) users can benefit immensely from this cognitive computing model, leveraging the extreme performance of HPC solutions to quickly train deep learning algorithms and accelerate time-to-intelligence. The problem is that many users lack the expertise or the architecture to facilitate AI and deep learning. To overcome this challenge, Hewlett Packard Enterprise (HPE) introduced the HPE Deep Learning Cookbook, a technology selection guide that gives users an easy-to-follow “recipe” for implementing deep learning. The Deep Learning Benchmarking Suite, the first component of this comprehensive tool, is now available for download on GitHub; the Performance Reporting Tool is coming soon. By promoting an open source environment, HPE is setting the standard for deep learning, expanding innovation, and pioneering the next generation of systems.

A new deep learning experience

Deep learning adoption comes with a wide range of challenges, from getting started, to scaling up or out, to harnessing the right on-premises technology. The HPE Deep Learning Cookbook is a one-of-a-kind technology guide designed to help users estimate and refine performance, characterize deep learning frameworks, and select the right hardware and software configurations.

HPE believes a tailored approach to deep learning will result in a flexible and optimal infrastructure for every user. This deeper look at end-to-end tools, applications, and services is empowering users across all sectors to implement and continue their journey toward deep insight. Based on their unique requirements, users can pursue five essential steps:

  1. Select the right deep learning framework for training (like TensorFlow, Caffe, Caffe2, or MXNet) and NVIDIA TensorRT™ for low-latency, high-throughput inference
  2. Determine the ideal hardware and software configuration (including platforms like HPE Apollo, HPE ProLiant, and HPE Edgeline systems)
  3. Leverage reference models (such as AlexNet, GoogleNet, DeepMNIST, VGG, ResNet, DeepSpeech, Seq2seq, and TextCNN)
  4. Identify performance bottlenecks to accelerate training and inference (a minimal timing sketch after this list illustrates this kind of measurement)
  5. Design an architecture to support on-premises deployments
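
To make these steps concrete, here is a minimal, purely illustrative sketch, not part of the Cookbook's own tooling, that times a few training steps of a small DeepMNIST-style network in TensorFlow on synthetic data and reports images per second. The model layout, batch size, and step counts are arbitrary assumptions chosen for brevity; in practice such a measurement would be repeated across the frameworks and configurations under consideration.

```python
# Illustrative throughput check: time a few training steps of a small
# DeepMNIST-style CNN on synthetic data and report images/second.
# The model, batch size, and step counts are arbitrary assumptions.
import time
import numpy as np
import tensorflow as tf

BATCH_SIZE = 64          # assumption: a typical small-batch setting
WARMUP_STEPS = 5         # discard early steps so setup costs don't skew timing
MEASURE_STEPS = 20

# Small CNN roughly in the spirit of the DeepMNIST reference model.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 5, activation="relu", input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPool2D(),
    tf.keras.layers.Conv2D(64, 5, activation="relu"),
    tf.keras.layers.MaxPool2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(1024, activation="relu"),
    tf.keras.layers.Dense(10),
])
model.compile(optimizer="adam",
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))

# Synthetic data keeps the sketch self-contained; real runs would use a real dataset.
images = np.random.rand(BATCH_SIZE, 28, 28, 1).astype("float32")
labels = np.random.randint(0, 10, size=(BATCH_SIZE,))

for _ in range(WARMUP_STEPS):
    model.train_on_batch(images, labels)

start = time.time()
for _ in range(MEASURE_STEPS):
    model.train_on_batch(images, labels)
elapsed = time.time() - start

print(f"~{MEASURE_STEPS * BATCH_SIZE / elapsed:.1f} images/sec on this configuration")
```

Comparing numbers like these across frameworks, reference models, and hardware options is, at much larger scale, the kind of evidence the Cookbook's benchmarking data captures.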

With the HPE Deep Learning Cookbook, users can enjoy faster adoption and deployment with zero guesswork, allowing them to operate more confidently and intelligently than ever. The tool also helps users design the ideal infrastructure to optimize performance, such as adopting the NVIDIA® Volta® architecture, the world’s most powerful GPU computing platform, designed to turbocharge complex workloads and bring AI to every industry. All of this can be accomplished while increasing cost savings, security, and scalability. Users can now focus on building and applying the right algorithms for a growing number of applications. And with guidance from HPE experts, designing the optimal infrastructure has never been easier. Those who utilize this powerful tool will gain a valuable competitive advantage as well as key insight into deep learning solutions.

Breakthroughs in deep learning utilization

In 2018, the HPE Deep Learning Cookbook will showcase a more inclusive collection of data, based on a variety of frameworks, HPE systems, and integrations of HPE systems. This update will encompass all available benchmarking data, including results from both internal and external users. Although it is impossible to run benchmarks on every viable combination of hardware, HPE has developed a way to predict performance on untested hardware configurations. An enhanced performance analysis tool will collect performance reports from actual models (how long it takes to train or run inference with a particular model on particular hardware) and extrapolate results to configurations that have not been specifically tested.
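
As a purely illustrative picture of that prediction step, one could fit a simple scaling curve to a handful of measured throughputs and query it for a GPU count that was never benchmarked. The article does not describe the tool's internal analytical models, so the model form and all numbers below are assumptions.

```python
# Illustrative only: fit a simple scaling model to hypothetical measurements,
# then predict throughput for an untested configuration.
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical measurements: (number of GPUs, observed training throughput in images/sec)
gpus_measured = np.array([1, 2, 4, 8], dtype=float)
throughput_measured = np.array([580.0, 1120.0, 2100.0, 3800.0])

def scaling_model(gpus, per_gpu_rate, efficiency):
    """Throughput grows with GPU count, discounted by a parallel-efficiency factor."""
    return per_gpu_rate * gpus * efficiency ** (gpus - 1)

params, _ = curve_fit(scaling_model, gpus_measured, throughput_measured, p0=[600.0, 0.95])

# Query an untested configuration, e.g. a 16-GPU system.
predicted = scaling_model(16, *params)
print(f"Predicted throughput for 16 GPUs: ~{predicted:.0f} images/sec")
```

The principle sketched here, interpolating and extrapolating from measured configurations, is what allows a knowledge base of benchmarks to speak to hardware that was never directly tested.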

If users run benchmarks on non-HPE hardware, they can easily submit their measurements to the HPE Knowledge Base, where they will become browsable. This enables HPE to connect performance measurements from different tools and libraries in a unified way. Furthermore, HPE will use these inputs to drive innovation and improve its HPC and deep learning solutions.
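
For illustration only, a submitted measurement might boil down to a small, uniform record like the one below. The field names are hypothetical, since the article does not describe the Knowledge Base schema; the point is simply that a common record format is what lets results from different tools and libraries be compared.

```python
# Hypothetical example of a unified benchmark record. Field names are illustrative only;
# the actual HPE Knowledge Base schema is not described in this article.
import json

measurement = {
    "framework": "tensorflow",
    "framework_version": "1.4.0",
    "model": "resnet50",
    "phase": "training",            # or "inference"
    "batch_size": 64,
    "precision": "float32",
    "gpu_model": "Tesla V100",
    "gpu_count": 4,
    "server_model": "non-HPE example server",
    "throughput_images_per_sec": 1450.0,
}

print(json.dumps(measurement, indent=2))
```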

For users who require more guidance, HPE will offer consulting services to examine hardware options as well as different types of models. Users will learn about deep learning algorithms and what can be achieved with their data, and use the HPE Deep Learning Cookbook to select the right framework.

Deep learning applications for every industry

AI software is on the rise as vendors build new applications, specialized hardware, and vertical-specific solutions to support deep learning. According to a recent report by Tractica, the AI market will skyrocket from $1.38 billion in 2016 to $59.75 billion by 2025, at a CAGR of 52%. However, only 20% of organizations utilize one or more AI technologies, and only 10% utilize three or more. HPE is preparing users to make the most of this trend, offering a comprehensive adoption tool unlike anything on the market.

HPE is uniquely positioned to assist users in their deep learning endeavors, offering industry-leading products and services to create robust, end-to-end solutions. In addition to HPE-engineered solutions, NVIDIA® is making it easier than ever for users to harness the incredible potential of an AI supercomputer. NVIDIA GPU technology offers the massively parallel compute power required to tackle immense challenges in HPC, healthcare, financial services, big data analytics, and many other fields. Solutions from HPE and NVIDIA are helping to empower change with AI and deep learning. And now, the HPE Deep Learning Cookbook is more widely available than ever.

The Deep Learning Benchmarking Suite (DLBS), a main component of the HPE Deep Learning Cookbook, is an automated benchmarking tool available to both internal and external users. It provides command line tools for consistent and reproducible benchmark experiments on various hardware and software configurations. Additionally, users can browse results from a variety of models, compare frameworks, and explore tutorials for running benchmarks. DLBS is now open source and available on GitHub, furthering HPE’s goal of promoting open source collaboration and empowering deep learning adoption across every organization. Furthermore, HPE has announced the Deep Learning Performance Analysis Tool, a web-based tool that will provide access to a knowledge base of benchmarking results, querying and analysis of existing results, and performance predictions based on analytical models. It is planned for release in early 2018.
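
The sketch below shows, in generic terms, the workflow such a suite automates: sweep a declarative grid of configurations, measure each one the same way, and record results in a uniform format. It is not the actual DLBS command-line interface or configuration format (see the GitHub repository for real usage), and the stand-in workload, framework labels, and output file name are assumptions made for this illustration.

```python
# Generic sketch of an automated benchmark sweep; NOT the DLBS interface.
# Every configuration in the grid is measured identically and logged uniformly,
# which is what makes results reproducible and comparable across machines.
import itertools
import json
import time
import numpy as np

def run_benchmark(batch_size, steps=10, size=1024):
    """Stand-in workload: time repeated matrix multiplies and report samples/sec.
    A real suite would launch an actual framework training or inference run here."""
    a = np.random.rand(batch_size, size).astype("float32")
    b = np.random.rand(size, size).astype("float32")
    start = time.time()
    for _ in range(steps):
        a @ b
    return steps * batch_size / (time.time() - start)

# Hypothetical experiment grid; the framework names are labels only in this sketch.
frameworks = ["tensorflow", "mxnet"]
batch_sizes = [32, 64, 128]

results = []
for framework, batch in itertools.product(frameworks, batch_sizes):
    results.append({
        "framework": framework,
        "batch_size": batch,
        "throughput_samples_per_sec": round(run_benchmark(batch), 1),
    })

with open("benchmark_results.json", "w") as fh:
    json.dump(results, fh, indent=2)
print(f"Recorded {len(results)} benchmark runs")
```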

With a broad landscape of frameworks, tools, and hardware, the HPE Deep Learning Cookbook is demystifying deep learning while laying the groundwork for the next generation of HPC systems. This open source tool will enable HPE to collaborate with other industry leaders to empower innovation. Today’s library of performance benchmarks indicates that most configurations utilize HPC systems with NVIDIA GPU acceleration. By investing in proven HPC deep learning solutions and accelerators like NVIDIA GPUs, users will be able to execute data-heavy workloads, seamlessly scale to train complex models, and accelerate time-to-intelligence. Together, these leaders are helping users find new ways to employ massive amounts of data for a range of deep learning applications.

The HPE Deep Learning Cookbook, in conjunction with powerful GPU accelerators from NVIDIA, will help transform business operations with cognitive capabilities and enable users to solve the world’s greatest challenges. I also invite you to follow me on Twitter at @pango for more information on the HPE Deep Learning Cookbook, and visit @HPE_HPC for the latest news and updates in deep learning innovation.
