Here’s how the HPE Apollo 6500 with the NVIDIA Tesla V100 GPU delivers the processing throughput to power a robust insight-to-cash cycle.
As businesses embrace unprecedented advances in high-performance computing (HPC), it can be tough to stay both competitive and within your budget. But there’s good news: You can do more to fuel your growth and cut costs by applying artificial intelligence (AI) and deep learning to the data that defines your business, and in so doing improve your organization’s processes and strategies. To do that, you’ll need the power of HPC solutions like the HPE Apollo 6500 Gen10 Server to accelerate this translation of insights into cash.
The insight-to-cash cycle
As Gartner notes, “There is tremendous opportunity for enterprises to use artificial intelligence now to reinvent how business is done by cultivating and using intelligence.” Your competitors are certainly exploring this. Like you, they want to remain competitive in a market environment where data is the foundation of the next leap in innovation. Also like you, their budgets aren’t without limit. It’s a good thing, then, that one way in which AI and HPC solutions can give you a competitive edge comes down to cold, hard cash.
Wrapping your head around that statement may require a change in how you calculate ROI. Rather than just looking at costs of ownership and productivity benefits, you can assess the financial impact of HPC by adding another dimension of analysis: the insight-to-cash cycle.
Deep insights = deep profit
Deep learning, powered by HPC, can reveal insights that make your business run better and smarter. Examples include genomics-based diagnosis that renders health care treatments less expensive and bioinformatic research that speeds up the development of new drugs. The finance world can benefit from insights leading to advances in trading and fraud detection. Manufacturers can reduce defects through deep learning.
You can monetize data-driven insights—the faster, the better. If you’re in retail, for instance, deep learning might help you discover a concealed aspect of your customers’ sentiment that could boost your sales if you tap into it. The sooner you acquire this wisdom, the sooner your revenue (and earnings) will increase.
It’s all about the throughput
The current revolution in deep learning has emerged from recent massive advances in compute power. The new generation of HPC servers and NVIDIA graphical processing unit (GPU) chips make possible analytical workloads that would have been hard to imagine even a few years ago. The improvements in HPC show up in throughput, the measure of how much data an HPC solution can analyze in a given time period.
A productive insight-to-cash cycle relies on strong throughput. Raw data flows into one side of your HPC hardware; insights flow out the other. More throughput means more insights in a shorter time frame, and more insights mean more monetization for you. The benefits of high throughput don’t stop there, either. If you master high throughput in HPC-based deep learning, you may find yourself becoming more competitive and stepping out ahead of your industry peers in innovation and customer engagement.
NVIDIA amplifies HPC solutions
HPC solutions with the best throughput give you the most lucrative insight-to-cash cycles and the highest ROI. This is the goal of the close collaboration between HPE and NVIDIA®, as realized in the newly released HPE Apollo 6500 Gen10 server utilizing the NVIDIA Tesla V100 32GB GPU. Supported by robust hardware, networking, and memory, the Apollo 6500 Gen10 with NVIDIA® Tesla® V100 32GB expedites your AI and deep learning workloads at scale.
Each Apollo 6500 Gen10 contains eight V100 32GB GPUs, the highest number of GPUs in any HPE server. This architecture enables a theoretical peak performance of up to 125 teraflops in single-precision compute. Performance in this range facilitates the sort of massive throughput you need for a healthy insight-to-cash cycle.
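As a rough sanity check on that figure, assuming the commonly cited ~15.7 teraflops of FP32 peak per Tesla V100 (the published number for the SXM2 part; real workloads will see less), the eight-GPU arithmetic works out like this:

```python
# Back-of-the-envelope check of the Apollo 6500 Gen10's single-precision peak.
# Assumes ~15.7 TFLOPS FP32 per Tesla V100 SXM2 (NVIDIA's published peak);
# sustained throughput on real deep learning workloads will be lower.
PER_GPU_TFLOPS_FP32 = 15.7
GPUS_PER_SERVER = 8

peak_tflops = PER_GPU_TFLOPS_FP32 * GPUS_PER_SERVER
print(f"Theoretical FP32 peak: {peak_tflops:.1f} TFLOPS")
```

Eight GPUs at roughly 15.7 teraflops each gives about 125.6 teraflops, consistent with the "up to 125 teraflops" figure above.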
The Tesla V100 is the most advanced data center GPU ever built to accelerate AI, HPC, and graphics, offering the deep learning throughput of up to 100 CPUs in a single GPU.
NVLink connects the dots for learnings
The Apollo 6500 platform also includes PCIe and NVIDIA® NVLink™ GPU interconnects. These provide your IT department with flexible connectivity options. The second generation NVLink GPU interconnect aids the deep learning process through GPU-to-GPU communications. Other factors that enhance your deep learning and HPC include high-bandwidth, low-latency networking adapters. These are tightly coupled with the GPU accelerators, allowing the system to take full advantage of the network bandwidth. In addition, the Apollo 6500-Tesla V100 combination offers you:
- Dependable performance and reliability for your IT operations through power and cooling designed around 350-watt accelerators and consistent signal integrity
- Flexible support, enterprise options, and your choice of Ubuntu or an enterprise-grade Linux distribution (Red Hat, SUSE, or CentOS)
- Accelerator topologies to suit your varying workloads, such as an efficient hybrid cube mesh for NVLink and 4:1 or 8:1 GPU:CPU flexibility in PCIe
- Storage options that let you install up to 16 front-accessible SAS/SATA solid-state drives (SSDs), including up to 4 NVMe drives
- Resiliency, security, and simplicity for your HPC environment, with easier serviceability and upgrades, and an accessible, modular design
- Efficiency for your system management duties with HPE iLO 5 (Integrated Lights-Out) and firmware-level security
Other combos for different data centers
HPE and NVIDIA are delivering a leading portfolio of data center solutions that transform business and industry, so you can confidently run everything from basic applications to mission-critical workloads and deep learning.
The Apollo 6500 isn’t the first of HPE’s servers to pair innovatively with NVIDIA. The HPE DL380 Gen10, the world’s best-selling server, continues to deliver versatility and flexibility through the Tesla V100 full-height, half-length (FHHL) format. The DL380 supports up to three double-wide or five single-wide GPUs for workload acceleration.
Powered by Volta, the Tesla V100 for Hyperscale is an FHHL PCIe version of the Tesla V100, expressly designed for hyperscale data centers seeking the right mix of performance and power. The FHHL version is ideal for deep learning inference workloads: it takes advantage of the Volta architecture’s Tensor Cores and enables large-scale data centers to maximize rack performance efficiency within a fixed power budget.
Takeaways: high potential for better HPC and AI
Many of these potential sources of innovation may not have been visible to you before. Now, however, insights from HPC-driven deep learning open up opportunities to solve hidden problems and grow your business in bold new ways. The combination of HPE servers, most notably the Apollo 6500 Gen10, with the NVIDIA Tesla V100 32GB GPU makes this possible, providing extreme HPC performance that’s flexible enough to drive a range of strategic needs and giving you options for implementing the hardware deep learning requires. Together, the server and GPU provide considerable throughput potential for your implementation of HPC and your realization of the insight-to-cash cycle.
Meet Server Experts blogger Hugh Taylor, president of Taylor Communications, LLC. Hugh has created marketing content for such clients as Microsoft, IBM, SAP, Oracle, Google, and Advanced Micro Devices. While at the IBM Software Group, he developed a unique financial payback model to quantify ROI for social software in the corporate environment, for which he received the Marcom Platinum Award for Whitepaper Writing. As the PR manager for Microsoft’s SharePoint Technologies, Hugh was also responsible for generating the “Billion-Dollar Juggernaut” story that helped make SharePoint a high-profile product for the company, generating over 800 pieces of press coverage in one year. Hugh is a certified information security manager (CISM) and lecturer at the University of California, Berkeley’s Law School and Graduate School of Information and has authored four books as well as more than a dozen articles on business and technology.