Tech vendors continue to push artificial intelligence (AI) and its various components – including deep learning and machine learning – into the enterprise. Hyperscalers and the HPC community are rapidly adopting the technologies, and enterprises stand to reap significant benefits by embracing them as well.
As we’ve noted many times here at The Next Platform, at the most basic level, machine learning and deep learning can enable enterprises to quickly sort through and analyze the massive amounts of data that they’re collecting to find patterns that can lead to better business decisions and more efficient operations. And there’s no lack of IT vendors working to help enterprises unlock these benefits.
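To make that "finding patterns" claim concrete: at its simplest, pattern discovery in business data often comes down to clustering – grouping similar records so that segments and outliers surface on their own. The sketch below is purely illustrative and not tied to any vendor's stack; it implements a minimal k-means over toy (spend, frequency) customer records using only the Python standard library:

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Minimal k-means over 2-D points; an illustrative sketch only."""
    random.seed(seed)
    centers = random.sample(points, k)
    for _ in range(iters):
        # Assign each point to its nearest center.
        clusters = [[] for _ in range(k)]
        for p in points:
            idx = min(range(k), key=lambda i: (p[0] - centers[i][0]) ** 2
                                              + (p[1] - centers[i][1]) ** 2)
            clusters[idx].append(p)
        # Move each center to the mean of the points assigned to it.
        for i, c in enumerate(clusters):
            if c:
                centers[i] = (sum(p[0] for p in c) / len(c),
                              sum(p[1] for p in c) / len(c))
    return centers

# Toy data: two obvious customer segments emerge as the two centers.
data = [(1.0, 1.2), (0.9, 1.0), (1.1, 0.8), (8.0, 8.2), (7.9, 8.0), (8.2, 7.7)]
print(sorted(kmeans(data, 2)))
```

Real enterprise pipelines of the kind IBM, Google, and HPE sell wrap far more sophisticated models around the same basic loop: ingest, group, inspect what falls outside the groups.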
In recent months, we’ve seen IBM – a company building its future in large part around its cognitive computing vision – roll out IBM Cloud Private for Data, a software platform layered on its IBM Cloud Private offering that includes a fast in-memory database and is designed to ingest huge amounts of data, analyze it on the fly, and essentially clean it up so enterprises can more easily use machine learning for research and application development. It’s only the latest move by Big Blue to spread the wealth when it comes to AI. Meanwhile, Google, long a proponent of making it easier for mainstream businesses to use AI technologies, in January introduced Cloud AutoML, a strategy to make machine learning services in the Google Cloud accessible not only to developers with AI expertise, but also to engineers with fewer AI skills. And at the SC17 show in November, Dell EMC introduced its latest efforts to bring HPC and AI capabilities to the masses: bundled engineered systems aimed at deep learning that use Nvidia GPU accelerators and Intel chips as well as Hadoop.
Also at the supercomputing show, Hewlett Packard Enterprise unveiled new systems – including the Apollo 70, the company’s first HPC system to use an Arm-based chip – that are designed to help enterprises more easily adopt AI and HPC applications. HPE also rolled out the Apollo 2000 Gen10, a 2U scale-out system optimized for HPC and deep learning inference. It’s also powered by Intel’s latest “Skylake” Xeon Scalable Processors and supports Nvidia’s powerful Tesla Volta V100 GPU accelerators.
HPE this week has come back with more hardware and services aimed at helping mainstream businesses scale their use of AI and deep learning throughout their operations.
“Global tech giants are investing heavily in AI, but the majority of enterprises are struggling both with finding viable AI use cases and with building technology environments that support their AI workloads,” said Beena Ammanath, global vice president for AI at HPE’s Pointnext business. “As a result, the gap between leaders and laggards is widening.”
Included in the rollout is the Apollo 6500 Gen10, which packs in eight V100 GPUs that HPE said will deliver three times faster model training than its predecessors and up to 125 teraflops of single-precision performance. In addition, the vendor has embedded Nvidia’s high-bandwidth NVLink 2.0 interconnect to drive faster communication between the GPUs in the system – up to 10 times faster data sharing rates than traditional PCIe Gen3 interconnects, the company said. The system includes both PCIe and NVLink options to give enterprises a wider range of choices depending on workload requirements.
The enhanced Apollo 6500, which will be released in May, is powered by Xeon SP 8100 and 6100 processors with up to 28 cores, and includes up to four high-speed, low-latency network adapters – Ethernet, Intel Omni-Path Architecture, InfiniBand Enhanced Data Rate (EDR) and upcoming InfiniBand HDR – per server, plus 24 DDR4 SmartMemory DIMMs that provide 3TB of memory. The system can hold up to 16 storage devices, including SAS/SATA SSDs with up to four NVM-Express drives, and runs the Red Hat, SUSE, CentOS and Ubuntu operating systems.
HPE also is offering a range of services designed to help guide enterprises in their AI and deep learning efforts. The company is launching Digital Prescriptive Maintenance Services, the first in what will be a series of AI solutions aimed at pre-defined industry use cases. The prescriptive services combine offerings from the vendor’s Pointnext services unit, such as consulting and implementation, with technologies and reference architectures from both HPE and third-party partners. The service will apply deep learning to data sources within the enterprise to automatically detect and prevent possible failures in industrial equipment and improve productivity.
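HPE has not published the internals of these services, but the underlying idea – spotting equipment drift before it becomes a failure – can be illustrated with a deliberately simple baseline: score each new sensor reading against a rolling window of recent readings and alert when it deviates sharply. Every name, value, and threshold below is an illustrative assumption, not HPE’s implementation:

```python
from collections import deque
from statistics import mean, stdev

def drift_alerts(readings, window=10, threshold=3.0):
    """Flag readings that deviate sharply from a rolling baseline.
    A toy stand-in for a far richer deep learning pipeline."""
    history = deque(maxlen=window)
    alerts = []
    for t, value in enumerate(readings):
        if len(history) >= 3:
            mu, sigma = mean(history), stdev(history)
            # Alert when the reading sits many standard deviations
            # away from the recent baseline.
            if sigma > 0 and abs(value - mu) / sigma > threshold:
                alerts.append((t, value))
        history.append(value)
    return alerts

# Toy vibration-sensor trace: steady around 1.0, then a spike at t=8.
trace = [1.0, 1.1, 0.9, 1.0, 1.05, 0.95, 1.0, 1.1, 5.0, 1.0]
print(drift_alerts(trace))  # → [(8, 5.0)]
```

A production prescriptive-maintenance system would replace this threshold rule with learned models and tie the alert to a recommended action, but the detect-early, act-before-failure loop is the same.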
New one-day Artificial Intelligence Transformation Workshops will help enterprises identify AI use cases within their businesses and create strategies to grow their use of AI. The Deep Learning Performance Guide, which is being added to HPE’s Deep Learning Cookbook released last year, is designed to help enterprises choose technologies and configurations for AI and deep learning based on benchmarking results and measurements in the customer’s environment.