Hewlett Packard Enterprise has been busy this year in the HPC space. The company in June unveiled three highly scalable systems optimized for parallel processing tasks and artificial intelligence workloads, including the first system developed from the vendor’s $275 million acquisition of supercomputer maker SGI last year. The liquid-cooled petascale HPE SGI 8600 system is based on SGI’s ICE XA architecture and is aimed at complex scientific and engineering applications. The system scales to more than 10,000 nodes and uses Nvidia’s Tesla GPU accelerators and high-speed NVLink interconnect technology.
At the same time, HPE introduced the Apollo 6000 Gen10, a reworked version of the company’s platform that delivers more than 300 teraflops per rack, reduced latency and power consumption, and improved IOPS performance. The Apollo 10 Series systems include a 1U dual-socket server powered by Intel Xeon processors and Nvidia Tesla SXM2 GPUs that also leverages the NVLink interconnect and is aimed at deep learning and AI workloads. In addition, HPE worked with NASA to bring a supercomputer named the “Spaceborne Computer,” an Intel-powered Apollo 40-class system, to the International Space Station.
The systems were part of HPE’s larger strategy to continue targeting the HPC space as a growth area, competing with the likes of Cray and IBM to roll out systems aimed at such workloads as AI, deep learning, and data analytics. HPE’s HPC portfolio also includes other systems, such as the Superdome Flex server, as well as the company’s Performance Software Suite. Its deep learning offerings include Edgeline systems aimed at edge computing environments and an AI software framework that includes Bright Computing’s middleware and a choice of networking fabrics from Arista Networks, Intel, and Mellanox Technologies.
At the recent SC17 supercomputing conference, HPE turned its HPC focus to the enterprise, rolling out several new systems – including the dense and highly scalable Apollo 70, the first HPE HPC system powered by an Arm-based system-on-a-chip (SoC) – that are aimed at helping mainstream businesses more easily adopt AI and HPC applications.
The HPE systems are part of a larger trend in the industry of system vendors looking to bring HPC capabilities into the enterprise. Businesses are looking to adapt to the rapid changes underway in their industries, driven by such trends as cloud computing, mobility, and computing at the network edge. Data analytics is extremely important to enterprises, which are looking for ways to corral the massive amounts of data they are generating and analyze it in near real-time to gain faster insights and make business decisions more quickly. AI and machine learning are increasingly important to enterprises for analytics, and vendors are looking for ways to take HPC, AI, and other technologies that traditionally have been the purview of research, education, and government institutions and bring them into the mainstream.
At SC17, Dell EMC announced a number of engineered systems based on its PowerEdge C4140 server and leveraging Intel’s new “Skylake” Xeon SP processors and Nvidia’s Tesla GPU accelerators. Others falling in line with the HPC-to-the-enterprise push include OEMs like IBM (most recently through the integration of its PowerAI deep learning software with its Data Science Experience solution) and Lenovo, as well as software makers like Microsoft and SAP.
“We’re seeing more and more that HPC is becoming key to enterprises, particularly as enterprises are digitally transforming their businesses,” Bill Mannel, vice president and general manager for HPE’s HPC and AI Segment Solutions, told The Next Platform, adding that now is a good time to address the needs of mainstream businesses. “Our enterprise customers tend to be a little behind research and education customers in terms of demand for new technologies.”
But that demand is growing now. The dense, scalable compute and storage systems introduced at SC17 are designed not only to enable enterprises to embrace HPC and AI, but also to reduce the footprint of their datacenters and lower their overall IT costs. On the compute side, HPE rolled out the Apollo 70 along with the Apollo 2000 Gen10 system, a scale-out system optimized for HPC and deep learning inference work. It’s a 2U rackmount chassis that can mix and match ProLiant XL170r and ProLiant XL190r Gen10 servers, and can scale to up to 80 ProLiant servers in a 42U rack. It supports up to 12 large form-factor or 24 small form-factor disks and is powered by Intel Xeon SP chips with four to 24 cores. It supports Nvidia’s Tesla Volta V100 GPU accelerators and uses intelligent system tuning capabilities to accelerate application performance.
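The rack-density figures above can be sanity-checked with some quick arithmetic. This is a back-of-the-envelope sketch, not vendor data: it assumes, as the 2U chassis and 80-server figures imply, four half-width 1U servers per chassis.

```python
# Back-of-the-envelope check of the Apollo 2000 Gen10 density figures:
# up to 80 servers in a 42U rack, using 2U chassis.
# Assumption (not from the article): 4 half-width 1U nodes per 2U chassis.
CHASSIS_HEIGHT_U = 2
NODES_PER_CHASSIS = 4
RACK_HEIGHT_U = 42
NODES_QUOTED = 80

chassis_needed = NODES_QUOTED // NODES_PER_CHASSIS      # 20 chassis
rack_units_used = chassis_needed * CHASSIS_HEIGHT_U     # 40U of the 42U rack

print(chassis_needed, rack_units_used)  # 20 chassis fill 40U
```

Twenty chassis consume 40U of the rack, which leaves 2U of headroom for top-of-rack switching or power distribution — consistent with the quoted 80-server maximum.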
The Apollo 70, code-named “Comanche” inside of HPE, is powered by ThunderX2 SoCs, Cavium’s latest 64-bit Armv8-A chips; runs Linux distributions including Red Hat Enterprise Linux and SUSE Linux Enterprise Server for Arm; and leverages Mellanox’s InfiniBand and Ethernet fabrics for connectivity.
There is momentum these days to bring the Arm architecture into the datacenter, something that has been talked about for several years. Qualcomm last month unveiled its long-awaited Centriq 2400 processor – initially aimed at cloud providers and hyperscalers – and at SC17, Red Hat finally announced full support for the architecture in its RHEL OS. Cray also is using the ThunderX2 in some of its high-end supercomputers. There also continues to be movement in the market, with Marvell Technology announcing plans to buy Cavium for $6 billion. Mannel said he isn’t surprised Arm chips are getting a look from HPC companies.
“In HPC, customers are clearly comfortable with the latest technologies,” he said. “A lot of them will kick the tires early and participate in molding the ecosystem going forward.”
On the storage side, HPE unveiled the Apollo 4510 Gen10 system, optimized for object storage and offering up to 600 TB in a standard 4U server. The latest version has shrunk a bit from the previous generation’s 4 1/3U footprint, Mannel said. It holds 16 percent more Xeon cores (Xeon SP chips with four to 26 cores in this version) than the previous generation, supports NVM-Express cards, and provides a front-loading, dual-drawer design for easier maintenance. In addition, the company introduced the LTO-8 tape storage drive, which customers can use to offload primary storage to tape. It offers up to 30 TB of storage capacity per tape cartridge – double the capacity of the LTO-7 drives – and 20 percent faster performance, with transfer rates of up to 360 MB/s. It also supports the HPE TFinity ExaScale tape library, which provides up to 1.6 EB (that’s exabytes, an abbreviation we should start getting used to) of stored data.
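The tape figures above imply an interesting time-to-fill number, which a quick calculation makes concrete. One caveat worth stating plainly: tape vendors typically quote capacity with compression and throughput native, so this sketch only shows what the two quoted numbers imply together, not a measured result.

```python
# Rough time-to-fill for the LTO-8 figures quoted above:
# 30 TB per cartridge streamed at 360 MB/s.
CARTRIDGE_BYTES = 30 * 10**12   # 30 TB (decimal terabytes)
TRANSFER_RATE = 360 * 10**6     # 360 MB/s

seconds = CARTRIDGE_BYTES / TRANSFER_RATE
hours = seconds / 3600

print(round(hours, 1))  # roughly 23.1 hours to write a full cartridge
```

Nearly a full day to stream one cartridge end-to-end is a useful reminder of why tape remains an archival tier rather than primary storage, the capacity-per-dollar advantages notwithstanding.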