IBM Back In HPC With Power Systems LC Clusters
October 8, 2015 Timothy Prickett Morgan
With the sale of its System x division to Lenovo Group last fall and the winding down of its BlueGene massively parallel computing line, IBM lost a lot of its market presence in the high performance computing space. But this week, Big Blue is back in the HPC game with lower-cost and denser Power8 systems that it says can give X86 machines running Linux for simulation, modeling, and analytics a run for their money.
“This is our re-entry into the HPC market,” says Sumit Gupta, vice president of OpenPower high performance computing at IBM and formerly the manager of Nvidia’s Tesla accelerated computing unit. “This is our first OpenPower-based HPC server, and our go-to-market in the HPC space is an accelerated approach, which means not just CPUs but using CPUs and GPUs together. We still obviously offer a CPU-only version for clients who have not migrated their code to take advantage of GPUs yet.”
Nvidia, Mellanox Technologies, and IBM are the key technology partners in the OpenPower Foundation, which is opening up the Power server platform and creating hybrid computing platforms to run traditional HPC and parallel analytics workloads alike and, importantly, that offer an alternative to Intel’s Xeon and Xeon Phi compute.
The machine that Gupta is referring to is code-named "Firestone" by IBM, and it makes use of the merchant silicon "Turismo" variants of the Power8 processor, which have slightly different clock speeds, thermals, and memory bandwidth specs compared to the processors that IBM employs in prior generations of its scale-out Power Systems machines, which were aimed at commercial users running AIX and IBM i workloads more than at those building scale-out Linux clusters. The Firestone systems are precursors to the "Garrison" Power8+ systems that will be the alpha machines for the future "Summit" supercomputer being installed at Oak Ridge National Laboratory and to the "Witherspoon" Power9 systems that will actually be used to build Summit. These systems are also instrumental for large-scale HPC systems at the United Kingdom's Daresbury Laboratory.
The Feeds And Speeds
The Firestone server comes in a 2U server chassis that has two Power8 processors; customers can choose an eight-core variant that runs at 3.32 GHz or a ten-core variant that runs at 2.92 GHz. Stephanie Chiras, director of scale out Power Systems at IBM, tells The Next Platform that these Power8 processors have a 190 watt thermal design point. IBM also has a 130 watt Power8 variant, which is not used in the Power Systems LC line but is being employed by other OpenPower partners in their respective systems.
The Power Systems S822LC, as the Firestone machine is formally called, has a total of 32 memory slots using DDR3 memory running at 1.33 GHz, and the system delivers 230 GB/sec of memory bandwidth. The machine has two storage bays, which can have either a 1 TB 7.2K RPM SATA drive or a flash SSD with either 480 GB or 960 GB capacity slotted into them. The machine has five PCI-Express 3.0 slots, with two of the x16 slots designated for accelerators. The system has a generic x8 slot, and then an x8 and an x16 slot that both support IBM's Coherent Accelerator Processor Interface (CAPI), which debuted with the Power8 chip and which offers coherent memory addressing across the Power8 processors and accelerators linked to the system over the PCI-Express bus. The current "Kepler" family of GPUs from Nvidia does not support the CAPI protocol, but other devices, including Mellanox InfiniBand adapters and FPGA cards, currently do.
The variant of the Firestone machine aimed at HPC shops is known as the Power Systems S822LC with product number 8335-GTA in the IBM product catalog, and it comes with two of Nvidia’s dual-GPU, top-of-the-line Tesla K80 coprocessors embedded in the system. They can be clocked up so long as they stay within a 300 watt thermal envelope. The base machine comes with 128 GB of memory, expandable to 1 TB. Pricing for this HPC variant has not been announced, and Gupta says that this configuration will be aimed mostly at HPC centers where customers want a direct engagement with IBM to buy fairly large clusters.
IBM expects the commercial variant of the Power S822LC – which has a slightly different configuration and does not have Tesla GPU accelerators bundled in – to be bought by the rack for cloud infrastructure and analytics workloads, but Chiras says the engagement model is a little different. Customers will be able to buy the systems online directly from IBM in preconfigured setups – what Big Blue is calling "waitless computing" – and will also be able to get customized versions through IBM's reseller channel.
IBM will continue to sell the plain vanilla Power Systems S812, S822, and S824 rack-based systems, which run its AIX and IBM i operating systems as well as Red Hat and SUSE Linux atop IBM's PowerVM hypervisor. IBM is continuing to sell the Power Systems S812L, S822L, and S824L machines, too, which are Linux-only versions that have lower hardware prices to better compete with the Linux-on-Xeon machines that are the preferred platforms for HPC and analytics workloads today.
Chiras says that the regular Power Systems and Power Systems L machines have more RAS features and a different means of putting main memory in the system using IBM’s “Centaur” memory buffer chip, and also offer higher memory bandwidth to customers who need that. Working with its OpenPower partners, IBM has tweaked the memory subsystem in the Power8 servers to lower the cost of memory. On IBM’s homegrown Power8 machines, the Centaur chip is embedded on memory sticks that go onto riser cards in the system. (This Centaur chip is rated at 20 watts and burns at 16.5 watts under load, says IBM.)
But with the Power Systems LC machines, IBM has worked with ODM partner Tyan to put the buffer chip right on the motherboard (used in the single-socket Power Systems S812LC we will tell you about in a minute) and worked with ODM partner Wistron to put the Centaur chip on a riser card for the Power Systems S822LC. In both cases, standard DDR3 memory can be plugged into the machines rather than the custom memory DIMMs that IBM had to make for its own earlier Power8 machines. This substantially drops the cost of the systems. (How much, IBM is not saying. But we will do some math on all this in the future.)
The commercial variant of the Firestone machine, called the 8335-GCA in the IBM catalog, does not have Tesla GPUs in it and comes with a base memory configuration of 32 GB. Unlike the HPC variant, its pricing and configuration information have been announced. This variant of the Firestone is aimed at compute clouds and any workloads that need a big chunk of CPU and memory bandwidth, and is being specifically aimed at the managed service providers that IBM is courting (including its own SoftLayer cloud, which will be rolling out Power-based systems for supporting Linux instances). By the way, customers can put their own Tesla GPUs into this system, but they top out at two and, for some reason, the power draw is capped at 225 watts.
A base Power Systems S822LC will cost $11,990 with two of the eight-core Power8 chips running at 3.32 GHz, that base 128 GB of main memory, two 1 TB SATA drives, and a four port Gigabit Ethernet card. Moving up to the ten-core Power8 chip running at 2.92 GHz, boosting memory up to 256 GB, and using a two-port 10 Gb/sec Ethernet card raises the price to $17,515. The high-end configuration uses a pair of the ten-core Power8 chips and 1 TB of memory with the same networking, and the price jumps to $36,999. (You can see where the real cost is in these systems.)
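A bit of back-of-the-envelope arithmetic on those three list prices shows why memory is where the real cost sits. The sketch below is illustrative only: the mid-to-high jump also keeps the same CPUs and networking, so the price delta roughly isolates what IBM is charging per gigabyte of DDR3 (the base-to-mid jump mixes in a CPU and NIC change, so it is not as clean).

```python
# Back-of-envelope on the Power S822LC price ladder, using the list
# prices IBM announced. The mid and high configurations differ only in
# memory (256 GB vs. 1 TB), so the delta approximates the memory cost.

mid = {"price": 17_515, "mem_gb": 256}     # 2x ten-core, 256 GB, 10 GbE
high = {"price": 36_999, "mem_gb": 1_024}  # 2x ten-core, 1 TB, 10 GbE

price_delta = high["price"] - mid["price"]   # $19,484
mem_delta = high["mem_gb"] - mid["mem_gb"]   # 768 GB

print(f"implied memory cost: ~${price_delta / mem_delta:.2f} per GB")
```

On these numbers, the last 768 GB of memory accounts for more than half the price of the top configuration, which is the point the parenthetical above is making.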
Single-Socket Power Aimed At Two-Socket Xeons
While two-socket servers dominate the datacenters of the world, IBM is trying to change that with the Power8 processors and single-socket machines that it says can do the work of a two-socket Xeon machine in many cases.
The Power Systems S812LC machine has one fewer processor than the S822LC, but still comes in a 2U rack enclosure. That means it has more room for storage. Interestingly, IBM is keeping the maximum of 32 memory sticks on this machine – which is code-named “Habanero” by the way – so this single-socket server can have up to 1 TB of DDR3 memory in it. The memory bandwidth on the system is cut in half to 115 GB/sec because there is only one socket, but Chiras tells The Next Platform that this is still more memory bandwidth than you can get out of a two-socket Xeon E5 v3 server. The Habanero server comes with 128 GB of memory in a base configuration, and has two rear-access disk slots plus twelve front-access disk slots. IBM is supporting 7.2K RPM SATA drives in 1 TB, 6 TB, and 8 TB capacities in the S812LC; the SSDs come in 960 GB only.
The Habanero S812LC server comes with four PCI-Express slots, which includes two x8 slots and two slots that support CAPI-enabled peripherals (one x8 and one x16 as with the Firestone machine). This machine is not really designed to use GPUs, but is aimed at running analytics workloads like Hadoop and Spark and offers the kind of performance comparison with X86 platforms that we have talked about here back in July when IBM rolled out the OpenPower roadmap formally and here again when IBM talked about the future prospects of Power-based platforms.
The base Habanero server comes with a single eight-core 3.32 GHz Power8 chip, 32 GB of memory, a single 1 TB SATA drive, and costs $6,595. Moving up to the ten-core Power8 chip running at 2.92 GHz, boosting the memory to 256 GB, adding a second 1 TB drive, and shifting to a two-port 10 Gb/sec Ethernet card raises the price of the Habanero system to $12,999. Pushing the memory on the ten-core system to 512 GB and boosting the storage to two 960 GB SSDs and a dozen 6 TB drives with the same 10 Gb/sec Ethernet costs $35,300.
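The Habanero price ladder tells a different story than the Firestone one: here it is the storage, not the memory, that drives the biggest jump. A rough split of the deltas between the three announced configurations (illustrative only, since each jump bundles several changes together):

```python
# Rough deltas across the Habanero (Power S812LC) configurations,
# using IBM's announced list prices. The mid-to-top jump adds 256 GB
# of memory plus roughly 74 TB of disk and flash, so storage accounts
# for most of that step.

configs = {
    "base": 6_595,   # 8-core, 32 GB, 1x 1 TB SATA
    "mid": 12_999,   # ten-core, 256 GB, 2x 1 TB, 10 GbE
    "top": 35_300,   # ten-core, 512 GB, 2x 960 GB SSD + 12x 6 TB, 10 GbE
}

print("base -> mid:", configs["mid"] - configs["base"])  # $6,404
print("mid  -> top:", configs["top"] - configs["mid"])   # $22,301
```

That $22,301 step up to the top configuration is nearly twice the price of the whole mid-tier machine, almost all of it in drives and flash.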
The Systems Software Stack
The Power Systems LC machines support the open source OPAL microcode and bare metal hypervisor that IBM has been working on with Google since founding the OpenPower Foundation two years ago, as well as the PowerKVM 3.1 variant of the KVM hypervisor that IBM crafted to make OpenPower machines look more like X86 boxes.
The machines will run Canonical’s Ubuntu Server 14.04 when it is available and a future update of Red Hat’s Enterprise Linux 7 that is expected before year’s end. It was not clear what plans there were for SUSE Linux Enterprise Server support on this machine, but SLES is a popular variant of Linux in the HPC community. (As are CentOS and Scientific Linux.)
The OPAL bare metal hypervisor comes free with the systems, but the OPAL plus PowerKVM combination costs $2,545 on the Power Systems LC machines, regardless of whether the machine has one or two sockets. Ubuntu Server 14.04 is expected to cost $3,173 per machine and RHEL 7 is expected to cost $2,523 per machine on top of this.
The Power Systems LC machines will ship on October 30. We will be taking a look at IBM’s competitive analysis for the Firestone and Habanero systems and getting reaction from Intel as well.