IBM Unfolds Power Chip Roadmap Out Past 2020
April 7, 2016 Timothy Prickett Morgan
There are two things that underdogs have to do to take a big bite out of a market. First, they have to tell prospective customers precisely what the plan is to develop future products, and then they have to deliver on that roadmap. The OpenPower collective behind the Power chip did the first thing at its eponymous summit in San Jose this week, and now it is up to the OpenPower partners to do the hard work of finishing the second.
Getting a chip as complex as a server processor into the field, along with its chipsets and memory and I/O subsystems, is a complicated and difficult endeavor that many of us take for granted because, more or less, the server chip makers of the world have done it pretty consistently in recent years. (Not perfectly, mind you.) The roadmaps used to be a lot choppier, and we think the terrain was a lot more bumpy, too. So it is all the more remarkable that there is the kind of consistency that we see and that conservative datacenter customers need so they can do their long range planning.
At the OpenPower Summit, Brad McCredie, the former president of the OpenPower Foundation and an IBM Fellow and vice president of development in its Power Systems division, did what he has wanted to do for years, and especially since the founding of the foundation in the summer of 2013. And that was to publish a processor roadmap that puts a stake in the ground that customers can use as they try to figure out if they can align their server and storage infrastructure with the Power platform and possibly get benefits compared to sticking with Intel’s Xeon roadmap or betting on one of the several aspiring suppliers of ARM server chips instead.
People forget sometimes that IBM and its PowerPC partners had a pretty rough go in the early years of the Power architecture, and McCredie did a little walk down the Power memory lane before unveiling the forward-looking roadmap that stretches out to 2020 and beyond. Here is that look back:
Back in the late 1980s, when the Power1 chip debuted in the RS/6000 workstation line, IBM was using 1,000 nanometer processes to etch the transistors on chips. The PowerPC partnership with the “Somerset” joint effort between Motorola and Apple worked on a different class of PowerPC 600 series chips for personal computers, and IBM’s AS/400 minicomputer division created its own 64-bit PowerPC chips for transaction processing systems – and ones that had a nifty set of floating point units that are the great grandfather of the ones used in the Power8 chips today. Truth be told, it is that AS/400 variant of the PowerPC that really lived on in the Power4 and follow-on designs, and it is these processors that set the stage for IBM to go from “worst to first,” as McCredie put it, in the very lucrative Unix systems market in the 1990s and early 2000s.
But the market for Unix systems has collapsed, thanks to Moore’s Law improvements in these machines outstripping the performance needs of most of IBM’s customers. (Enterprise capacity for back-end systems grows more or less with economic activity, which is lower than Moore’s Law growth in processor performance.) Couple that with the rise of X86 and Linux in the datacenter, as well as Windows Server among midrange customers who might have otherwise bought an IBM AS/400 or one of its successors, and Big Blue has had to look for new markets and new partners to continue to develop Power chips – and now pays Globalfoundries to manufacture them.
IBM has pinned the hopes of the Power chip and its own Power Systems business on getting Linux workloads tuned up on Power and leveraging the raw compute and higher memory and I/O bandwidth that the Power8 chip has compared to Xeon alternatives from Intel, and this is one of the reasons why Google said at the OpenPower Summit that it was working on a Power9 system for its own use in conjunction with Rackspace Hosting, one of its competitors in the cloud. (We will talk about that developing ecosystem separately.)
The new and much fuller Power chip roadmap that McCredie divulged is similar to one that The Next Platform published last August after we got our hands on it, but has more precision and adds in the efforts of IBM’s chip partners in the OpenPower effort. Take a look:
The first thing we noticed is that the chip that is being announced in 2016 from IBM is no longer being called the Power8+ and is now being called the Power8 with NVLink. As late as last summer, IBM was still using an older roadmap that laid out its plans for 2015, 2016, and 2017, and the processor slated for 2016 was referred to as the Power8+ chip. This may not seem like a big deal, but a “plus” chip in the IBM lingo means something very precise, and that is usually a process shrink coupled with some slight microarchitecture changes – akin to a “tick” in the Intel “tick-tock” cadence of chip introductions. One of the problems IBM and its PowerPC partners had in the early days of the Power architecture is that each chip redesign also had a process shrink, which made getting a chip out the door doubly hard. IBM stopped doing this with the Power4 and Power4+ chips in 2001, and largely got its act together in terms of keeping a fairly constant rhythm. This is one of the reasons why IBM’s Power Systems business rolled along making somewhere between $4 billion and $5 billion a year in the late 1990s and early 2000s before the bottom fell out of the Unix and OS/400 (IBM’s proprietary operating system) markets. The Power4+ was not much of a boost, and the Power6+ was not either, but the Power5+ and Power7+ represented a real performance boost engendered through the process shrink as well as tweaks.
Our point is, when the roadmap says Power8+, it means a process shrink and a performance boost. But McCredie tells The Next Platform that this is not going to happen. IBM did rejigger some of the I/O on the existing Power8 chip to add six NVLink ports, which lash Nvidia’s “Pascal” Tesla P100 GPU accelerators, unveiled this week, tightly to the Power8 processor and allow the GPUs to virtually share memory with the Power8 processors. So the Power8 with NVLink is implemented in the same 22 nanometer process as the Power8 chip, and therefore we should not expect any significant performance benefits compared to the Power8 that has been shipping since 2014.
Putting the NVLink ports on the Power8 chip is a big deal. “We have got a lot of pull for this technology from customers,” McCredie said, and that pull is coming from the hyperscaler community, which uses GPU accelerators to speed up the training of the neural networks in their machine learning applications, and the HPC community, which uses GPUs to radically enhance the performance of their modeling and simulation. With NVLink, multiple GPUs can be linked to each other or to the Power8 processor by 20 GB/sec links (bi-directional at that speed) so they can share data more rapidly than is possible over PCI-Express 3.0 peripheral links. (Those PCI-Express links top out at 16 GB/sec and, unlike NVLink, they cannot be aggregated to boost the bandwidth between two devices.)
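As a back-of-the-envelope illustration of why that aggregation matters, the per-link figures above can be plugged into a short sketch. (The link counts tried here are our own assumptions for illustration, not confirmed IBM or Nvidia configurations.)

```python
# Rough bandwidth comparison using the figures cited above:
# 20 GB/sec per NVLink link (per direction), ~16 GB/sec for a
# PCI-Express 3.0 x16 slot, which cannot be ganged between two devices.

NVLINK_GB_PER_SEC = 20.0     # per link, per direction
PCIE3_X16_GB_PER_SEC = 16.0  # per direction, fixed

def nvlink_aggregate(links: int) -> float:
    """NVLink links between two devices can be ganged to add bandwidth."""
    return links * NVLINK_GB_PER_SEC

for links in (1, 2, 4):  # hypothetical gangings between two devices
    print(f"{links} NVLink link(s): {nvlink_aggregate(links):.0f} GB/sec "
          f"vs PCIe 3.0 x16: {PCIE3_X16_GB_PER_SEC:.0f} GB/sec")
```

Even a single NVLink link edges out a PCI-Express 3.0 x16 slot, and ganging links widens the gap quickly, which is the whole point of the interconnect.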
Scaling Out And Scaling Up
IBM has had two distinct flavors of chips within each generation of the Power family for a while, and the roadmap that was provided by Big Blue for its OpenPower partners is baking these distinctions in going forward. There were distinct Power8 chips for low-end, scale-out systems intended to be used as onesies or in clusters for distributed workloads, and another variant aimed at IBM’s own scale-up, NUMA systems. With the Power9 chip, these are labeled explicitly, with SO or SU tacked onto the moniker to designate whether the chip is aimed at scale-out clusters or shared memory iron.
Speaking generally, McCredie says that the Power chip roadmap “will be design driven, not technology driven,” and that IBM will keep a steady cadence of innovation to keep the architecture moving forward and is especially interested in adding other kinds of accelerators and special buses to attach them to the processor complex for lower latency processing and sharing memory.
IBM revealed that the Power9 SO chip will be etched in the 14 nanometer process from Globalfoundries and will have 24 cores, which is a big leap for Big Blue. The Power9 SO, the chip that Google and Rackspace will be using in their future “Zaius” system that we discussed separately, will come out in the second half of 2017.
That doubling of cores in the Power9 SO is a big jump for IBM, but not unprecedented. IBM made a big jump from two cores in the Power6 and Power6+ generations to eight cores with the Power7 and Power7+ generations, and we have always thought that IBM wanted to do a process shrink and get to four cores on the Power6+ and that something went wrong. IBM ended up double-stuffing processor sockets with the Power6+, which gave it an effective four-core chip. It did the same thing with certain Power5+ machines and Power7+ machines, too.
The other big change with the Power9 SO chip is that IBM is going to allow the memory controllers on the die to reach out directly and control external DDR4 main memory rather than have to work through the “Centaur” memory buffer chip that is used with the Power8 chips. This memory buffering has allowed for very high memory bandwidth and a large number of memory slots as well as an L4 cache for the processors, but it is a hassle for entry systems designs and overkill for machines with one or two sockets. Hence, it is being dropped.
The Power9 SU processor, which will be used in IBM’s own high-end NUMA machines with four or more sockets, will be sticking with the buffered memory. IBM has not revealed what the core count will be on the Power9 SU chip, but when we suggested that, based on the performance needs and thermal profiles of big iron, this chip would probably have fewer cores, possibly more caches, and high clock speeds, McCredie said these were all reasonable and good guesses without confirming anything about future products.
The Power9 chips will sport an enhanced NVLink interconnect (which we think will have more bandwidth and lower latency but not more aggregate ports on the CPUs or GPUs than is available on the Power8), and we think it is possible that the Power9 SU will not have NVLink ports at all. (Although we could make a case for having a big NUMA system with lots and lots of GPUs hanging off of it using lots of NVLink ports instead of using an InfiniBand interconnect to link multiple nodes in a cluster together.)
We will be drilling down into the future Power9 architecture in a future story as we gather some more information.
Beyond that, there are other possible Power9 variants, and the Power10 chip is slated to arrive in 2020 or later, three years after the introduction of the first Power9 chips and using what we expect to be a 10 nanometer process. (IBM did not confirm this process in the roadmap unveiled today, but past ones have said it was 10 nanometers.) And as we have speculated before, we think the Power11 chip will be delivered, if it comes to pass, using a 7 nanometer process sometime around 2023.
This rhythm would keep to the three-year cadence for major Power chip designs. (Power4 to Power5 took four years, but Power5 to Power6 took only two years, and Power7 to Power8 took four years.) IBM also expects Power chip partners such as Suzhou PowerCore to come out with their own Power8 and Power9 designs, implementing them in 10 nanometer and 7 nanometer processes.
There could be a lot of different Power chips in the coming years, provided there is demand for them.