IBM Draws OpenPower Line In The X86 Sand

History moves in spirals, not straight lines, widening our experience and our options, coming around again and again with variations on similar themes.

Take RISC servers running Unix operating systems as an example, which started driving proprietary machines out of the datacenters of the world in the late 1980s. That open systems movement brought commonality to operating system APIs and similar architectures to the servers that ran this Unix software and that initially built the public Internet. (The irony is that there was no single Unix, but rather a plethora of Unixes, which you might call Multix if it were not for the actual Multics that was its predecessor.) These Unix open systems set the stage for the ascent of the combination of open source software – notably Linux – and commodity X86 servers that rule large chunks of the enterprise datacenter, are dominant among cloud providers and telcos, and have absolute dominion at HPC centers and hyperscalers the world over.

Without such commonality and uniformity, the volume economics of the hyperscale and HPC era would not be possible. You can only build a Google or an Amazon Web Services on powerful and ever-cheapening compute, storage, and networking. But not everyone is ready to give up the fight to provide an alternative to the X86 architecture in the datacenter. The ARM collective wants a piece of this action, and so do the members of the OpenPower Foundation, led by hardware partners IBM, Nvidia, Mellanox Technologies, and Tyan and fronted by hyperscaler Google, which was very clear to The Next Platform back in April that it is entertaining Power chips as an alternative to Xeon processors in its vast datacenters.

No one knows for sure how well the OpenPower Foundation members will do in taking market share away from Intel, particularly with that chip maker’s product lines getting broader, deeper, and richer over time for compute, storage, and networking. But Tom Rosamilia, who is senior vice president in charge of the Systems Group at IBM, gave a talk recently to Wall Street analysts about the OpenPower Foundation, and he drew a line in the sand on what it will take for him to call OpenPower a success. As you might expect, IBM wants OpenPower machines running Linux to take a bite out of the hyperscale and HPC markets while its own Power Systems machines hold the line in the corporate datacenters that support the AIX, Linux, and IBM i (OS/400) operating systems on their iron.

“I won’t predict exactly where it will get to, but we are starting in that space around zero. And clearly, for me, victory would be somewhere in the 10 to 20 percent range or it is not worth doing.”

The amazing thing about the talk with Wall Street was that Rosamilia actually put some numbers on what success will mean for OpenPower, starting with the hyperscale market, spearheaded by Google, that compelled the formation of the foundation two years ago.

“I think the easier one is really around hyperscale, because in that situation we are starting from a position of near zero,” Rosamilia explained when asked about what share of market OpenPower partners were shooting for. “And so everything we take and everything we do going forward is positive to our share. And what we are seeing is the emergence of – I think it is too early to call – but the emergence of this real opportunity starting with the work we did with Google, but also around what we are doing with Rackspace, what we are doing with others in the marketplace that I can’t comment on, but what we are seeing is each of these companies really wants to do something custom. They want to do it to optimize for their workloads. They are doing it to be able to deliver lower price points to their customers, but they are obviously driving their cost down. But in doing so, they are following the same kind of pattern we have seen with the work with Google, which is they don’t necessarily want to do it all themselves, for themselves, and they want to be able to harvest and participate in this. So, I won’t predict exactly where it will get to, but we are starting in that space around zero. And clearly, for me, victory would be somewhere in the 10 to 20 percent range or it is not worth doing.”

Back at the OpenPower Summit in March, Gordon MacKean, who is chairman of the OpenPower Foundation and senior director in charge of server and storage systems design at Google, talked to The Next Platform about the search giant’s Power8 system designs, saying that Google tested its code base on all kinds of architectures to keep it from “bit rot” and added that Google has not licensed the Power chip architecture itself. A month later, MacKean’s boss, Urs Hölzle, senior vice president of the Technical Infrastructure team at Google, was a whole lot less oblique about the Power possibilities, but still did not make any commitments. “People ask me if we would switch to Power, and the answer is absolutely,” Hölzle said. “Even for a single generation.”

It would be helpful to the OpenPower cause if Google actually admitted that it had deployed Power8 machines in its infrastructure, if that is the case; conversely, after all this effort, if it does not do so, Google could be harmful to the OpenPower cause. The whole thing would be an Amdahl coffee mug, with OpenPower playing the role of Amdahl and Intel playing the role of IBM, some forty years later. (We doubt very much this is what is actually going on.)

That said, the Power architecture has made some headway in the HPC market, where Power machines – including federated Power Systems clusters as well as the BlueGene massively parallel computers and hybrid Opteron-PowerPC Cell systems – have done well over the past decade, although the latter two are no longer sold by Big Blue. IBM’s focus is on Power processors accelerated by Nvidia GPUs, FPGAs from Altera and Xilinx, Mellanox InfiniBand networking, and its own flash storage. The OpenPower effort is, in a sense, the merger of the plan IBM came up with to win the “Summit” and “Sierra” supercomputer deals at the US Department of Energy and its aspirations to peddle Linux-based Power machinery to hyperscalers.

As for that HPC market, where IBM used to have a large presence on the Top 500 list thanks to very large BlueGene systems, the occasional Power Systems cluster, and a large number of System x clusters, the loss of its X86 systems business has been tough and so has the mothballing of the BlueGene line. (Al Gara, one of the BlueGene system designers, has become chief exascale architect at Intel.) But IBM believes, as do Nvidia and Mellanox, that it can win deals against Xeon and Xeon Phi clusters with hybrid Power machinery, and IBM has aspirations for OpenPower to grow its share.

“Around HPC, this market really varies,” Rosamilia cautioned. “It can be very lumpy. And so we participated in this a couple of years ago, but at this point, we really don’t have much in terms of net new sales in this area. It is an area where if I go back a couple of years, we had single-digit points of market share. Again, I would say the same thing in this space, that it has got to be in the 10 percent to 20 percent market participation range for us to be relevant, and that is clearly our target. So, I will give you our targets rather than a prediction of where it will get to.”

This is a remarkable line to draw in the sand, and you have to roll the clock back pretty far to find a time when RISC machines had a share like this in the server market.

In the most recent quarter, according to data from IDC, around 20,000 servers out of 2.29 million were non-X86 machines. These systems, thanks to midrange and high-end RISC and Itanium Unix machines and mainframes from IBM, Unisys, and a few others, generated $2.68 billion in revenues, compared to $10.8 billion for X86 machines. Unix machines were probably the lion’s share of the shipments and two-thirds of the revenue, but we are guessing there because IDC does not provide the data that way in its public reports. A full year is roughly a little more than four times these quarterly figures, given that we still have two quarters (including the strongest one) to go. Call it around $55 billion in revenues and close to 10 million machines, just for the sake of argument, for the entire systems market worldwide in 2015, and maybe 50,000 servers and maybe $7.5 billion in revenues for RISC/Itanium Unix machines.
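The annualization above is simple back-of-the-envelope arithmetic. As a sketch, using the quarterly IDC figures from the text (the 4.2 multiplier is our own assumption for "a little more than four times," nudged up because the two remaining 2015 quarters include the strongest one):

```python
# Annualize the quarterly server market figures cited above.
q_revenue = 2.68e9 + 10.8e9    # non-X86 plus X86 server revenue in the quarter
q_shipments = 2.29e6           # total servers shipped in the quarter

ANNUAL_FACTOR = 4.2            # assumed: "a little more than four times"

annual_revenue = q_revenue * ANNUAL_FACTOR
annual_shipments = q_shipments * ANNUAL_FACTOR

print(f"Estimated 2015 server revenue:   ${annual_revenue / 1e9:.1f} billion")
print(f"Estimated 2015 server shipments: {annual_shipments / 1e6:.2f} million")
```

That works out to roughly $57 billion and 9.6 million machines, in line with the "$55 billion and close to 10 million" ballpark used for the sake of argument.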

Back in 1999, we did get our hands on the IDC data, and thanks to the dot-com bubble, which made Unix vendors fabulously rich, and a Y2K crisis that gave proprietary systems their last big hurrah, worldwide server sales came in at $57.5 billion and the world consumed 3.92 million machines. Unix systems accounted for $25.7 billion in revenues, or about 45 percent of the total (a revenue share the Windows platform has today, by the way), and for 701,000 server shipments, or about 18 percent of shipments.

Rosamilia did not say this would happen overnight, of course, but if it happened in 2015, the OpenPower partners would have to either consume (in the case of IBM SoftLayer, Google, and others building their own) or sell (in the case of IBM, Tyan, Inspur, Red Power, and others) somewhere between 1 million and 2 million machines to get 10 percent to 20 percent shipment share; for revenue in the 10 percent to 20 percent range of the market, the OpenPower partners would have to do somewhere between $5.5 billion and $11 billion to hit those targets. That is a lot of machinery, and a lot of money, and every year the system market grows, the numbers get larger.
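Those absolute numbers fall straight out of the rough whole-market estimates. As a sketch (the $55 billion and 10 million figures are the article's ballpark estimates, not reported IDC data):

```python
# Translate a 10 to 20 percent share into absolute machines and dollars,
# using the rough 2015 whole-market estimates from the text.
market_revenue = 55e9      # estimated 2015 worldwide server revenue
market_shipments = 10e6    # estimated 2015 worldwide server shipments

for share in (0.10, 0.20):
    machines = market_shipments * share
    revenue = market_revenue * share
    print(f"{share:.0%} share -> {machines / 1e6:.0f} million machines, "
          f"${revenue / 1e9:.1f} billion in revenue")
```

That is, 1 million machines and $5.5 billion at the low end, 2 million machines and $11 billion at the high end.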

What IBM wants is an open ecosystem through which it can better compete against Intel’s Xeon and Xeon Phi and defend against the ever-impending ARM server onslaught in the datacenter.

“Depending on whose external data you look at, you can say that this is going to have a dramatic effect on the business or some effect on the business,” Rosamilia conceded. “I think we would all agree it will have at least some effect on our business as people make purchases of capacity through third parties, whether they do it through hosters, through managed service providers, or through hyperscale datacenters. In each of those cases, my role is to be the arms supplier to those folks. I think it is critical for us that we made this change in our strategy not just to sell to end users, which we will continue to do, but also to sell through providers like the hyperscalers or the mega datacenters that are out there.”


There are a lot of people behind this effort, and the first fruits of that labor will be coming to market this year. The five companies started the OpenPower effort back in August 2013, and the organization now has over 150 members that are working on everything from custom Power chips all the way out to system design and integration.

Rosamilia, who was the architect of IBM’s divestitures of the System x business and the Microelectronics division to Lenovo Group and GlobalFoundries, respectively, had run IBM’s System z and Power Systems businesses before taking on that special set of reorganization tasks for CEO Ginni Rometty. During the dot-com boom, Rosamilia ran IBM’s WebSphere division, which started out as a funky add-on accelerator to the Apache web server and grew into a key middleware business for Big Blue.

The Wall Street talk sets the stage for a set of announcements that IBM is expected to make in October concerning the Power Systems line of machines that it sells. (We think that Big Blue will be reselling machines, possibly made by Wistron, Tyan, and/or Inspur, to its own HPC and hyperscale customers, but have no confirmation on that as yet.) The talk that Rosamilia gave was very similar to the one that Brad McCredie, IBM Fellow, vice president of Power Systems development at IBM, and president of the OpenPower Foundation, gave at the Rice University Oil and Gas Workshop back in March. That was when The Next Platform got the first look, outside of the OpenPower partners themselves, at the system roadmap the partners had put together for 2015 through 2018.

Aside from the market share aspirations outlined above, the basic idea that Rosamilia and McCredie talked about is that Moore’s Law advances in chip manufacturing technology are slowing, and that to wrest more performance out of machines, hyperscalers, HPC centers, and cloud builders need to innovate across the full stack of hardware – including processors, accelerators of various kinds, networks, storage, firmware, and operating systems. Rosamilia also reminded everyone that IBM has an army of more than 50,000 employees contributing to more than 150 open source projects, which stands in stark contrast to the five IBMers who were contributing to the Linux operating system and Apache web server projects back in 1999. To give you a sense of scale, Google currently employs about 25,000 software engineers.

But the main message to Wall Street was that the market wants an alternative to Xeon and that IBM has come up with ways to monetize its intellectual property around the Power architecture, akin to what ARM Holdings has done.


IBM has special merchant silicon versions of the Power8 chip that it sells to companies like Google and Tyan for their systems as well as versions that it uses in its own Power Systems machines. Suzhou PowerCore, a Chinese licensee of PowerPC chips for embedded devices, has licensed the Power8 design, done its own tweaks, and run it through a local fab to create an indigenous Chinese Power8 chip called the CP1. IDT, Synapse Design, and VeriSilicon are also working on aspects of compute and SoC design relating to the Power architecture, and Altera, Xilinx, and Nvidia obviously are as well. (We walked through the various OpenPower system and motherboard designs here, and called out the Rackspace Hosting “Barreleye” system board there.)

Zoom Netcom’s RedPower is an interesting OpenPower partner in that it is putting together what amounts to its own Power-based engineered system, with a complete hardware and software stack aimed at the Chinese market, one that IBM might have hoped to sell there itself in days gone by. (China wants to buy local. Period.)


The RedPower machine is based on Zoom Netcom’s own system board and firmware and employs the PowerCore CP1 processor. It uses Mellanox network adapters and has accelerators from Nvidia and Xilinx for those who want to use GPUs or FPGAs to augment the performance of the CP1 chips. The RedPower software stack includes the Masscloud implementation of the OpenStack cloud controller and the KVM hypervisor – a stack we had not heard of before – plus the Red Flag Linux variant of the Linux operating system. The stack also includes the GBase NewSQL database and the Transwarp Data Hub, a Chinese implementation of Spark and Hadoop.

It will be interesting to see the systems and stacks that partners and customers create. The important thing is for someone to actually do a big deal, and the logical places to start are SoftLayer and Google, of course. Hopefully they will make a lot of waves – more than IBM’s deals with French hosters OVH and Online, which have not said how many Power machines they have installed, and neither has IBM’s own SoftLayer cloud, which is using Tyan-based machines rather than IBM’s own gear. IBM has made lots of noise with the Department of Energy systems, and has inked similar deals in the United Kingdom with the Science and Technology Facilities Council and in France with GENCI.

More noise, and deals, need to be made for OpenPower to ramp to the levels Rosamilia is talking about.



  1. Very nice summary article that achieves a good balance. Much heavy lifting to be done, but an increasing number of hands to do the lifting. This is a very interesting journey: on the way to a mature server market, you might expect things to get boring. However, the role of “open” has unleashed a wave of innovation and opportunity that seems to be much broader than the former client-server disruption.

  2. Good luck. If we are not talking about chips in the realm of 10-100 million, OpenPower will never be anywhere near cost effective against x86 and will find it hard to compete for foundry space against mobile, IoT, and others. As IBM sold off its own manufacturing capacity, it will compete with all the other non-foundry blue chip companies.

    • No, IBM paid GlobalFoundries (GF), in cash and physical plant property, to take its chip fab operation off IBM’s hands. IBM has kept some research-related fab equipment as well as the great majority of its strategic chip fab and chip IP. GF is contractually bound to provide for IBM’s internal Power8/8+/9 needs for around 10 years or so, with third-party OpenPower licensees able to arrange their own fab partner agreements. IBM is assured that its production needs will be met, and most likely it will still be IBM that designs the fab process used to make its internal Power8s/9s, at least until GF/Samsung’s 14nm process can mature. Both Samsung and GF (by licensing Samsung’s 14nm process) will probably share some of that third-party Power8/8+/9 business, as they have been in a fab/IP-sharing consortium with IBM for some years now. IBM licenses a lot of IP to Samsung, GF, and others, and it has now shifted those foundry upkeep expenses to GF, which will be responsible for the physical plant exigencies and can spread those expenses across its entire customer base through economies of scale.

      The world is not as locked into the x86 ISA as it once was, and most things Linux run fine on ARM, MIPS, and Power/Power8, as well as x86. If I were Intel, I’d be very worried about OpenPower and about AMD’s x86-based HPC/server APU-on-an-interposer SKUs! That new Arctic Islands GPU microarchitecture and completely new GPU ISA is going to have even more CPU-like hardware asynchronous compute abilities baked into the ACE units, and the interposer will allow the separately fabbed CPU and GPU dies to be wired up with wide parallel traces by the thousands, CPU to GPU, as well as each chip’s wide channels to the HBM memory. Intel will be getting it from both ends, and the amount of raw compute on AMD’s GPUs is nothing to sneeze at, especially wired more directly, in wide parallel, to those Zen-based CPU cores on the interposer – from the x86/GPU accelerator end, and from IBM and the OpenPower licensees on the other end, in the battle for the future HPC/server/workstation business. The APUs/SoCs on an interposer will be a very powerful combination of CPU and GPU, wired up in parallel and more directly integrated than even IBM’s Power8 and Nvidia’s GPU accelerators.

      I think you are intentionally ignoring the facts, as your posting history shows a marked accentuation of the negative with regard to the facts. Are you an Intel investor, perhaps?

        • Well, we heard the same before between AMD and GF, and that turned into a highly bumpy road for AMD. I wouldn’t bet my future on GF.
