The Case For IBM Buying Nvidia, Xilinx, And Mellanox

We spend a lot of time contemplating what technologies will be deployed at the heart of servers, storage, and networks and thereby form the foundation of successive generations of platforms in the datacenter for running applications old and new. While technology is inherently interesting, we are cognizant of the fact that the companies producing technology need global reach and a certain critical mass.

It is with this in mind, and as more of a thought experiment than a desire, that we consider the fate of International Business Machines in the datacenter. In many ways, other companies have long since assumed the role that Big Blue once had in establishing the nature and implementation of a modern computing platform. But there is a way, perhaps, that IBM might be able to get back in the game in a big way and provide a complete alternative to the technology stack of chip giant Intel, which essentially owns server and storage processing in the datacenter and which is trying to extend its hegemony into networking.

IBM could buy graphics processing chip maker Nvidia, FPGA maker Xilinx, and networking chip and switch maker Mellanox Technologies.

We know what you are thinking. This might be a good thing for IBM, but it might not be a good thing for Nvidia, Xilinx, and Mellanox, which are the three key hardware partners in the OpenPower consortium that IBM formed with the help of hyperscale datacenter operator Google back in August 2013. Fair enough. All three companies seem to be doing fine against their respective competition, and the OpenPower effort might be a tight enough coupling to get interesting and innovative systems to market. But, we might argue, this effort to build a flexible platform – for that is what the OpenPower consortium is ultimately about – could be significantly enhanced and accelerated by a tighter coupling of the core technologies created by all four of these companies. The fourth being, of course, the Power family of processors created by IBM, which would be married to Nvidia Tesla compute GPUs, Mellanox InfiniBand and Ethernet switching, and Xilinx UltraScale Virtex and Kintex FPGAs.

We think that the Power architecture has as good a chance of being an alternative to the X86 architecture in the datacenter as the various ARM efforts from the likes of Applied Micro, Cavium, and Qualcomm. That doesn’t mean chipping away at the 99 percent plus share of shipments that the Xeon architecture from Intel now enjoys with servers will be easy. We think both Power and ARM will be fortunate to get even 5 percent share of the datacenter each, much less the 10 percent to 20 percent shipment share that IBM is targeting with various OpenPower efforts and the 25 percent share that ARM Holdings has set for its server partners. Some very strange things would have to happen in Xeon Land for Intel’s share of server shipments to drop down to 55 percent to 65 percent by 2020 or so, as these pie-in-the-sky numbers suggest. AMD has a much better chance of taking down 5 percent to 10 percent server share in 2018, if the “Naples” variant of the Opterons comes to market this year and is respectable in terms of price, performance, and thermals. If “Skylake” Xeons, due later this year, are priced high, AMD could do better. But for the moment, let’s set AMD and its possibilities aside and see what it might look like if IBM actually were the OpenPower compute and networking platform provider.

Rather than start with the technical arguments, which we will make in a follow-on story, we are going to start with the financials of such a combination. What would such a combination do for IBM’s hardware revenue stream, which has been in decline for five years?

It certainly would help make IBM a supplier of components that go into business machines that are sold all over the world. And it would give Big Blue a platform story that breaks out of the datacenter and that will survive the ongoing decline of the System z mainframe, which turns 53 this April.

The Network Makes A Cluster A System

We tend to focus on compute, so let’s start with a look at Mellanox, which has definitely carved out a niche for itself as a provider of silicon for switches and adapters – and sometimes switches and adapters themselves – to high performance computing centers, cloud builders, and hyperscale datacenter operators.

Mellanox, one of the upstarts that commercialized the InfiniBand fabric that was supposed to be at the heart of all systems when IBM and Intel formed the alliance to make it the next generation of I/O back in the late 1990s, has grown from a relatively small niche player to one that has to be reckoned with at the high end. Mellanox has been on the leading edge of bandwidth bumps, getting 100 Gb/sec speeds out ahead of its competition for both InfiniBand and Ethernet platforms, and last fall, it unveiled 200 Gb/sec InfiniBand switches and matching server adapters that should come to market by the end of the year.

The deal in November 2010 to buy InfiniBand rival Voltaire for $210 million certainly helped Mellanox pump up its business, and the company’s push in recent years to become a supplier of Ethernet switch chips as well as whole switches has paid off, particularly among the hyperscaler set that helped form the 25G Ethernet consortium along with Broadcom to foster a change in engineering for Ethernet switch chips that better suited their low-power and low-cost needs. Mellanox is now at a run rate just shy of $1 billion in annual revenues – 10X growth since the Great Recession – though profits are under a little pressure as it brings 200 Gb/sec InfiniBand to market and fights Intel’s Omni-Path offshoot of InfiniBand for share in HPC and hyperscale datacenters.

Thanks to the acquisition of EZChip, a maker of network acceleration processors, back in September 2015 for $811 million, Mellanox has been able to significantly bolster its Ethernet-related business, which throughout 2016 either matched or surpassed sales of 56 Gb/sec FDR InfiniBand products. Mellanox expects 50 Gb/sec and 100 Gb/sec Ethernet and 100 Gb/sec EDR InfiniBand switches to ramp through 2017, and 200 Gb/sec HDR InfiniBand to make its debut and start the cycle anew as the year comes to an end.

The trick for IBM – or for an arm’s length OpenPower company created by Big Blue – would be to make Mellanox InfiniBand and Ethernet products more broadly applicable to enterprises and thereby help boost the profit profile of these products. This is how Cisco Systems has been able to maintain relatively high margins despite the intense competition from upstarts that use merchant silicon from Broadcom, Cavium, Mellanox, and now Barefoot Networks.

Adding Diverse Compute To Power9

IBM’s forthcoming Power9 processor, due around the same time as Intel’s Skylake Xeons and AMD’s Naples Opterons in the summer of this year, will be a beast in its own right, with 24 cores running at around 4 GHz. With vector coprocessors as well as lots of memory controllers for 120 GB/sec or 230 GB/sec of memory bandwidth per socket, depending on which Power9 chip you choose, one could use the Power9 to run applications without any acceleration help. But IBM believes that all workloads will be accelerated in some fashion because of the need to precisely tune hardware to software and because of the limits to Moore’s Law improvements in compute. That is why the Power9 chip has NVLink 2.0 and “Bluelink” OpenCAPI 3.0 peripheral ports running at 25 Gb/sec integrated on the die for linking things like GPU and FPGA accelerator cards to the Power9 compute complex. The links are there, so why not buy the companies they link out to?

Nvidia has not reported its financial results for the fourth quarter of its fiscal 2017 year, which ended in January. (It will do so this Thursday, and we will tell you all about it.) But we have compiled data going back through fiscal 2011 to get a sense of the company’s revenues and profits, and sales by product line. Here is the overall revenue and profit picture:

Yes, something clearly good happened in the third quarter of fiscal 2017, as we previously reported. That something was the “Pascal” family of GPUs for graphics cards and Tesla compute accelerators taking off like crazy, giving Nvidia its first quarter above $2 billion in sales, with $542 million in profits. About half of Nvidia’s Tesla compute revenues are being driven by deep learning in some fashion or another, with the remainder coming from acceleration of traditional supercomputing applications and some acceleration of relational databases and other massively parallel systems used in enterprises (like risk analysis clusters and high speed trading systems).

Nvidia has only been breaking out its sales by various categories for a few years now, so we don’t have data back to the Great Recession here. But this is how sales break down for the various Nvidia product lines:

The datacenter business at Nvidia, which includes Tesla compute for machine learning training as well as for simulation and modeling and GRID adapters for remote visualization, nearly tripled in the third fiscal quarter ended last November 1, and we have no reason to believe it did not keep growing like crazy in the fourth quarter ended in January 2017. Just for fun, let’s say the datacenter business, dominated by Tesla accelerators, has a run rate approaching $1 billion. And with gross margins north of 65 percent, it is a big reason why Nvidia has been able to grow profits a lot faster than revenues in the past year or more. It would not be surprising to find out that operating income in the datacenter business at Nvidia is on the order of 50 percent – what Intel is getting across its Data Center Group during a good quarter, by the way. Nvidia does not report operating profits for its divisions, but has hinted at them.

A CPU and a GPU are not the only kinds of compute companies want to deploy. Network devices have been deploying FPGAs for years alongside hard-etched ASICs to accelerate certain functions, and for the past decade, FPGAs have been making their way ever so slowly and methodically into the datacenter. The reason is simple. For any given algorithm, an FPGA can be faster than a CPU running an operating system and software stack doing the same work. The trouble has been that FPGAs are tough to program, but much work has been done to accelerate the coding process for the code to be accelerated. (Yes, that was worded that way on purpose.)

Intel shelled out a stunning $16.7 billion to acquire FPGA maker Altera back in June 2015, and that made Xilinx, its main rival in the FPGA space, an acquisition target by default. The wonder is that someone – AMD, IBM, Nvidia, Broadcom, or an aggressive bunch of hedge funds – has not bought Xilinx yet.

Selling FPGAs is not an easy business, but it is a good one for both Altera and Xilinx. Xilinx has been able to maintain a fairly steady revenue and profit pace, even as it has undergone several major product shifts:

Xilinx posted $2.3 billion in revenues in the trailing twelve months ended in December, and brought $614 million of that to the bottom line. This is a good, diverse, and healthy business, and the fact that Xilinx has a cash hoard of $3.25 billion attests to this fact.

Here is how the Xilinx product sales break down by industry:

There are four families of FPGAs available from Xilinx: low-end Spartan and Artix devices, midrange Kintex devices, and high-end Virtex devices. The Virtex and Kintex FPGAs are branded under the UltraScale family and are available with the latest 20 nanometer and 16 nanometer process technologies etching their logic gates. These are the ones most suited to datacenter work.

By acquiring these three companies, IBM would have all of the components for building modern systems across the diversity of workloads, and it would be able to hedge its bets much as Intel has done by acquiring the networking businesses of Fulcrum Microsystems, QLogic, and Cray and the compute assets of Altera and machine learning chip upstart Nervana Systems.

The combination of Mellanox, Nvidia, and Xilinx is powerful in its own right. If you adjust Nvidia’s quarters to line them up with Mellanox and Xilinx, then in 2016 the combined companies had $9.31 billion in sales, up 20.4 percent, and net income amounted to 19.7 percent of revenues, up 48.1 percent. This is the kind of revenue and profit profile Big Blue is missing from its non-mainframe systems business.
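The roll-up behind those combined figures is simple arithmetic: sum the calendar-aligned revenues and net incomes, then compute growth against the prior-year sums and income as a share of revenue. A quick sketch of that roll-up, with purely illustrative per-company figures (placeholders, not the actual reported 2015/2016 results):

```python
# Roll up revenue and net income for a hypothetical combination of
# Mellanox, Nvidia, and Xilinx. All figures below are illustrative
# placeholders in billions of dollars, not actual reported results.

def combine(companies):
    """Sum per-company (revenue, net_income) pairs."""
    revenue = sum(r for r, _ in companies)
    income = sum(i for _, i in companies)
    return revenue, income

# (revenue, net income) in $B -- hypothetical calendar-aligned figures
current = [(0.9, 0.10), (6.1, 1.10), (2.3, 0.60)]   # Mellanox, Nvidia, Xilinx
prior   = [(0.7, 0.09), (5.0, 0.60), (2.2, 0.55)]   # same three, prior year

rev, inc = combine(current)
prev_rev, prev_inc = combine(prior)

print(f"combined revenue: ${rev:.2f}B, up {100 * (rev / prev_rev - 1):.1f}%")
print(f"combined income:  ${inc:.2f}B, {100 * inc / rev:.1f}% of revenue, "
      f"up {100 * (inc / prev_inc - 1):.1f}%")
```

Swapping in the actual quarterly figures for the three companies reproduces the 2016 totals cited above.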

Next up, we will ponder the technical considerations of IBM buying these three companies as well as a more detailed look at how this might change Big Blue for the better.

One note: There are also other possible combinations that might prove fruitful and interesting. Nvidia could buy Xilinx. AMD could buy Xilinx. Either could buy Mellanox as well as Xilinx. You could throw Micron Technology into the mix for even more fun. But one way or the other, we think that compute is getting closer to memory and networking, and that is going to force some changes if anyone hopes to compete against Intel, which is stacking everything up.



  1. I doubt this will ever happen. IBM has all but divested from processor and hardware. Power/OpenPower is just a side show. Ultimately IBM wants to shove Watson everywhere and whether it runs on Power or X86 is immaterial.

  2. You seem really obsessed with the idea of company X buying company Y, for some values of X and Y. Why is that?

    What’s the value of acquisition, over partnering, or building their own? NVidia’s specialty is SIMD, for example, but IBM is no stranger to designing and fabricating their own SIMD silicon. If IBM bought NVidia, how much time and money would it take to incorporate the useful ideas from NVidia into their own processes? Is this cheaper or faster than doing it in-house, and over what time scales? On what do you base this?

    IBM doesn’t really make consumer PC hardware any more, so would you have them branch out into PC graphics cards to maintain NVidia’s current lineup, or would you have them abandon that market and hand it over to AMD and Intel, or would you have them run NVidia independently (in which case, again, what exactly do they gain from an acquisition over a partnership)?

    I see 2000+ words here on why company X and company Y have great and complementary technology, but nothing that indicates why this would be a good idea for X or Y, or even how X and Y were selected. I can name hundreds of companies with great and complementary technology — just pry the top off your PC and start reading off names of components — but that doesn’t mean any of them would be good corporate acquisitions.

  3. Nvidia may be an OpenPower big wig but Nvidia is not even a founding member of OpenCAPI as IBM and AMD/others are, and Nvidia’s NVLink is proprietary IP while OpenCAPI will be how AMD will connect its Vega GPU accelerators with Power9 (third party Power9s and maybe even IBM’s Power9s) systems in the future. There are two variants of the Power9 (one a 24 core SMT4 variant, the other a 12 core SMT8 variant) and that 24 core/SMT4 Power9 variant is what will compete with Xeon more so than the 12 core variant. IBM is not known to tie its fortunes to one maker’s GPU accelerator IP indefinitely, as IBM’s second-sourcing supply chain demands are the very reason that AMD/others got a crack at the x86 market way back when the PC was actually only an IBM venture.

    IBM is definitely positioning itself to be a continued producer of CPU IP, and the R&D towards that end, but IBM has gone more towards the ARM Holdings licensed IP business model in an effort to give its Power/Power9 micro-architecture a larger economy of scale and a larger Linux based software ecosystem availability to assure that Power9 (Power8s as well) will be used by many third party licensees. This and IBM going fabless will allow IBM at some future time to maybe start using some alternative competitively bid out Power9 suppliers as the Power/Power9 ecosystem builds out into a larger market of Power9 makers with IBM the holder of the basic Power9/older Power IP.

    The only reason for IBM to want Xilinx is for its FPGA IP to maybe have that insurance against both AMD’s and Nvidia’s professional GPU accelerator dominance, as Intel has done with its acquisition of Altera. I would rather see AMD develop some of its own FPGA IP or acquire some. Also there is nothing stopping both AMD and Nvidia from getting some Power9 licensing going and branching out into that market with their GPU IP leveraged and used with the Power9 IP. Nvidia lacks an x86 license but Nvidia could very well produce some form of Power9/Nvidia HPC/server class interposer based SOC IP of its own to compete with AMD. AMD will be taking its Zen micro-architecture and making some very powerful HPC/server and workstation APUs on an interposer foundation with much larger Vega GPU resources than have to date been found on any consumer grade APUs of AMD’s current/past lines of APUs.

    AMD will be perfecting the APU on an interposer for both the consumer and professional markets this time around with its Ryzen/Vega consumer variants and some form of Zen/larger Vega designs for the workstation/HPC/server and exascale markets. AMD has patent filings for placing some FPGA compute in the HBM2 memory stacks to work from the HBM2’s DRAM memory dies for its government exascale R&D grant design proposal.

    I do not see IBM ever getting its hands on Nvidia, too much ego to be overcome on the part of Nvidia’s CEO and most likely shareholders.

  4. Never going to happen. IBM is well on its way to being a services-only company. Selling off their fab was the last sign of that transformation. They are losing on all fronts. Power is not a real alternative anymore; maybe it was 10 years ago, but that time has long passed. IBM’s corporate mindset is even worse than Intel’s as well.
