There is some speculation afoot that Intel is on a path to rekindle its supercomputer business, bringing it full circle to where it started in the late 1980s and early 1990s with its own distinct high performance computing division that produced top-tier national lab systems like ASCI Red, among others.
While this is not necessarily a new observation about Intel’s potential path, conversations about this have gathered force in the wake of the Aurora supercomputer deal, which features Intel as the prime contractor for the first time since those early days of Intel’s strong but relatively short-lived supercomputing systems business. Normally, one might have expected a big system win like this to go to a large company like IBM (which was awarded other similarly sized contracts in the broader CORAL procurement round) or to Cray, SGI, or another systems vendor. But in this case, Cray is playing second fiddle to Intel in the award, serving as the integrator—and thus opening the door to chatter about what a move like this portends for Intel’s many OEM partners in supercomputing and beyond.
This is not speculation sparked simply by the Aurora deal; rather, Intel’s award as prime reinvigorated speculation that has been alive for some time. After all, Intel has the financial might to shoulder the risk and needs only a strong integration partner. But it does send a signal to the OEMs that Intel has the upper hand, even if that wasn’t in question to begin with. And that OEM story is what makes this all worth talking about.
Some might argue that the writing has been on the wall about Intel’s desire to push into the supercomputing systems business once again, given a string of acquisitions over the last several years that look far more like the moves of a systems company than those of a chipmaker. From acquiring the file system assets from Whamcloud for Lustre development, to the QLogic and Cray interconnect investments, to a host of compiler and software companies that cater to HPC, it is clear Intel has ambitions—although they could be as simple as delivering on the promise of the decade-old Cluster Ready program, which is designed to make it as easy as possible for OEMs to stand up an HPC system. Of course, the conspiracy theorists might say that Cluster Ready was just the beginning of the slow build to a burgeoning supercomputing business refresh. Either way, the question becomes: what is left for the OEMs to innovate on top of if Intel is stacking their systems plate high with the best the industry has to offer?
For perspective, it is not unreasonable to assume that Intel has what it needs to build and integrate on par with its OEM partners. It has done so before—but at a time when the market was significantly different. While Intel was succeeding with its early supercomputer systems division (late 1980s to early 1990s) via some big wins at major centers, other companies were on roughly equal footing. In other words, Intel’s chip business for the HPC market was nothing like what we see today. In those early pre-Linux days of HPC, most of the major supercomputer makers of the time held tight to their own microprocessor architectures. While the margins were small then as well, there were a number of companies that differentiated across the stack, including at the chip level (Cray, Thinking Machines, HP, DEC, and a few others). In short, Intel was just another hardware maker, so the OEM strategy made solid sense.
The defining turning point was Linux and the open source movement, coupled with Intel’s attractive price points on the early Pentium chips, which together meant Intel’s strong list of OEM partners could build reasonably good, affordable machines—a trend that marked a swift move away from the disparate proprietary architectures of supercomputing’s old guard.
The takeaway there, other than the history of Intel’s previous supercomputing system investment, is that things can change very quickly as a new source of innovation suddenly springs to life.
All of this momentum on Intel’s side to firm up its investments across the supercomputing stack means OEMs will have to work far harder to differentiate by pushing interconnect and other developments. But as it stands now, even with a machine like the 2018 Aurora supercomputer, which is based on the future Knights Hill architecture, everything essential to the system, at least at its core, is coming from Intel. The processors, sure. But also the interconnect, the switch, and even the memory, via a partnership between Micron and Intel. And for that matter, if a flash array is needed, Intel can supply that as well. Cray will be the manufacturer and integrator for Aurora, but there’s a whole lot of Intel inside.
And back to the question of what all this might mean for Intel’s OEMs in supercomputing: why wouldn’t it make sense for a company like Cray to score this contract? Consider how Cray, while a successful small company, has to balance its cash reserves. Imagine if it had to shoulder the costs for a system unlike any other built to date, unable to put it on the books in full until it passes acceptance testing in 2018. There is a “too big to fail” paradigm here—and no value judgement in saying so. Intel simply has to be the prime in the case of Aurora because of risk, or at least that is one way to look at it. That does appear to be the way the DoE sees it.
But in a more overarching sense, it means that if Intel did decide to dip its toe into the supercomputing systems waters once again, the OEMs would be at the mercy of Intel far more than ever before—that is, unless there are concerted efforts to push innovation faster toward new architectures, ARM among them. The problem is, all of that development is going to take time. And one heck of a lot of it, not to mention a steady influx of cash to keep some of the most expensive R&D efforts in the IT industry fed.
On that note, recall that IBM and others have used big system deals in supercomputing to fund and pave the path for future technologies that will support upcoming business lines. Intel has always been very careful to note that it is not at all in the systems business. This could all be part of that “science project” approach to developing future technologies and funding that growth and research along the way. Few other companies could shoulder that kind of burden, after all.
The big question here is not whether Intel is getting into the supercomputing systems business so much as what such a move, in theory, would mean for its OEMs. And what happens to those OEMs matters to the end users of supercomputing gear. We will pick up this thread later this week with perspectives from outside The Next Platform thought bubble.