For everyone who marveled at the $16.7 billion deal Intel made to acquire field programmable gate array maker Altera, just as many raised eyebrows at the estimate Intel's CEO offered when announcing the purchase: that one-third of cloud workloads would take advantage of FPGA acceleration by 2020.
While The Next Platform has analyzed the market and Intel's plans to launch hybrid CPU-FPGA nodes sometime in the near future, it is worthwhile to go back to this stunning assertion about the future applicability of FPGA technology at such grand scale. For FPGA silicon that had, until the last few years in particular, found its sweet spot inside specialized datacenters for financial services, oil and gas, and of course, defense and embedded applications, this sudden rise to fame is striking. After all, it was not that long ago that Altera pegged its total addressable market in the datacenter at somewhere in the $1 billion range.
So what does Intel see that completes the puzzle, that makes this hefty acquisition make sense from an addressable market perspective? We were able to get Altera to talk about how the market is shaping up for FPGAs, as well as how its perceptions about the total growth potential have shifted over time, especially since chatter about programmable logic devices reached fever pitch just over a year ago, when Microsoft made bold predictions about how FPGAs might power more applications beyond its Bing search and image recognition operations.
As Microsoft's Technology and Research Group vice president, Harry Shum, told a crowd at the Ignite conference this May, Microsoft wants to be ahead of the Moore's Law curve before it rounds out and leaves the company scrambling for more ways to deliver key services. This is not unlike how the other hyperscale companies are thinking either, with Amazon, Facebook, and others keeping open minds about building increasingly heterogeneous machines.
Following success accelerating the Bing page ranking algorithms, his teams started to look to other services that could be similarly pushed with FPGAs, including machine learning and deep neural networks. “The aspiration,” Shum explained, “is that we will build this new fabric of programmable hardware to complement existing programmable software frameworks. And we will build that hardware to benefit a lot of our workloads, then open it up for third party and our own ecosystem as well.”
And so, bingo. No pun intended. It is this model of building new datacenters around software, which is, of course, designed to be programmable, together with hardware that is also programmable, that opens up a new range of services literally built for these devices. And it does not end with Microsoft. Presumably, Intel sees a big opportunity for FPGAs in cloud datacenters (even if a company like Google still sees them, and GPUs too, for that matter, as too difficult to integrate into its workflows) if it can wrap a software ecosystem around them, something that some clever companies (including a small company founded by Seymour Cray that has done some very interesting work, programmatically speaking) are doing beyond the OpenCL approaches vetted by Xilinx, Altera, and others.
Altera’s head of strategic markets, Mike Strickland, spoke with The Next Platform following the Intel deal and while of course he is on lockdown detail-wise (and much of the commentary from this point lies in what Intel will do with its newfound FPGA glory), he was able to offer some clarification about why companies like Intel are seeing a big future in what was once considered a “limited” marketplace (to the $1 billion point, anyway).
“That projection about one-third of all nodes having FPGAs in cloud environments makes sense if you look at the migration from discrete to co-packaged to integrated FPGAs that Intel talked about,” Strickland says, especially if you listen to what Microsoft had to say about extending FPGAs to their ecosystem partners and third parties.
This is all a bit cryptic, but one can assume that FPGAs will not only power Microsoft's own user-facing services, but will also be exposed for customers to access, particularly in integrated form on its Azure cloud. And if that happens, it won't be long (if it doesn't happen first) before Amazon offers FPGA cloud instances for key workloads. That is definitely not out of the range of possibility, either, since AWS was the first large public cloud provider to offer GPU computing instances and tends to stay ahead of the curve by offering the latest cloud-tuned high-end Xeons.

This could also mean that companies like Microsoft, which have well-developed compilers and tools (think OpenCL plus Visual Studio), can find ways to add meat to FPGA-based servers. Further, Microsoft has done a great job of showing some interesting systems (with accessible Open Compute designs that Hewlett-Packard, Dell, Quanta, and others can build) that network FPGAs to talk to one another over SAS interconnects while the rest of the network handles other work, creating what is essentially two machines in one, all of which Microsoft released as a production concept last year. The point is, there are endless ways to look at the potential for FPGA-based datacenters, and for the first time in a long time, we're looking to Microsoft to show what might be next in the datacenter from both a hardware and software perspective.
This part is just informed speculation, of course. But with a cash deal that massive, we have to believe that Intel knows something about the future of the datacenter. It's no secret that most chip and system vendors see a more heterogeneous future ahead, but it would not surprise your friends here at The Next Platform if, at some point, Altera rival Xilinx gets snapped into the maws of another giant. And for purveyors of software that can play well with FPGA-laden systems, the market explosion, as much of one as can happen in this niche anyway, is probably arriving right about now.
Speaking of code hooks for FPGAs (which, as we've described previously, have a programming hurdle to cross), there may be some momentum there as well. Strickland pointed to the story of GPUs as a solid reference point for how a niche accelerator can build a robust software ecosystem around itself, dramatically expanding its reach by reducing some of the complexity. And Altera is standing by OpenCL as the path to swim further into the mainstream.
“If you looked at FPGAs five years ago, Intel wouldn’t be interested if you had to do HDL programming for each use and customer. So yes, something changed in that time. One of the big breakthroughs, the biggest building block, is indeed OpenCL. It’s not everybody though, but it has come a long way.” Strickland continues, “If you look at what Nvidia did, they realized CUDA was too low-level so they advocated OpenACC. There’s no reason why something similar can’t be done as an extension of the work we’ve done on our OpenCL compiler. That has a front end that parses OpenCL but most of the heavy lifting is done at the backend of the compiler. There it does over 200 optimizations (external memory bandwidth, for example) and does things that would take an HDL programmer six months to do.” With that in place, he says that there is no reason why it’s not feasible to add higher-level front ends on the compiler, including OpenACC, OpenMP, and more.
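To illustrate the abstraction level Strickland describes, here is a trivial, hypothetical OpenCL kernel (a simple vector addition of our own devising, not Altera's code). It reads like ordinary C; the vendor's offline compiler, not the developer, is responsible for turning it into pipelined hardware and handling the external memory interfaces and other low-level work an HDL programmer would otherwise spend months on:

```c
// Illustrative OpenCL C kernel (hypothetical example).
// The FPGA vendor's compiler synthesizes this into a hardware
// datapath; the developer never touches Verilog or VHDL.
__kernel void vector_add(__global const float *a,
                         __global const float *b,
                         __global float *result)
{
    int gid = get_global_id(0);      // index of this work-item
    result[gid] = a[gid] + b[gid];   // one element per work-item
}
```

The same kernel source can, in principle, target a GPU or a CPU as well, which is exactly the portability argument that OpenCL proponents make for FPGAs.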
The data and external memory interface management have come a long way over the last few years, Strickland says, and these are the core improvements that moved FPGAs and their OpenCL framework over the accessibility border, well past the type of low-level HDL work that gave FPGAs a bad rap, usability-wise, for a number of years.
While Strickland would not explain in more detail how we went from the $1 billion total addressable datacenter market to the one-third of all cloud datacenters by 2020 figure (and the mathematics on the Intel side continue to make eyes spin), there is no doubt that this is, as predicted before any of this blew up, the year of the FPGA.