Ahead of The Next FPGA Platform event that we hosted recently in San Jose, we talked to Manoj Roge, vice president of product planning and business development at Achronix, about the three waves of FPGAs that have occurred over the past three decades, and in the course of our live conversation, we got a little more insight into the addressable market for FPGAs and also talked about the fourth wave, which is just starting now.
While Achronix was founded in 2004 and got its first products into the field in 2007, it is the upstart in the market compared to programmable logic industry pioneers Altera (now part of Intel), Xilinx, and Lattice Semiconductor, which were established two decades earlier. But Roge spent a decade as a product line manager at Cypress Semiconductor, followed by a five-year stint managing FPGA products at Altera and seven years in the same role at Xilinx, before joining a considerably more aggressive Achronix a little more than two years ago. Roge is bullish about the prospects for FPGAs in the datacenter and at the edge, and explained in his live interview from The Next FPGA Platform event on January 22, 2020 a little more about this next wave of adoption of these programmable devices and how it is different.
“We believe that FPGAs will have a role as a programmable accelerator for deployments from cloud to edge to IoT,” Roge explains. “With that view, five years ago we set up a business model to license our IP to be integrated into a customer’s SoC or ASIC, and we believe that could be the fourth wave. We are already seeing this beginning.”
We suggested that perhaps Achronix could get an FPGA license deal with Nvidia, which might want to think about embedding programmable logic in its GPU accelerators and possibly in the SmartNICs based on the “BlueField” multicore Arm processors that are sold by Mellanox Technologies (which is not yet part of Nvidia since the $6.9 billion acquisition deal announced last March has not cleared all regulatory hurdles). The “Volta” GPUs are an ASIC with hard coded integer and floating point units, and given how quickly AI frameworks and HPC workloads are changing, we can envision that some portions of the GPU might be made more malleable in terms of their bitness and interconnectivity than others. In theory, at least.
As we talked about earlier, the combined revenues of the four big FPGA suppliers are on the order of $6.5 billion or so, and the total addressable market for FPGAs is considerably larger – how much larger is a matter of debate, since FPGAs can, in theory, do a lot of the work that CPUs and GPUs and custom ASICs do. We put Roge on the spot about how this fourth wave, which includes licensed FPGA logic as well as whole devices, would expand the TAM and therefore, presumably, the revenue streams into the FPGA makers for datacenter products.
“We take a top down approach,” says Roge. “If you look at datacenter infrastructure spending, it is a huge number – around $100 billion or so. A big portion of that is on the technology spend. Even if we can get a small piece of it, as others have discussed earlier, you can’t just deploy CPUs to meet the exponential increase in compute requirements. We expect an incremental $10 billion plus TAM with this new wave of datacenter acceleration.”
So that raises the question of whether or not the existing TAM from networking, communications, aerospace, defense, and consumer markets plus this expanding datacenter TAM will be enough to support four major FPGA players and a bunch of smaller ones. We have certainly seen this happen with the CPU market, which has one major supplier (Intel), one returning big supplier (AMD), one surviving supplier of proprietary and RISC chips in relatively low volumes but with high profit systems (IBM), and two Arm upstarts (Ampere and Marvell). And ditto for GPUs, where Nvidia dominates, AMD is an increasingly credible alternative, and Intel is an also-ran on PCs but has aspirations in the datacenter. Three or four players, with one dominant, seems to be the magic distribution.
The other question that remains unanswered is how FPGAs will be deployed in the datacenter. With the GPU and the CUDA hybrid CPU-GPU compute environment created by Nvidia, half of the desktop and laptop PCs in the world could run CUDA by virtue of having an Nvidia GPU in them. We do not expect FPGAs to be automatically added to either PCs or servers, so there is no built-in way for developers to learn how to program FPGAs and test out their ideas of how to integrate them into the application and data workflow. But, as Roge points out and as we concur, having FPGA instances available on the Amazon Web Services and Microsoft Azure clouds is a reasonable substitute for experimenting, and now, with all of the flexibility in interconnects coming to servers and networks, actual deployments will probably disaggregate pools of FPGAs from servers and compose them on the fly as needed, across the network fabrics, just as we expect to see with GPU compute and various kinds of persistent memory storage.
“Once we get maturity on the software stack, we think FPGAs will be going mainstream, from datacenter to cloud to edge,” Roge believes.
And we agree that there is potential for this to happen. It will all come down to cases, as it always does.