Intel To Set Its FPGA Unit Free To Pursue Its Own Path

Maybe Intel chief executive officer Pat Gelsinger has spent too much time at EMC and VMware, because now Intel wants to spin out the FPGA business that is a small but bright spot in its datacenter and edge computing businesses.

It never made a lot of sense that EMC, the maker of traditional storage arrays and the poster child for datacenter-class storage during the Dot Com boom, snapped up VMware in December 2003 for $635 million in cash just after that boom had gone bust. EMC was looking for some place where it could park its cash and get a great return on its investment. But no one would argue with the idea that maybe VMware should have just gone public on its own. As it was, EMC had to maintain Switzerland status for VMware so its partners would not abandon it for other server virtualization platforms from Microsoft and Red Hat, and this is always a hassle. And so, EMC never did build a hybrid server-storage platform that changed the architecture in the datacenter.

To be fair, these deals always make some sense in the financial engineering way, just like it made sense for Michael Dell and Silver Lake Partners to try to get rich buying EMC (and therefore VMware) for an incredible $67 billion in October 2015, and in just the same way that it eventually made sense to spin off VMware so it could finally be free – just so Broadcom could swoop down from the skies to buy VMware from Wall Street for $61 billion for its own financial engineering reasons. There were more twists and turns in there – including Michael Dell taking the company that bears his name private two years before announcing the EMC deal to avoid the sharp tongues of Wall Street.

Importantly, Dell also did not use VMware and EMC to build a new kind of server-storage-networking hybrid in the datacenter – something that Nutanix has tried to do – and we do not think Broadcom is trying to do this. These deals are about installed bases and subscription and maintenance streams and extracting maximum value from customers who are more or less trapped on the ESXi/vSphere stack.

When the rumors were going around back in March 2015 – the same month we started publishing The Next Platform – that Intel might buy FPGA maker Altera, we did a deep dive analysis of the state of the FPGA market and how it could be transformed by SmartNICs and other kinds of high speed computing where the FPGA is an excellent choice over a CPU or a GPU. It really comes down to software. Dedicated hardware with functions etched into ASICs is more efficient and CPUs running higher level software algorithms are more flexible, but the FPGA can sort of meet algorithm and function writers in the middle with programmable logic plus hard-block components like DSP engines. The problem is, you have to know how to code VHDL. You can convert C and C++ to VHDL, but ultimately you need to get in there and tune it for maximum performance.
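To make the C-to-gates point above concrete, here is a minimal sketch of what a high-level synthesis (HLS) flow looks like from the developer's side: plain C++ that a tool such as AMD's Vitis HLS could compile down to RTL. The pragma is a hint to the synthesis tool and is ignored by an ordinary C++ compiler; the function and constant names are ours, for illustration only, and real designs still need hand tuning of exactly this kind of code to hit timing.

```cpp
#include <cstdint>
#include <cstddef>

// Number of filter taps; small and fixed so the loop below can be
// fully unrolled into parallel hardware.
constexpr std::size_t TAPS = 4;

// A tiny FIR dot-product kernel, the classic HLS "hello world".
// On a CPU this is just a loop; in an HLS flow each multiply-add
// can be mapped onto a DSP hard block in the FPGA fabric.
void fir_filter(const int32_t coeff[TAPS], const int32_t sample[TAPS],
                int32_t *out) {
    int32_t acc = 0;
    for (std::size_t i = 0; i < TAPS; ++i) {
        // In Vitis HLS, this pragma asks the tool to unroll the loop
        // so all four multiply-adds happen in the same clock cycle.
        // A regular compiler emits an "unknown pragma" warning at most.
#pragma HLS UNROLL
        acc += coeff[i] * sample[i];
    }
    *out = acc;
}
```

The catch the article describes is exactly here: the C++ compiles and runs anywhere, but getting the synthesized hardware to meet frequency and resource targets means iterating on pragmas, loop structure, and data widths – which is where VHDL/RTL skills creep back in.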

When Intel did buy Altera in June 2015 for what seemed like an excessive $16.7 billion, we were also thinking that FPGAs could transform the datacenter. It was clear to us that in a lot of cases, an FPGA would be just right for workloads that do not change too much and yet needed low latency and high performance, albeit at a price/performance that could not match custom ASICs, which were getting more expensive with every node jump. (It is insanely costly at the 3 nanometer node.) There was talk about an FPGA being embedded in a third of the hyperscaler and cloud servers by 2020 – something that was driven in large part by the popularity of FPGAs on Microsoft’s Azure cloud to accelerate workloads and offload network and storage functions from the CPUs in servers. There was talk of hybrid CPU-FPGA packages, which never seem to get commercialized because no system architect likes static ratios of compute – unless they are determining the ratios, like the hyperscalers and cloud builders, who can tell companies like Intel and AMD what their product roadmaps need to look like.

Intel hedged so many bets between 2015 and 2020 that it is hard to keep track of them all, but it started a GPU compute business for the third time, it bought Barefoot Networks and the Cray and QLogic InfiniBand interconnect businesses (which have all been sold off or shut down), and it bought up Nervana Systems and Habana Labs for a combined $2.35 billion in case custom ASIC neural network processors were going to replace CPUs, GPUs, and FPGAs. Intel looked like it could sell anyone anything. And what it ended up doing was selling very little of anything but “supply win” CPUs as others did a better job getting cutting-edge CPUs and GPUs into the field. FPGA sales just continued along their normal course because people who use FPGAs, and who have used them for decades, know what they are doing. The jury is still out on NNPs as far as we are concerned. (Sorry Cerebras Systems, SambaNova Systems, Graphcore, Groq, and Tenstorrent. We probably forgot to mention a few dozen others.) Intel has hedged so many bets that it forgot what the real bet was and it has therefore hemmed itself in.

And it is no surprise that Intel dropped Nervana’s NNPs like a hot rock for Habana’s devices – and has basically said that after Gaudi 3, which has taped out, the Habana line is over, too, and that some Habana technology – its matrix math engines and Ethernet interconnect – will live on in the future “Falcon Shores” GPU.

We wouldn’t place heavy bets on Falcon Shores making it to completion unless a big HPC center adopts it, and given how Argonne National Laboratory was treated, we don’t think there will be a lot of uptake unless Intel makes some pretty big pricing concessions. Which it can ill afford. Hybrid CPU-GPU devices – the original plan for Falcon Shores – have also been shelved.

So, the FPGA business formerly known as Altera is being spun out into a separate unit and will be run hands-off in the anticipation that Intel can do an initial public offering for that business sometime in the next two to three years, thereby setting Altera free again and “unlocking shareholder value,” as corporate behemoths say when they do this kind of spinoff a few years after an acquisition.

We find this annoying, and said three years ago, when Intel’s chief financial officer, Bob Swan, was running the company and Gelsinger had not been asked to take the job (as far as we know), that Intel needed to start engineering its future and stop trying to financially engineer its future. Spinning out the Mobileye edge AI inference business (as happened last year) or spinning out the FPGA business doesn’t really solve anything as far as we are concerned. Intel paid a fortune for these things, and it should give them a chance to be the businesses that they are and then benefit from them. Spinning them out weakens Intel even if it might – and we emphasize the might here – help an aggregate share price across Intel, Mobileye, and FPGA Co stocks when they are separate. Doing this tells Wall Street Intel doesn’t believe in its own stock, and at the very least, if Intel spins them out, then its stock should depreciate precisely by the amount of value it unlocks in the spinoffs.

The Altera team was a great team, and Sandra Rivera, who runs the Data Center & AI organization today, and Shannon Poulin, whom we have known for years in the Xeon organization, will do a good job as chief executive officer and chief operating officer of an independent FPGA company and can build a great team again that can compete with the Xilinx part of AMD. The FPGA market is not going away, even if it is not exploding, and there is an opportunity to capture some of that value.

The FPGA business is a good business, but as you can see – well, maybe, because those shades of blue are tough to parse – the datacenter portion of the FPGA business is not particularly large. (Our thanks to Aaron Rakers at Wells Fargo for allowing us to reprint this chart.) And it sure as heck is not as large as many of us thought it would become. But that doesn’t mean that Intel should not be in the FPGA business and that it should not have a great SerDes team and a great SoC team and a great algorithm team and infuse that across Intel. That is why AMD paid close to $49 billion to acquire FPGA maker Xilinx in February 2022.

By spinning out FPGA Co – here’s a crazy idea: You could call the spinout Altera – Intel can solicit private equity funding as well as public funding to help do the investing that Intel cannot right now afford to do to compete with AMD/Xilinx. The financial separation between Intel and Altera – why don’t we just call it Altera, people – starts with the Q1 2024 financials. By the way, Altera will keep its foundry relationship with Intel Foundry Services, but we presume Altera will have the option to go to Taiwan Semiconductor Manufacturing Co or Samsung as alternative foundries. Right now, if Intel did that with Altera still being part of DC&AI, it would be seen as a failure of its foundry roadmap.



  1. Hmmm, so in the mid-2010s Intel said to itself “We are running out of plus signs to add to the number ’14’, so let’s walk into a casino with a really big pile of chips, and randomly place medium piles of chips onto blackjack tables, roulette wheels, etc”… And now it is going around said tables and picking up the residual small piles of chips.

    Interesting biz plan, and perhaps related to my former HFT research data center moving from all Xeons to EPYCs.

    On the plus side, I did learn a lot of Italian city names.

  2. Agree with lots of what is said. Bottom line, FPGAs never made it into datacenters in any serious way, and not for lack of effort by both vendors. The FPGA is a wiring device. Its strength is connecting domains and prototyping. It is not meant for delivering performance per watt per dollar, which is what the datacenter business is all about.

    One more point: Programming FPGAs is still a game for folks who know VHDL/SystemVerilog and can read and understand timing reports. That part of FPGA compilation never became general purpose enough to bring in other developers, and that path is blocked.

    Summary: FPGA is a nice niche business. It is the Goldman Sachs of banking. Intel under Pat is saying that its future lies in the datacenter and FPGAs don’t belong there. The proof is already there in the balance sheets.

  3. FPGAs are great for prototyping exotic new hardware (and possibly the software to run on it, if any), but, for whichever reason, it seems that folks find barriers to entry into chip fabs low enough that ASICs are multiplying like rabbits these days (deep pockets?). Amazon, Apple, Google, Meta, Microsoft, Tesla, SiFive, and the NNPs listed in this TNP article (Cerebras, SambaNova, Graphcore, Groq, Tenstorrent) are all doing ASICs. Accordingly, I’m not sure that the prospect for FPGAs in the datacenter is very luminous, at least for “mass market” applications (high volume, lotsa moolah).

    FPGAs should shine where one needs only very few units of the target hardware, for highly specialized applications, where ASIC production would be futile due to low numbers. Examples would include control and communication interfacing equipment for CERN, FermiLab, an automotive plant, a prototype quantum computer; those sorts of low-volume applications (in my mind). Each of these requires its own specific size and configuration of the device, and many may not be particularly integrated within a datacenter environment, but rather within an industrial control system, or some R&D lab.

    To me then, spinning Altera back into its own outfit is quite sensible. It is a well-recognized brand name for FPGAs, with solid reputation. And it allows Intel to further enhance its focus on its core engineering area of expertise, away from hubris and vanity, into best-in-class products.

    • The FPGA market is as wide as the CPU market; you find FPGAs in all kinds of sizes and prices. So the FPGA can go all the way down to the embedded market (where you can get to very low power usage, especially if you are using very few cells) as well as into other more compute-intensive market segments.
      But yes, you are right that FPGAs are mostly worthwhile if you are shipping unit numbers up to the low four figures. Otherwise you might consider going to an ASIC, but ASIC development costs (not even including manufacturing) tend to be at least a low six-figure number. So you can simply do the math on which is the better option.

      As cumbersome as VHDL is, it is probably still a lot easier and cheaper to find an FPGA developer than an ASIC designer.

      • Good points.

        I just keep thinking that if I was building an exascale machine that needed to last a decade, I would use FPGAs. Coding the hard blocks, as the FPGA makers have had to do because of the size of their devices, militates against absolute flexibility.

        • With great flexibility comes great responsibility (of course), but also great challenges in timing closure … One’s mileage likely varies, but I could close at 48 MHz for a RISC-V on Lattice (with Yosys), but then less than 10 MHz for a pipelined ISA focused on symbolic processing … Hard blocks (ASICs) are definitely key for performance in computations that we know need to be performed (based on my Verilog experimentations). But the FPGAs are great to delineate what is advantageous, or worthwhile, to implement in hardware vs software, I think (e.g. garbage collection would quite definitely remain a software process, for me).

  4. If Optane, aka crosspoint, made sense, then it made sense in FPGAs. But it never made sense. If FPGAs could sell as high-ASP products by themselves, then Intel might be the company for them, but FPGAs take a lot of support, and the low-ASP products are necessary to develop a market. So a dedicated company makes more sense – a company with a culture around RTL design and useful interfaces.

  5. Great article Tim. So weird to watch Intel pivot like a Gen1 Roomba in a room full of furniture. Soon they’ll limit themselves to CPUs, GPUs, and foundries as products. If so, I think Intel is backing themselves into a corner where they hope their GPUs will finally emerge and compete with NVIDIA (they won’t). When the GPUs flop, they’ll have CPUs and foundries left, and if the latter hasn’t built capacity yet (it also won’t, because that takes time), Intel will continue its death march and the Xeon group will be the last to turn off the lights.

  6. Welcome back, Altera, we have missed you. It felt like we had to fight our way through layers of Intel-enforced cross to get to the real FPGA people.
    Now, please, AMD, realise your mistake and do the same for Xilinx before they are totally killed off the same way.
    When will management learn that FPGAs are more like ASICs, not CPUs, and that things like agile design don’t work in FPGAs, some of which take hours or days to do a compile.

    • I doubt that will happen with AMD. If you look at the quarterly results, it is Xilinx that has been floating AMD’s boat for the last three quarters. AMD has always been a shitty business.

  7. Developers, developers, developers.

    Had Intel made the APIs and tools to program those FPGAs public, they’d be worth money by now.
    Best guess is anything like that was killed by internal power struggles and a fear of losing control.

    Most comments above are correct: they are hard and expensive in time to work with, but that’s an area where open source tends to be a win.

    • Seems like a fair analysis to me. If electricity cost half as much or a tenth as much and process shrinks and packaging advancements slowed even more, we might be thinking about building more reconfigurable systems that could be in the field for ten years. I am thinking this might be one possible path forward. We can’t afford to be throwing $500 million to $1 billion machines away.
