The Tough Road Still Ahead For Intel In The Datacenter

A few years back, when Intel went up on the rocks with its CPU and GPU designs largely because its chip research and manufacturing did not keep pace with the manufacturing and packaging advances made by foundry rival Taiwan Semiconductor Manufacturing Co, we said that we were rapidly moving towards a world where Intel might have 40 percent of the CPU market, AMD might have 40 percent, Arm and RISC-V would fight over 19 percent, and the remaining 1 percent would go to other exotic datacenter compute engine chippery.

Nothing we heard on the call with Intel’s top brass yesterday going over the company’s fourth quarter of 2023 financial results and nothing we see going on in the market leads us to believe this scenario is not the most likely new equilibrium point.

And with Nvidia having great control over the HBM stacked memory market and the advanced 2.5D packaging used in these devices thanks to its newfound, Gilded Age class wealth, we don’t see a world where Nvidia won’t command somewhere between 85 percent and 90 percent of GPU revenues over the next four years – no matter what AMD, Intel, and others do to create matrix math accelerators to run AI and HPC workloads.

It is against this expectation – a 40 percent CPU market revenue share and maybe a single-digit AI and HPC accelerator share – that we will gauge Intel’s performance in the datacenter going forward. If everything Intel does in compute engine design and manufacturing allows the company to do better than that, we feel it should be considered a win given the state of the competition in the datacenter, which doesn’t just come from Nvidia and AMD but also from the very customers that drove Intel to new heights during the unprecedented and massive hyperscaler and cloud buildouts in the past decade.

Intel has been struggling for so long that it might be hard to remember a time when Intel had 97 percent of CPU revenue share in a rapidly expanding market for datacenter compute and was regularly bringing 50 percent of its revenues down as operating income. That feat was as remarkable as its end was inevitable, thanks to missteps in manufacturing and intense competition from AMD and Nvidia.

Intel has changed its product groups since Pat Gelsinger returned to Intel three years ago to get the ship off the rocks, repair it, and get it back into the water, so comparisons are difficult. But in the chart above, you can see how the Data Center Group and then the combination of the Data Center & AI Group and the Network & Edge Group have done in terms of revenues and operating income since the Great Recession.

That timing on our chart above is no accident. The first quarter of 2009 was when Intel launched its four-core “Nehalem” Xeon 5500 processors, which came to market just as AMD had issues with its Opteron product line and companies were in no mood to take big risks.

Intel’s relentless advancement in CPUs between 2009 and 2012 essentially drove AMD out of the datacenter CPU market, giving Intel great pricing and profit power, which it used to such an extent that it actually fostered all of the competition that is giving it grief today. This competition would have come even if Intel had not had such dramatic problems with its 10 nanometer and then 7 nanometer manufacturing processes, which made a mess of its CPU product roadmaps and which gave AMD the opening in 2017 to recommit to server CPUs and to start advancing its designs and take market share step by step.

At this point, Intel has some design wins with its compute engines, but it is largely winning business based on the fact that it can supply a server CPU when AMD is sold out of its current or prior Epyc processors, which generally offer more throughput performance for less money. There are exceptions when it comes to vector and matrix math or HBM memory or on-chip accelerators for special functions or big iron configurations with four or eight CPUs in a shared NUMA configuration to run big databases. But these are the exceptions, not the one-socket and two-socket general purpose server rule.

The bottom that Intel has hit is jarring, and that is why having a tireless and optimistic CEO, as Gelsinger most certainly is, who learned directly from Intel co-founders Gordon Moore and Andy Grove, is important. Intel may reach its goal of five process nodes in four years, which is important as it delivers server and PC chips based on its 18A process, but that competition is not going away. With the hyperscalers and cloud builders designing more of their own chips, Intel’s compute engine business can only expect a reasonable share of the total market in the datacenter. And as Intel gets its foundry and packaging act together – which we have no problem believing Intel will do – perhaps Intel can even capture a sizeable share of the foundry business for advanced devices from TSMC and Samsung. But what we do not believe is that Intel can ever be as profitable in the datacenter as it was when it had hegemony in the last huge gasp of general purpose CPU computing in the datacenter.

Intel can never get back there. This is like the position of the United States in the wake of World War II. And given that, our expectations for Intel – and its own expectations for itself – have to be more reasonable. There is always a chance that TSMC, AMD, and Nvidia will all screw up bigtime at the same time, but we wouldn’t count on it.

Intel’s Shrinking Datacenter Footprint

Make no mistake, Intel is better off at the end of 2023 than it was at the end of 2022. In the quarter ended in December, Intel brought in $15.41 billion in revenues, up 9.7 percent year on year, and posted a net income of $2.67 billion, a shift from a $664 million loss in Q4 2022. The company burned $3.3 billion in cash to get through last year above and beyond its cash flow, but it still has slightly more than $25 billion in the bank, and that gives Intel maneuvering room if it doesn’t do something silly with that cash. Like make blockbuster software acquisitions or expansive FPGA acquisitions as it did in the past.

In the quarter, the Client Computing Group, which makes CPUs, GPUs, and chipsets for PCs and tablets, had revenues of $8.84 billion, up 33.5 percent, and an operating profit that more than quadrupled to $2.89 billion. We don’t care much about the client chip business except for the fact that at AMD, Intel, and even Nvidia to a certain extent, those client chips help push the design envelope and help to pay the bills, and thus support datacenter projects either directly or tangentially. So hooray for Intel CCG for making money again. It was just about the only profit Intel had excepting its Mobileye edge AI stuff. Which, again, we don’t care about except for the maneuvering room it gives Intel to fix its foundry and therefore its competitiveness in chips.

What we do care about a great deal is the Data Center & AI Group, which is called DCAI these days by Intel. In the fourth quarter, revenues were down 7.4 percent to $3.99 billion, and operating income contracted by a factor of nearly five to a mere $78 million. We don’t have a good comparison for this DCAI group because it was only created in 2022. (We will talk about Intel’s “real” datacenter business in a second.)

As we have been pointing out for the past year, aside from AI servers that are being deployed for training and inference, there is a recession in general purpose computing in the datacenter, and even Intel concedes that the total addressable market for datacenter CPUs is continuing to contract. With competitive pressures and an inventory writedown for its Programmable Systems Group, which is in the process of being spun off after gussying up its FPGA product roadmap to better compete with Xilinx, DCAI only managed to wring out $78 million in operating income.

That is about half the revenue and none of the income of the peak of the old Data Center Group, of which Gelsinger was the first general manager.

Any incremental revenue that Intel can get in server CPUs will add to its profits, but it will need to double those revenues to get to what will no doubt be a lower level of profitability.

The only real gating factor on profits is how much Intel and AMD want to compete on price as well as performance with each other. At this point, with Arm server CPUs on the rise at the hyperscalers and cloud builders and offering somewhere between 30 percent and 40 percent better bang for the buck than X86 instances, we think neither Intel nor AMD can afford to take on the Arm collective on price and will be very keen to maintain pricing for X86 chips that support legacy X86 applications. This is a good thing for both companies in that it will provide some profits. You can bet the cloud builders will continue to pass through that premium price for X86 compute to customers, with a little extra in there. Intel and AMD will grin and bear the Arm and RISC-V competition because doing anything else will only lower revenues and profits further. This is how mainframe makers dealt with proprietary minicomputer competition, how they dealt with RISC/Unix competition, and how they in turn dealt with X86 competition in the datacenter. Prices will band to avoid cross-band competition, and chip makers and server makers will get what they get.

Intel’s “real” datacenter business is not just located in the DCAI group. The NEX group also sells datacenter products.

In the fourth quarter, NEX saw revenues drop by 28.6 percent to $1.47 billion, and it posted a $12 million operating loss compared to a $58 million operating gain in the year ago period. Considering the revenue contraction, that operating loss was not so bad.

Add them together and the first approximation proxy for the old Data Center Group had revenues of $5.46 billion, down 23.1 percent compared to Q4 2022, and $66 million in operating income, compared to $429 million a year ago and representing only 1.2 percent of revenue.
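For readers who want to check the math, here is a small sketch of that first approximation, using the DCAI and NEX figures from the paragraphs above (all values in millions of US dollars):

```python
# Q4 2023 segment results, in millions of US dollars, as reported above
dcai_rev, dcai_op = 3_990, 78    # Data Center & AI Group revenue and operating income
nex_rev, nex_op = 1_470, -12     # Network & Edge Group revenue and operating loss

proxy_rev = dcai_rev + nex_rev           # proxy for the old Data Center Group
proxy_op = dcai_op + nex_op
op_margin = 100 * proxy_op / proxy_rev   # operating income as a percent of revenue

print(proxy_rev)              # 5460 -> $5.46 billion
print(proxy_op)               # 66   -> $66 million
print(round(op_margin, 1))    # 1.2  -> 1.2 percent of revenue
```

The rounded revenue figures introduce a little slop, but the sums land exactly on the $5.46 billion, $66 million, and 1.2 percent cited above.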

That is nowhere near the 45 percent to 50 percent level that Intel enjoyed for so long. And we think that if Intel got back to something akin to a 20 percent to 25 percent level, it would be doing alright compared to the competition. Maybe it can get to 30 percent if there is a sudden boom in X86 serving in the datacenter or at the edge. But probably not more than that.

Here is how the various Intel groups have fared in the past several years in terms of revenues:

And here is the underlying table that shows operating income and share of revenue for the five existing groups and the dead Accelerated Computing Group (AXG), which had its revenues put into DCAI and CCG starting in Q1 2023:

What we want to know every quarter, no matter how Intel talks about its business, is what its “real” datacenter business looks like in terms of revenues and operating income. Not all of the NEX revenue is related to the datacenter, and some of the things that Intel puts into the Other category are.

Our best guess is that Intel’s “real” datacenter business had sales of $5.35 billion in Q4, down 19 percent, and had an operating profit of $67 million.

The question now is where can this “real” datacenter business go from here? In Q4 2019, Q1 2020, Q2 2020, and Q4 2021, revenues kissed or broke through the $9 billion level. If the general purpose server market burns up its excess capacity and if Intel can get an appreciable penetration with its AI and HPC accelerators – that last one is a big if – then there is a chance that it might get above $6 billion per quarter and maybe even get back above $7 billion per quarter. But we think it will take a lot more time and TAM for it to ever get back to $9 billion a quarter again.

Yes, having “Clearwater Forest,” the first Intel Xeon SP etched in 18A, in the fabs for manufacturing is a good thing. Yes, shipping 2.5 million “Sapphire Rapids” Xeon SPs, with over a third of them being used in AI systems, is a good thing. And maintaining market share in datacenter CPUs instead of slipping more is good, too. Having chips that can do more inference on the CPU is also a good thing. These are good starts on the road back to Intel having more datacenter compute engine market share.

But Intel has very little traction with its “Ponte Vecchio” Max Series GPUs and only some traction for its Gaudi2 accelerators and hope for its Gaudi3 follow-on because there is not enough matrix math capacity in the world. All things being equal, most companies want an Nvidia or AMD GPU for this AI training and inference work these days. Intel said that its accelerator pipeline “is now well above $2 billion and growing” and was up double digits sequentially in Q4, but that is against a market that might have consumed $38 billion of datacenter GPUs in 2023 and might do as much as $48.7 billion this year. Intel’s pipeline is 2.3 percent of 2023 and 2024 estimated GPU revenues, which is noise in the GenAI cacophony. And it is a long time before Intel will have its “Falcon Shores” converged GPU-Gaudi architecture out the door.
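That 2.3 percent figure comes straight from the numbers in the paragraph above; a quick sketch of the arithmetic, taking the conservative $2 billion floor of Intel’s stated pipeline:

```python
# Intel's stated accelerator pipeline versus the estimated datacenter GPU
# market cited above, in billions of US dollars
pipeline = 2.0     # "well above $2 billion" -- we use the floor of that claim
gpu_2023 = 38.0    # estimated 2023 datacenter GPU revenues
gpu_2024 = 48.7    # estimated 2024 datacenter GPU revenues

share = 100 * pipeline / (gpu_2023 + gpu_2024)
print(round(share, 1))   # 2.3 -> percent of the combined two-year GPU market
```

Even if the pipeline were twice the stated floor, Intel would still be under 5 percent of that two-year total, which is why we call it noise.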

Having said all of that, the IT market really needs Intel to compete on the CPU, GPU, and foundry fronts. And however painful it looks now, Intel will hang in there and it will reconfigure itself for a much harder future, and take a bigger piece of it than it has at the moment.

We are absolutely convinced of this, as much as we ever were about the resurgence of AMD and the growth of Nvidia from scratch in the datacenter.


8 Comments

  1. Intel Xeon on a gross basis 22,606,432 units at $985.59 and on a NET basis I rely for ‘all up’ on the inframarginal units revealed = 44,844,979 units at $474.86 that after tax is right around cost. CEO Gelsinger claims 2.5 M Sapphire Rapids in the year so what is all other Xeon and complement related production? Sapphire Rapids channel available volume + 73.6% q/q and Ice channel available + 12.3%. Cascade Lakes showed a 2P workstation refresh in q3. The question is percent weight of other DCAI product lines? ER, SR, Ice, Cascade, D has no availability in the channel since 27/17xx series I presume D rejected against ARM in base stations. NEX definitely hit bottom in q4. Leaves the wild card as Gaudi plus whatever FPGA I scored FPGA 1.4 M in 2022. I score all DC+AI on a net basis covering all costs = 50,120,188 units in 2022 leaving 2023 < 10.6%. Desktop 2023 I have net 91,383,082 units at $111.25 each. Mobile 2023 173,067,963 units at $110.32. The gross revenue conversion is + 63%. Intel on a net basis minimally 309,296,024 for the year and AMD approximately with dGPU and embedded 109 million units or 26% production share. I think it's likely Intel may have produced as many as 370ish M units in 2023 and q4 operating margin at $93 M is an indicator on net take per unit $1 each across 93 M units. mb

  2. Adding, when CFO Zinsner came on board his first quarterly financial call, reflected, said in so many words including related to CapEx expansion his function was auditing, managing “an at cost business”. mb

  3. “Nothing we heard on the call with Intel’s top brass yesterday going over the company’s fourth quarter of 2023 financial results and nothing we see going on in the market leads us to believe this scenario is not the most likely new equilibrium point.”

    If China invades Taiwan, what happens to the supply of chips for AMD and NVIDIA (and Apple)? Intel has CPU fabs outside that area.

    • They go very quickly to zero until the US and the EC recognize China’s right to Taiwan, at which point, the chips flow again. Or we go to nuclear war and they never flow again. Those are the two paths I see.

    • Tell me you know nothing about semiconductor production without telling me you know nothing about it. First don’t you find it strange that Intel themselves had to turn to TSMC due to their own foundry problems, then you only point out AMD, NVDA and Apple using them? Plus, TSMC has a united states facility in Arizona of which can and will be used for 4nm production, and expansion coming online to keep ahead of Intel tech. AMD can also use Global Foundries which was the FABs unit they spun out a while back, and then it has partnerships available in Tyler, Tx with Samsung and their new 2nm capabilities. And finally the AMD is well aware of the US based chips ACT for the Taiwan/China issues that could arise. Leading to proper supply chain contingencies in place for a US govt, which can’t seem to buy EPYC & Instinct fast enough. I.E all the current super computers are HPE/CRAY based with AMD! As well as some really big on-prem and cloud deals for their DoD entities. #justsayin’

  4. Intel said “Sierra Forest has final samples at customers and the production stepping of Granite Rapids is running ahead of schedule well into power-on validation and very healthy.”

    Those chips will share a platform that offers MCR DIMM support at 8000MT/s and CXL 2.0. The Granite Rapids chips will offer dual AVX512 units per core and the per core AMX tiled matrix acceleration. All this on Intel-3, which allows them to double the core counts… maybe quadruple on Sierra Forest.

    • It’s a better server CPU lineup, to be sure. But AMD is not sitting still, and the big money is clearly in selling GPUs, which Intel is not competitive with even with its ambitious designs.
