Intel Is Still Struggling In The Datacenter, But It Could Get Better

Intel has been pushing its two-track server CPU core strategy, in one form or another, for so long that we have become accustomed to differentiating products the way Intel does and then trying to figure out what workloads these chips might be useful for.

The Atom and E-core chips, which have their heritage in Intel’s laptop processors and which are aimed at energy efficiency, are minimalist designs that deliver high throughput per socket on modest workloads. The true Xeon cores – now known as P-cores, short for performance – are distinct cores with a different but overlapping feature set and higher performance per core, which is important for the single-threaded workloads that are common in the IT estate.

With the upcoming “Diamond Rapids” Xeon 7 P-core variants, Intel’s chip architects – many of whom no longer work at the company – decided to remove simultaneous multithreading, known as HyperThreading in the Intel architecture, from the design. The idea, we surmise, was to take the overhead of SMT out of the design. SMT presents two virtual threads per physical core, which can boost throughput at the expense of slightly lower single-threaded performance, and it also opens up another attack surface for security vulnerabilities, which is why many Arm server CPU designs do not have it.

To SMT or not to SMT is a pesky question, and Intel has vacillated here. The original Atom processors from a decade and a half ago had it, then it was removed with the “Silvermont” cores in 2013 and was never added back to the E-cores (code-named “Gracemont,” “Crestmont,” and “Skymont”). Given that chips these days have a lot of physical cores, some of the P-core CPU designs for desktops and laptops had SMT removed to improve their performance and efficiency, and this carried into the high-end Xeon server CPU line with Diamond Rapids, which is based on Intel’s 18A process (roughly akin to 2 nanometers) and which is expected to ship in the second half of this year. High-bin Diamond Rapids Xeon 7 parts will have four compute tiles and a total of 192 cores.

Over the past several months, after Kevork Kechichian came from Arm to be general manager of the Data Center Group (they are no longer calling it DCAI except in the financial reporting), Intel has decided to can the eight-channel variants of Diamond Rapids and focus on high-end 16-channel parts that are aimed at big workloads, including database servers, HPC systems, and AI host nodes. With 18A still ramping and in relatively short supply, Intel has to pick its production targets very carefully to chase the dollars.

Lip-Bu Tan, Intel’s chief executive officer, said on a call with Wall Street analysts that Intel was working to accelerate the delivery of the follow-on “Coral Rapids” Xeon 8 P-core part, which would add SMT back into the design. This chip was originally slated for the second half of 2027 to early 2028, and we will see how quickly Intel will be able to get it out the door.

We think that one of the ways Coral Rapids might be accelerated to market is to use an advanced variant of the 18A process instead of the 14A process that was expected. So far, Intel Foundry has no external customers for 14A, and the company is very clear that it needs one for the ramp to proceed. Hopefully, 18A is not the new 14 nanometers, a process that Intel was stuck at for way too long as Taiwan Semiconductor Manufacturing Co pushed down into 7 nanometers and 5 nanometers with its 6 nanometer and 4 nanometer tweaks.

To be fair, Intel is still kinda stuck at the 10 nanometer SuperFIN and Intel 7 processes for parts of its Xeon 6 chips even as it uses Intel 3 (something around a 4 nanometer to 3 nanometer process) for core tiles. With the “Clearwater Forest” Xeon 7, which is an E-core design expected in the first half of this year, the I/O tiles are etched using Intel 7, the base tiles are etched using Intel 3, and the core tiles are etched using 18A. This may be a choice based as much on the relatively low volumes expected for Clearwater Forest as on anything else. The E-core Xeon 6 processors have not exactly taken the world by storm, but there is some interest, and any manufacturing on 18A helps that ramp along and also helps cover the cost of that ramp.

Anyway, Coral Rapids might be the first Intel processor to integrate NVLink Fusion ports to attach to Nvidia memory fabric switches and GPUs in a coherent fashion. There is speculation that the Coral Rapids chip will support DDR6 main memory, and up to four memory sticks per channel for a big boost in main memory capacity for server nodes.

If there was one big bummer in the Intel financial report, it was the admission that Intel could not meet demand for its Xeon processors on any process node because of supply constraints with Intel 7 and Intel 3 and the fact that it has to balance the needs of client device builders against server builders.

“Obviously, we are shifting as much as we can over the datacenter to meet the high demand,” said Dave Zinsner, Intel’s chief financial officer, on the call. “But we can’t completely vacate the client market. So we are trying to support both as best we can and obviously work our way out of this supply issue. I do believe that the first quarter is the trough. We will improve supply in the second quarter. And part of the challenge is that in the third and fourth quarter of 2025, we lived off of supply. But we also had a reasonable chunk of fixed finished goods inventory to also work through. Unfortunately, that is now down to kind of 40 percent of what it was at peak levels. So we don’t have that to rely on. It is just literally hand to mouth – what we can get out of the fab and what we can get to customers is how we are managing it.”

Elsewhere in the call, Zinsner said that Intel was prioritizing internal wafer supply to Xeons and leveraging an increased mix of externally sourced wafers for its client devices. It is good that Intel has that option, but that does not help the ramps and it might be more costly than using internal capacity at Intel Foundry. (Then again, it may be cheaper, and on second thought, we think it might be.)

Everybody has their eyes on the 14A process, which Intel has said it will not put into production until it lines up external customers – perhaps later this year or early next. In the meantime, development continues so that Intel can do the ramp relatively quickly when it does get the go-ahead, and we strongly suspect that there will be political pressure on chip companies like Apple, Nvidia, and maybe even AMD to source some of their chips on 14A if the ramp is not terrible.

“Intel 14A development remains on track,” Tan said on the call. “We have taken meaningful steps to simplify our process flow and improve our rate of performance and yield improvement. We are developing a comprehensive IP portfolio on Intel 14A, and we continue to improve our design enablement approach. Importantly, our PDK is now viewed by customers as industry standard. Engagements with potential external customers on Intel 14A are active. We believe customers will begin to make firm supplier decisions starting in the second half of this year and extending into the first half of 2027. We also have the opportunity to provide strong differentiation in advanced packaging, particularly with EMIB and EMIB-T. We are focusing on improving quality and yield to support customer desire for ramps beginning in second half of 2026.”

In the meantime, AMD and Nvidia will be competing hard against Intel, and TSMC is absolutely not going to let up at all as it ramps its American fab capacity to do its part to help prevent World War III.

And with that said, let’s go over the numbers for Intel in the final quarter of 2025.

In the fourth quarter, Intel’s revenues were down 5.2 percent to $13.67 billion, and operating income swung to a gain of $580 million from a $401 million operating loss in the year-ago period.
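Working backwards from those percentages gives the implied year-ago figures – our back-of-envelope derivation from the growth rates above, not reported line items:

```python
# Back-of-envelope check on Intel's Q4 2025 numbers as reported above.
# The implied year-ago revenue is our derivation, not a reported figure.

q4_2025_revenue = 13.67  # $ billions, down 5.2 percent year on year
q4_2024_revenue = q4_2025_revenue / (1 - 0.052)
print(f"Implied Q4 2024 revenue: ${q4_2024_revenue:.2f} billion")  # ~$14.42 billion

op_income_2025 = 0.580   # $ billions, operating gain
op_income_2024 = -0.401  # $ billions, operating loss
swing = op_income_2025 - op_income_2024
print(f"Operating income swing: ${swing * 1000:.0f} million")      # $981 million swing
```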

Intel Foundry continues to be a drag on the company, with revenues – almost exclusively coming from the Client Computing Group and the Data Center Group – of $4.51 billion, up a tenth of a point year on year. But operating losses at the foundry, thanks to the ramps of Intel 7, Intel 3, and 18A and development for 14A, grew to $2.51 billion.

Intel’s current in-house Core and Xeon CPU volumes can carry the company for a while if need be, and that seems to be the plan. All profits from the CPU products are propping up the foundry – which is how it has been for the past decade, since Intel hit the 10 nanometer wall and was knocked flat.

Here is the big table showing the numbers since Q1 2023:

What we care about here at The Next Platform is the datacenter business, which has thankfully been consolidated back into a single Data Center Group thanks to the sale of the flash storage business and the spinoff of the Altera FPGA business. Now, everything datacenter is in one place and we don’t have to model it. We can see it.

What we see in Q4 2025 is that Intel had $4.74 billion in sales for what is still called the Data Center & AI group in the financial reports but which is going by the old Data Center Group name again, up 8.9 percent year on year and up 15.1 percent sequentially. This is not GenAI Boom growth, but it is not decline, either. Moreover, with $1.25 billion in operating profits, the Data Center Group has increased its profitability by a factor of 3.3X year on year and by nearly 30 percent sequentially. This is good, given all the circumstances, and is reflective of the demand for high-end CPUs for HPC and AI systems, which don’t have a lot of CPUs but do tend to use the most expensive ones, given the task of keeping even more expensive GPUs or XPUs fed with data.
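Those growth rates imply the prior-period Data Center Group figures below – again, our back-calculations from the percentages quoted above, not reported numbers:

```python
# Implied prior-period Data Center Group figures, derived from the growth
# rates quoted above; these are our back-calculations, not reported values.

dcg_q4_2025_revenue = 4.74                   # $ billions
print(f"{dcg_q4_2025_revenue / 1.089:.2f}")  # implied Q4 2024 revenue, ~$4.35 billion
print(f"{dcg_q4_2025_revenue / 1.151:.2f}")  # implied Q3 2025 revenue, ~$4.12 billion

dcg_q4_2025_profit = 1.25                    # $ billions
print(f"{dcg_q4_2025_profit / 3.3:.2f}")     # implied Q4 2024 operating profit, ~$0.38 billion
print(f"{dcg_q4_2025_profit / 1.30:.2f}")    # implied Q3 2025 operating profit, ~$0.96 billion
```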

It is hard to say what the steady state rate of revenue and profitability will be for the Data Center Group as we go forward, but it will almost certainly never again attain the revenue levels of 2020 through 2022 or the profitability levels of 2017 through 2020. A steady state for this business might be somewhere around $6 billion a quarter in revenues and maybe $2 billion in operating profits – and that is if everything goes right. We think the future is Arm chips accounting for 25 percent of server revenues, with Intel and AMD arguing over the remaining 75 percent and fighting to be the one with a 40 percent share compared to the other’s 35 percent share.

Last note: AMD’s Epyc designs, which double up core counts by cutting the cache in half, are a much cleaner way to get two different types of processors without having to change the feature set of the cores. Intel might want to think about that.