AMD 3rd Gen Epyc CPUs Put Intel Xeon SPs On Ice In The Datacenter

SPONSORED Sometimes, bad things turn into excellent opportunities that can utterly transform markets. Many years hence, when someone writes the history of the datacenter compute business, they will judge AMD’s decision to tap Taiwan Semiconductor Manufacturing Co to etch the cores in its second and third generation Epyc server processors to be extremely fortuitous. That decision allowed AMD to leapfrog Intel a generation ago and set itself up for a sustainable process lead to go along with a parallel architectural advantage over its server CPU arch-rival.

We have not seen Intel knocked down so hard in the datacenter since AMD’s 64-bit Opterons, with their integrated memory controllers, multicore architecture, HyperTransport interconnect, and other advanced features, made the 32-bit Xeon server chips look ridiculous in the early 2000s. It wasn’t until Intel cloned many of the elements of the Opteron designs with its “Nehalem” Xeon 5500 processors in 2009 that it could field a server CPU that was technically and economically competitive with the Opteron alternatives.

History is repeating itself with the third generation Epyc 7003 series processors (formerly codenamed “Milan”), which came out in March of this year. (Our initial analysis of the SKU stacks is at this link and our deep dive into the Epyc 7003 architecture is here.) While Intel’s “Ice Lake” Xeon SP server processors, also the third generation of its most recent family, are a big improvement over their predecessors, they do not even come close to matching the Epyc 7003 series processors when it comes to single-core or total socket throughput performance. And when it comes to price/performance and compatibility with existing server designs, AMD is winning this matchup against Intel in datacenter compute – hands down. As we have said, Intel has improved considerably with its Ice Lake chips compared to the “Skylake” and “Cascade Lake” predecessors in the Xeon SP line. But AMD is cleaning Intel’s clocks. And caches. And vector units. And so on.

And now, we are finally getting the data to do competitive analysis pitting the AMD 3rd Gen Epyc chips against the Intel “Ice Lake” chips, and given how AMD is running a clean sweep, it is no surprise that Intel has brought back Pat Gelsinger to try to reinvigorate the Xeon SP lineup and save the server CPU business. AMD has broken through the 10 percent server shipment share level after seven years of research, development, and product rollouts and seems poised to double that share – and maybe more – because the company will have a sustainable architecture and manufacturing process advantage. (Our best guess is that about a year from now, AMD will have 25 percent server shipment share – with some big error bars around that number to take into account macroeconomic factors and Intel’s pricing and bundling reactions.)

“We are very excited about the momentum we are seeing across our customer base,” Ram Peddibhotla, corporate vice president of product management for datacenter products at AMD, tells The Next Platform. “And if you look at the kind of total cost of ownership savings possible from 3rd Gen Epyc versus Ice Lake, you can plough that into your core business and you are able to bring efficiencies to the business across the board. I have said this before, and I will say it again. The risk actually lies in not adopting Epyc. And if you don’t adopt Epyc, I think you are actually at a severe competitive disadvantage.”

It is hard to argue with that point at the server CPU level, particularly after you look at the performance comparisons we are going to do. And then let’s add in the fact that AMD is working with technology partners to bring Epyc chips to bear on particular software stacks and solutions that are relevant to the enterprise. This will significantly reduce friction in deals and drive enterprise adoption, as we have already seen with HPC centers, public cloud builders, and hyperscalers.

First, let’s look at some relevant performance matchups, and we will start with the SPEC CPU benchmarks that gauge integer and floating point performance. These are table stakes in the server CPU business; if you can’t deliver decent SPEC numbers, you won’t get hyperscalers, cloud builders, and OEMs to answer the phone when you call. If you look at the SPECspeed2017 and SPECrate2017 tests – which come in one-socket and two-socket versions with both integer and floating point performance ratings – AMD’s Epyc processors have the number one ranking in all 16 possible categories. (SPECspeed2017 measures the time it takes for workloads to complete while SPECrate2017 measures throughput per unit of time, so they gauge slightly different things.) And on power efficiency tests, AMD has swept the SPECpower_ssj2008 benchmarks and has the top ranking on all but one of the SPEC CPU 2017 energy efficiency benchmarks. This is unprecedented, but it could be the new normal for the next several generations of X86 server CPUs and maybe even across all classes of server CPUs. In many cases, the second generation Epyc 7002 series processors can beat Intel’s third generation “Ice Lake” Xeon SPs, and then the Epyc 7003s open an even larger gap. And here is the stunning thing that must have Intel fuming: AMD has now delivered better per-core performance as well as better throughput up and down the SKU stack.

Here is how the top-bin parts compare, with “Ice Lake” Xeon SPs on the left, Epyc 7002s in the center, and Epyc 7003s on the right, on the SPECrate2017 integer and floating point benchmarks and the SPECjbb2015 Java benchmark for two-socket systems:

[Chart: SPECrate2017 integer, top-bin two-socket systems]

[Chart: SPECrate2017 floating point, top-bin two-socket systems]

[Chart: SPECjbb2015, top-bin two-socket systems]

The gap between “Ice Lake” and Epyc 7002 is bad enough for these top-bin systems, but the gap between “Ice Lake” and Epyc 7003 is even larger. On the integer test, the advantage to AMD is 47.2 percent, on the floating point test it is 36.5 percent, and on the SPECjbb2015 test it is 49.8 percent.

So how does it look at a constant number of cores, say 32 cores? Still not good for Intel. Here are the SPECrate2017 tests for 32-core Epyc 7002, 32-core “Ice Lake,” and 32-core Epyc 7003 parts:

[Chart: SPECrate2017 integer, 32-core Epyc 7002 versus 32-core “Ice Lake” versus 32-core Epyc 7003]

[Chart: SPECrate2017 floating point, 32-core Epyc 7002 versus 32-core “Ice Lake” versus 32-core Epyc 7003]

The “Ice Lake” core has a tiny bit more oomph than the Epyc 7002 core it was intended to compete against, but Intel did not get it into the field in time to do that. And the Epyc 7003 core, based on the “Zen 3” design, has quite a bit more performance than either of them: a 32-core Epyc 7003 chip can do 34.2 percent more integer work and 30.6 percent more floating point work than the 32-core “Ice Lake” chip.

Even if you scale down the Intel “Ice Lake” and AMD Epyc 7003 chips, the situation is still not great for Intel, as you can see in this comparison of integer performance on the SPECrate2017 test:

[Chart: SPECrate2017 integer, lower-bin “Ice Lake” versus Epyc 7003 SKUs]

The message here is that if Intel wants to maintain shipments of its Xeon SPs, it will have to cut CPU prices and bundle in motherboards, NICs, FPGAs, and anything else it can to try to keep the revenue stream flowing. And even if it does this, Intel’s Data Center Group margins will take a big hit, as they did in the first quarter of 2021. This is just the beginning of a potential price war and a sustained technology campaign in the X86 server CPU market.

Here is a chart that shows how the Epyc 7002 and Epyc 7003 SKU stacks compare against the most common SKUs in the Intel “Ice Lake” Xeon SP stack, which makes it easier to see the competitive positioning.

[Chart: Epyc 7002 and Epyc 7003 SKU stacks versus common “Ice Lake” Xeon SP SKUs]

“AMD purposely designed the Epyc server platform to have longevity while steadily increasing the value delivered in each generation of the Epyc family of processors,” explains Peddibhotla. “Many servers in the market will continue to support the second generation Epyc and the new third generation Epyc to co-exist together as the latest generation enhances performance per core even further and adds other core-count options to meet varying workload needs. The entry market with 8 to 16 cores will deliver great value with Epyc 7002 series with TCO-optimized volume. Per-core or high-density performance needs can be filled with the Epyc 7003. And the second generation Epyc is a great price/performance value at all available core counts.”

Intel, by contrast, is making customers move from the “Purley” platform for “Skylake” and “Cascade Lake” Xeon SPs to the “Whitley” platform for “Ice Lake” and then the “Eagle Stream” platform for the future “Sapphire Rapids” fourth generation Xeon SPs.

Although raw performance on the SPEC tests is an important thing that all enterprises consider, what they really want to know is how much more oomph they can get if they are upgrading servers that are several generations back, perhaps four years old. There is always a consolidation factor, and this one is playing out in favor of AMD:

[Chart: server consolidation, two-socket “Broadwell” Xeon E5 v4 versus Epyc 7003]

As is usually the case, it will take far fewer servers to meet the same capacity, or much more capacity will be available in the same number of physical servers. In this case, for just under 4,000 aggregate SPECrate2017 integer units of performance, you can replace 20 two-socket “Broadwell” Xeon E5 v4 servers with five Epyc 7003 series (Epyc 7763) servers to get the same performance, or install 20 of them and get 4X the performance. Assuming that the Intel “Ice Lake” and AMD Epyc 7003 servers shown above cost about the same, for the same number of servers you will get around 50 percent more performance, which means you can cut about a third of the server count to get the same performance and spend a third less money, too.
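If you want to check that consolidation arithmetic yourself, here is a minimal Python sketch. The per-server SPECrate2017 integer scores in it are assumptions back-calculated from the 20-to-5 replacement described above – roughly 200 for a two-socket “Broadwell” box and roughly 800 for a two-socket Epyc 7763 box – not published results:

```python
import math

# Assumed per-server SPECrate2017 integer scores, implied by the 20-to-5
# replacement described above (illustrative only, not published results).
broadwell_per_server = 200   # two-socket "Broadwell" Xeon E5 v4
epyc_7763_per_server = 800   # two-socket Epyc 7763

target_capacity = 20 * broadwell_per_server   # ~4,000 aggregate integer units

# Option 1: match the old capacity with as few new servers as possible.
servers_needed = math.ceil(target_capacity / epyc_7763_per_server)
print(f"Servers to match {target_capacity} units: {servers_needed}")   # 5

# Option 2: keep the same 20-server footprint and take the extra throughput.
uplift = (20 * epyc_7763_per_server) / target_capacity
print(f"Throughput at equal server count: {uplift:.1f}X")              # 4.0X
```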

You can dice and slice this a lot of different ways, of course.

Here is a deep TCO analysis over three years that shows how this might play out for 10,000 aggregate SPECrate2017 integer units of performance, tallying the cost of acquiring the machines, administering them, and paying for datacenter space, power, and cooling. It bears out what we just said above:

[Chart: three-year TCO analysis at 10,000 aggregate SPECrate2017 integer units]
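To make the shape of that kind of analysis concrete, here is a minimal three-year TCO sketch in Python. It is not AMD’s model: the only input taken from the article is the 10,000-unit capacity target, the per-server performance numbers are chosen only to roughly reflect the gap discussed above, and the prices, power draws, and rates are pure placeholders you would swap for your own quotes and datacenter costs.

```python
import math
from dataclasses import dataclass

@dataclass
class ServerModel:
    name: str
    spec_int_rate: float   # SPECrate2017 integer per two-socket server (assumed)
    price_usd: float       # acquisition cost per server (placeholder)
    power_kw: float        # average draw per server (placeholder)

def three_year_tco(model: ServerModel, target_capacity: float,
                   kwh_price: float = 0.10, pue: float = 1.5,
                   admin_per_server_per_year: float = 1_000.0,
                   space_per_server_per_year: float = 500.0) -> dict:
    """Toy three-year TCO: acquisition + admin + floor space + power/cooling."""
    servers = math.ceil(target_capacity / model.spec_int_rate)
    hours = 3 * 365 * 24
    power_cost = servers * model.power_kw * pue * hours * kwh_price  # PUE folds in cooling
    tco = (servers * model.price_usd
           + servers * admin_per_server_per_year * 3
           + servers * space_per_server_per_year * 3
           + power_cost)
    return {"servers": servers, "three_year_tco_usd": round(tco)}

# Placeholder configurations sized against a 10,000-unit integer target.
ice_lake = ServerModel("2P Ice Lake Xeon SP", 540, 25_000, 0.8)
epyc_7763 = ServerModel("2P Epyc 7763", 800, 25_000, 0.8)

for m in (ice_lake, epyc_7763):
    print(m.name, three_year_tco(m, 10_000))
```

Because the higher-performing server covers the capacity target with fewer boxes, every per-server cost line – acquisition, administration, space, power, and cooling – shrinks with it, which is the mechanism behind the chart above.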

AMD has fought a long time to get back to this position. And datacenters the world over should be grateful. We really needed some competition here.

Sponsored by AMD
