With Sapphire Rapids Launched, Gelsinger Focuses On The Future

Pat Gelsinger returned to Intel as CEO in early 2021, the same year the company was supposed to launch its much-touted fourth-generation Xeon SP processor, dubbed “Sapphire Rapids.” Almost two years later and after multiple delays, Gelsinger on Tuesday finally took the stage to announce the release of the datacenter silicon, along with its discrete GPU accelerator codenamed “Ponte Vecchio.”

Those intervening two years saw rival AMD carve deep into Intel’s still-strong but less dominant share of the datacenter CPU space with Epyc chips built atop the latest generations of its Zen microarchitecture, Arm continue to find its way into datacenter systems, and a plethora of accelerators proliferate – not only GPUs from Nvidia and AMD but also FPGAs, data processing units (DPUs), and similar chips – alongside the rise of processors purpose-built for workloads such as artificial intelligence.

But now the 4th Gen Xeon SPs are out, and Gelsinger sounded like someone who not only wanted to boast about the myriad capabilities in the new processors and the revamped architecture, but also was ready to move on from answering endless questions about the product’s delays to talking about how its launch puts Intel in a good position for the future.

“Sapphire Rapids is just the next step on that journey,” he said at the end of the hour-long event. “The Xeon roadmap is making great strides and progress and hitting the key milestones, rebuilding that execution confidence that our customers can have in Intel’s foundational technologies. We remain on track for our technology, our process technology, the bold vision we had for five nodes in four years. All of that is on track because, since our founding, Intel fundamentally believes in this pursuit of semiconductor innovation. As described by Moore’s Law, we will continue to innovate until Moore’s Law is exhausted or everything in the periodic table has been used up in that pursuit because we believe deeply in the power of technology. Today, we’ve provided a technological update for the backbone, the foundation that Xeon provides, and how critical it is as a foundation for human innovation. We are still as a company right in the thick of it. The world builds on Intel technology.”

Intel is still the behemoth in the datacenter chip market, but the industry is becoming more diverse, with AMD gaining share and the Arm collective doing the same.

However, at the event Gelsinger and other Intel executives reiterated the vendor’s deep reach into the datacenter – not only enterprise facilities, but also HPC and the cloud – and the strength of the innovation that, eventually, put Sapphire Rapids into the market. Gelsinger left much of the presentation about the 4th Gen Xeon SP – both the mainstream chips and the Max Series, which includes integrated high-bandwidth memory (HBM) for HPC workloads – and Ponte Vecchio, now known as the GPU Max Series, to Sandra Rivera, executive vice president and general manager of Intel’s Data Center and AI Group, and Lisa Spelman, corporate vice president and general manager of Xeon products.

Rivera noted that there are more than 100 million Xeon chips installed in systems today, from servers to networking systems to as-a-service infrastructure, and a host of high-profile executives from hardware, software, and cloud ecosystem partners and customers joined via taped video addresses and online discussions to praise the capabilities in the new chips, adding more heft to Gelsinger’s argument for Intel’s place in the industry. Those executives included Michael Dell, Nvidia CEO Jensen Huang, HPE president and CEO Antonio Neri, and others from such companies as Microsoft, Cisco Systems, Inspur, Supermicro, Lenovo, and Oracle.

In addition, a broad array of companies – including HPE, Cisco, and Lenovo – launched systems powered by the new chips on Tuesday, and more are expected to do the same soon.

Both Rivera and Spelman walked the audience, both at the event and online, through the key points of the new processors. A deep dive into the architecture by The Next Platform can be found here. Rivera said the architecture for Sapphire Rapids was developed to address the rapid changes in computing that are being driven by the massive amounts of data being created – and the need to collect, control, and analyze that data – the constant connectivity in the world, and increasingly complex modern workloads like AI, machine learning, and analytics. Systems need not only better performance out of their components but also greater security and power efficiency.

“Purpose-built workload acceleration is not just about adding more cores,” Rivera said. “It requires a true system-level approach in which highly optimized software is tuned to the differentiated features in our hardware in order to accelerate the most critical business workloads, including AI, networking, HPC, and security. This processor represents a paradigm shift in how businesses run their workloads and solve their computing challenges.”

She and Spelman spoke about the accelerators built into the chip, including Intel’s Advanced Matrix Extensions (AMX) for AI performance, QuickAssist Technology (QAT) for offloading encryption, decryption, and compression, the In-Memory Analytics Accelerator (IAA), Crypto Acceleration, and Trust Domain Extensions (TDX) for improved confidentiality at the virtual machine level.
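As a rough illustration of how software can tell whether some of those built-in accelerators are exposed on a given machine, the minimal Python sketch below scans the Linux /proc/cpuinfo flags for the AMX feature bits (amx_tile, amx_bf16, amx_int8). This is a hedged, illustrative check only – it assumes a Linux system that reports those flag names and is not Intel’s official detection path; device-style accelerators such as QAT and IAA are enumerated differently, through their own drivers.

```
# Minimal sketch (assumes Linux): check /proc/cpuinfo for the AMX feature flags
# reported on 4th Gen Xeon SP parts. The flag names are the ones the Linux
# kernel exposes; treat this as illustrative, not an official detection method.

def cpu_flags(path="/proc/cpuinfo"):
    """Return the set of CPU feature flags reported for the first core."""
    with open(path) as f:
        for line in f:
            if line.startswith("flags"):
                return set(line.split(":", 1)[1].split())
    return set()

def report_amx_support():
    flags = cpu_flags()
    for feature in ("amx_tile", "amx_bf16", "amx_int8"):
        state = "present" if feature in flags else "absent"
        print(f"{feature:10s}: {state}")

if __name__ == "__main__":
    report_amx_support()
```

In practice, frameworks pick these features up automatically through libraries such as oneDNN, so application developers rarely have to probe for them by hand.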

The two also outlined the improvements many of these accelerators and other features offer, ranging from a 2.9X improvement in performance-per-watt efficiency, a 52 percent to 66 percent lower total cost of ownership, and a 53 percent average performance gain in general-purpose compute to three times higher performance in data analytics and as much as a 3.7 times improvement in memory-bound HPC workloads.

Much of that comes from the integrated HBM technology in the Max Series CPUs, which addressed a significant challenge in HPC, Spelman said.

“For years, compute capacity has grown at a rate much faster than the memory bandwidth and this has led to a situation where you have workload performance that doesn’t keep up,” she said. “It’s stranded compute. We haven’t been feeding the cores enough data. This has been an obstacle to progress. It leads to wasted compute cycles, wasted energy, and cost.”

The Max Series is the only x86-based chip with integrated HBM, which not only delivers the 3.7-times performance improvement but also requires 68 percent less energy than competitive systems, Spelman said. In addition, developers can leverage the bandwidth in the new chips without having to make changes to their code.
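To make the “stranded compute” point concrete, here is a small back-of-the-envelope sketch in Python using standard roofline arithmetic: attainable performance is the lesser of peak compute and memory bandwidth times arithmetic intensity. The peak-compute and bandwidth numbers in the sketch are illustrative placeholders, not published Sapphire Rapids Max specifications.

```
# Roofline back-of-the-envelope: when a kernel's arithmetic intensity
# (FLOPs per byte moved) is low, attainable performance is capped by memory
# bandwidth rather than peak compute -- the "stranded compute" Spelman describes.
# The peak figures below are illustrative placeholders, not Intel specifications.

PEAK_FLOPS = 3.0e12      # hypothetical peak compute, FLOP/s
DDR_BW     = 0.3e12      # hypothetical DDR-only bandwidth, bytes/s
HBM_BW     = 1.0e12      # hypothetical on-package HBM bandwidth, bytes/s

def attainable(intensity_flops_per_byte, bandwidth):
    """Roofline model: min(peak compute, bandwidth * arithmetic intensity)."""
    return min(PEAK_FLOPS, bandwidth * intensity_flops_per_byte)

# A stencil-like HPC kernel often moves many bytes per floating-point operation.
kernel_intensity = 0.5   # FLOPs per byte -- memory-bound territory

for label, bw in (("DDR-only", DDR_BW), ("with HBM", HBM_BW)):
    perf = attainable(kernel_intensity, bw)
    print(f"{label:9s}: {perf / 1e12:.2f} TFLOP/s "
          f"({100 * perf / PEAK_FLOPS:.0f}% of peak)")
```

Under these assumed numbers the low-intensity kernel never reaches peak compute either way, but the higher-bandwidth memory lets it get much closer, which is the argument Intel is making for packaging HBM next to the cores.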

Jim Lujan, HPC platforms and projects program director at Los Alamos National Laboratory, spoke about the integrated HBM in a taped video interview with Intel. Lujan said the lab uses modeling and simulation for a range of jobs, the primary one being overseeing the country’s nuclear stockpile. Other tasks include epidemiology, planetary defense, and climate modeling.

“For us, we’ve done an analysis on some of our key applications,” he said. “Our applications, while they do a lot of compute, are not as restricted by compute as they are by memory. When we put together the procurement [for the Crossroads supercomputer project], we were looking for ways to improve time to insight and reduce those bottlenecks on memory bandwidth.”

The Max Series chips with HBM help do that, Lujan said. In addition, he noted that the lab’s developers have been able to easily move their existing code from legacy Xeon systems to those running Sapphire Rapids.

All of this helped drive a hopeful and expectant tone in Gelsinger’s address when he was on the stage.

“The past years have demonstrated how technology is increasingly central to every aspect of human existence and powered by silicon,” the CEO said. “Everything is becoming digital, and it truly is this magic of technology. We as a company are committed to continue to push forward that innovation, that discovery and growth. One hundred million Xeons. What an installed base, what an extraordinary platform. And working alongside our customers and partners, we see that this fourth-generation Xeon is building on that extraordinary foundation.”
