Sponsored Post: CPU speed matters a great deal to compute-intensive applications like artificial intelligence (AI) and machine learning (ML), so anything that boosts the raw power of the servers that populate modern datacenters is a welcome development.
That’s the primary aim of the newly launched 4th Gen Intel Xeon Scalable processors, codenamed “Sapphire Rapids”, a product family with enough number-crunching muscle to meet some of the most demanding workload requirements around.
Intel believes the new chips deliver more built-in accelerators than any other CPU on the market, optimized not just for AI/ML but also for analytics, networking, security, storage and high performance computing (HPC). Specific instruction sets include Intel® Advanced Matrix Extensions (Intel AMX), designed to accelerate AI inference and training and estimated to deliver up to a 10x performance increase for real-time PyTorch workloads.
Adjacent accelerators include Intel In-Memory Analytics Accelerator (Intel IAA), Intel Data Streaming Accelerator (Intel DSA) and Intel QuickAssist Technology (Intel QAT), all of which offload specific algorithms and routines from the CPU cores, running those tasks faster while leaving the Xeon chip itself with more capacity for general processing.
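For readers curious whether a given server actually exposes the AMX instructions mentioned above, one simple check (a minimal sketch, not Intel's own tooling, and assuming a Linux host) is to look at the CPU feature flags the kernel reports in /proc/cpuinfo; the amx_tile, amx_bf16 and amx_int8 flag names below are those used by recent Linux kernels:

```python
# Minimal sketch: detect Intel AMX support on a Linux host by inspecting
# the CPU feature flags the kernel exposes in /proc/cpuinfo.
from pathlib import Path

# Flag names recent Linux kernels report for the AMX extensions.
AMX_FLAGS = {"amx_tile", "amx_bf16", "amx_int8"}

def amx_flags(cpuinfo_text: str) -> set:
    """Return which AMX-related flags appear in a /proc/cpuinfo dump."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            return AMX_FLAGS & set(line.split())
    return set()

if __name__ == "__main__":
    try:
        found = amx_flags(Path("/proc/cpuinfo").read_text())
    except OSError:  # not a Linux host, or /proc unavailable
        found = set()
    print("AMX flags detected:", sorted(found) or "none")
```

On a Sapphire Rapids system all three flags should appear; on older Xeons or non-Intel hardware the set will be empty, which is a quick way to confirm whether AMX-aware frameworks can use the hardware path.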
The new Xeon chips have been designed to deliver a balance of performance and total cost of ownership (TCO) that will appeal to organizations needing to run large-scale datacenter hosting and application infrastructure without breaking the bank or racking up huge electricity bills, a particularly important consideration given the ongoing energy crisis.
Compared with prior generations of Intel Xeon processors, customers of the 4th Gen Intel Xeon Scalable processors can expect an average 2.9x improvement in performance per watt on targeted workloads when using the built-in accelerators, for example. The company also cites up to a 70 watt power saving per CPU in optimized power mode with minimal performance loss, and a 52 to 66 percent lower TCO (see more detailed performance metrics for exact configurations here).
The 4th Gen Intel Xeon Scalable CPUs have been endorsed by some of the world’s largest server and supercomputing manufacturers, cloud service providers, networking equipment companies, web hosting firms, telcos and other organizations running large scale AI workloads, including AWS, Cisco, Ericsson, Google Cloud, HPE, IBM Cloud, the Los Alamos National Laboratory, Microsoft Azure, Oracle Cloud, OVHcloud, SAP and Telefónica.
And with nearly 50 targeted SKUs already available, optimized for specific customer use cases and applications as well as general purpose workloads, it’s almost certainly only a matter of time before that list expands further.
You can learn more about the 4th Gen Intel Xeon Scalable processor product family by clicking this link.
Sponsored by Intel.
Intel Legal Notices and Disclaimers.