Sponsored Feature Over the course of two decades, Intel drove the most important architectural change in the datacenter since the advent of the System/360 mainframe in the 1960s and the rise of RISC/Unix servers in the 1980s: the creation of the general purpose X86 platform, a universal canvas upon which software developers could paint their applications from the widest possible palette of programming language colors.
This general purpose computing platform has served us well, but the architecture of systems in the datacenter is changing, mostly because the performance gains from Dennard clock and power scaling and the Moore’s Law advances in transistor performance and economics have slowed thanks to the limits of physics. To put it bluntly, X86 cores – and indeed the cores of any CPU architecture – are expensive, and the work they do is so important that they can no longer take on the many other jobs this universal X86 platform engendered, jobs that were pulled off of what would have been external appliances on PCI-Express cards or on the datacenter network: running network and storage drivers or virtualization layers like hypervisors, or performing the data compression and encryption that speeds up and secures the movement of data within the datacenter and across server nodes running distributed applications.
The move to what Intel calls infrastructure processing units, or IPUs, started with the bump-in-the-wire FPGA acceleration that many financial services organizations used for high speed trading and other applications. Commonly performed data transformations could be put into programmable logic, unmodified, rather than encoded in a high level language running on the CPU, which would perform the task much more slowly.
Before long, SmartNICs emerged with all kinds of network and storage acceleration, meaning that routines, algorithms, and protocols that might otherwise have run on the CPU in conjunction with the operating system were put on the server network interface card instead. These SmartNICs evolved and eventually could run a whole Open vSwitch virtual switch for hypervisors. But they did not have their own general purpose compute cores for running a broad set of software-defined functions, just hardened accelerators for specific functions.
With the IPU, this offload model has been taken up another notch. Sophisticated networking and computation are being put into a server’s network controller, making it a system in its own right – one that can run a full hypervisor and virtual switch, provide virtualized network and storage access, perform data compression and encryption, and execute myriad other programmatic routines atop a native packet processing engine, taking such work out of both the servers and the switching infrastructure. The IPU has general purpose cores and a full operating system, and is, in a sense, an adjunct server in its own right.
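The division of labor described above can be sketched conceptually. This is a minimal illustration of the hybrid model, not any Intel API: the task names and the two-way split are assumptions drawn from the article's own list of offloaded functions.

```python
# Conceptual sketch of the CPU/IPU division of labor described above.
# The task lists and function names are illustrative, not an Intel interface.

HOST_TASKS = {"operating system", "application"}
IPU_TASKS = {
    "hypervisor and virtual switch",
    "virtualized network access",
    "virtualized storage access",
    "data compression",
    "data encryption",
    "packet processing",
}

def placement(task: str) -> str:
    """Return where a given task runs in the hybrid CPU-IPU model."""
    if task in IPU_TASKS:
        return "IPU"
    if task in HOST_TASKS:
        return "host CPU"
    return "unknown"

for task in sorted(HOST_TASKS | IPU_TASKS):
    print(f"{task:32s} -> {placement(task)}")
```

The point of the sketch is the shape of the split: everything infrastructure-related lands on the IPU side, leaving the host CPUs with only the operating system and the application.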
With the IPU, the server is back to doing what it was designed to do: run an operating system and an application, and not much else. And someday the server may not even run a full-blown operating system, providing perfect isolation between workloads running on the CPUs – very likely sometimes with their own GPU or FPGA or neuromorphic adjuncts – and the entire stack of control planes running on the IPUs.
The important side effect of this hybrid CPU-IPU architecture is that all of the infrastructure control for a cloud lives on the IPU while all of the resources for the tenants of that cloud are isolated over on the CPUs. This is important for delivering secure multitenancy on clouds.
The Evolution From NIC To IPU
Most companies are not ready to make the leap from a server doing just about everything to an IPU helping the server do most things except run applications. So Intel still sells plain vanilla NICs, SmartNICs, and IPUs that have server-class CPUs for compute and either FPGAs or custom ASICs to provide programmability and substantial compute horsepower.
Here is what Intel’s evolution from NICs to IPUs in the past five years has looked like:
The “Creek” line of devices are SmartNICs, the “Spring Canyon” line of devices are FPGA-accelerated IPUs, and the “Mount” line of devices will be ASIC-accelerated IPUs. The first of these latter devices, called “Mount Evans,” is a very sophisticated IPU that was co-developed with Google and that will be commercialized by Intel so other companies can benefit from its substantial capabilities. (We drilled down into the Mount Evans architecture, as well as into the “Arrow Creek” SmartNIC and the “Oak Springs Canyon” IPU when they were unveiled last August.)
As usual, the hyperscalers and cloud builders are leading the way with SmartNICs and IPUs, each for their own purposes and driven by the technical needs and economics of their specific workloads. And the telecom and other service providers will be right behind them, followed by large enterprises that want the benefits of SmartNICs and IPUs for their private clouds.
Companies don’t invest in point products, they invest in roadmaps that plot out the deliveries of successive products. That is why Intel rolled out a roadmap reaching to 2026 for its IPU lineup during its Intel Vision event last week, which we covered in detail but which we will add in here to remind everyone that there is a two-year cadence to IPU product deliveries into the foreseeable future.
Companies also don’t buy just hardware, they buy software environments that allow these devices to be used for many functions without having to create that functionality wholly by themselves. Hence the advent of the Infrastructure Programmer Development Kit (IPDK), which you see prominently on the roadmap above as a companion to the Data Plane Development Kit (DPDK) and the Storage Performance Development Kit (SPDK) previously championed by Intel and adopted widely by the IT industry. All three of these are open source communities, which is a key factor in their rapid evolution and adoption. Without software – and specifically development kits that allow organizations to create their own custom infrastructure software or buy it from third parties – this is all just pretty hardware architecture for the history books.
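The value of a common development kit is "write once, target many backends." As a loose conceptual analogy – these class and target names are invented for illustration and are not the actual IPDK, DPDK, or SPDK interfaces – a portable kit separates the definition of an infrastructure pipeline from the hardware target it is compiled for:

```python
# Illustrative sketch of the "one pipeline, many targets" idea behind a
# portable development kit. All names here are assumptions, not real APIs.
from dataclasses import dataclass, field

@dataclass
class PipelineRule:
    match: str    # e.g. a destination subnet
    action: str   # e.g. "forward:port0" or "drop"

@dataclass
class Pipeline:
    name: str
    rules: list = field(default_factory=list)

    def add_rule(self, match: str, action: str) -> None:
        self.rules.append(PipelineRule(match, action))

class Target:
    """Base class: each backend realizes the same pipeline its own way."""
    kind = "abstract"
    def compile(self, pipeline: Pipeline) -> str:
        return f"[{self.kind}] {pipeline.name}: {len(pipeline.rules)} rules"

class AsicTarget(Target): kind = "ASIC IPU"
class FpgaTarget(Target): kind = "FPGA IPU"
class CpuTarget(Target):  kind = "CPU (software)"

# The same pipeline definition is handed to every backend unchanged.
p = Pipeline("tenant-firewall")
p.add_rule("10.0.0.0/8", "forward:port0")
p.add_rule("0.0.0.0/0", "drop")

for target in (AsicTarget(), FpgaTarget(), CpuTarget()):
    print(target.compile(p))
```

This mirrors the portability claim made for IPDK below: the pipeline author does not care whether the target is an ASIC, an FPGA, or plain CPU cores.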
“IPDK is absolutely central for us to scale the IPU business,” Patricia Kummrow, corporate vice president and general manager of the Ethernet division at Intel, tells The Next Platform. “Some customers have their own software teams and they can take a piece of hardware and they can make it sing. Other customers do not. Mount Evans is a Swiss Army knife. There is so much capability with this device, and we want people to be fully able to take advantage of all the programmability. And without a good development kit, not everyone’s going to be able to use it to the fullest. And so IPDK is critical for us to scale this out, ultimately, and our involvement with IPDK is strong for that reason.”
But, with IPDK being an open source project, other companies, including the likes of Google, the first customer for Mount Evans, can participate and share control over the future direction of software supporting Intel’s IPU roadmap. And the beauty is that IPDK will create code that works across IPUs (including non-Intel IPUs/DPUs) based on either FPGAs or custom ASICs, as well as on CPUs or programmable switching infrastructure.
The IPU idea – some call them Data Processing Units, or DPUs – is starting to take off because the right idea and the right technology are coming together at the right time, when infrastructure has to change for the reasons mentioned above.
“We have the partnership with Google and have talked about Mount Evans, and since we have been public about it, we have talked to a lot of other customers and we see a ton of interest beyond the hyperscalers and clouds,” says Kummrow. “We have IPUs at six out of eight of the biggest hyperscalers and cloud builders, and we see applicability all the way from the datacenter out to the edge, from multitenant shared to bare metal infrastructure. The fact is, IPUs let us configure datacenter platforms in a lot of different ways, and this trend towards disaggregation at the platform level, which IPUs enable, is just going to continue.”
Right now, there are four key drivers behind the move to IPUs; three of them are technical and one is economic.
“Network offload was the original use case for IPUs, and now tiered storage and diskless storage is the second strongest vector,” Kummrow explains. “Security is now the third driver, and the need for security is accelerating in the datacenter and I think it will also be a driver at the edge.”
The fourth factor driving IPU adoption is cost, and data from the hyperscaler early adopters suggests that customers can save anywhere from 30 percent to 40 percent on their server infrastructure by offloading network, storage, and security functions from the CPUs in their systems to IPUs. Hyperscalers make architectural changes to save 20 percent, and this is much larger, so that is a pretty good gauge of the enthusiasm. And so is the success that Amazon Web Services has had with its homegrown “Nitro” SmartNICs, which are evolving into IPU-like devices that support Amazon’s hypervisors. In a sense, AWS put IPU-like devices into production at scale, and now Intel is going to commercialize that idea for the other hyperscalers and cloud builders. This technology is absolutely going to trickle down to large enterprises and service providers and move over to HPC centers, too.
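The arithmetic behind a figure in that 30 to 40 percent range is easy to sketch. Assume, purely for illustration, that infrastructure tasks consume about a third of each host's cores; offloading them to IPUs then lets a fixed application load run on proportionally fewer servers. The core counts and the 33 percent infrastructure share below are assumptions, not measured hyperscaler data, and the sketch ignores the cost of the IPUs themselves:

```python
# Back-of-the-envelope sketch of the server savings from offloading
# infrastructure work to IPUs. All inputs are illustrative assumptions.
import math

def servers_needed(app_cores: int, cores_per_server: int, infra_share: float) -> int:
    """Servers required for a fixed application core demand when a
    fraction of each server's cores is burned on infrastructure tasks."""
    usable = cores_per_server * (1.0 - infra_share)
    return math.ceil(app_cores / usable)  # can't buy a fraction of a server

# Assumed fleet: 10,000 cores of application demand, 64-core servers,
# and infrastructure eating a third of each host before offload.
before = servers_needed(10_000, 64, 0.33)  # infra runs on host CPUs
after = servers_needed(10_000, 64, 0.0)    # infra offloaded to IPUs
savings = 1 - after / before
print(before, after, f"{savings:.0%}")     # → 234 157 33%
```

Under these assumed inputs the fleet shrinks by roughly a third, which is consistent with the 30 to 40 percent range the early adopters report, before netting out what the IPUs cost.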
Which means it may be just a matter of time before all servers in the datacenter have an IPU of some sort.
This content is sponsored by Intel.