While the minimalist server processor — and the microserver concept that was based upon it — did not take over the datacenters of the world, there are still some workloads that can fit in modestly powered single-socket CPUs just fine.
That is why Intel has long created server variants of its high-end desktop CPUs aimed at modest server workloads, and the continuing need for minimalist compute among small and medium businesses means that these low-end server chips remain available for datacenters to make use of where appropriate.
The latest iteration of the minimalist server chip, announced this week by Intel, is the “Rocket Lake” Xeon E-2300, which follows on from a slew of prior low-end server chips. These include the “Skylake” Xeon E3-1200 v5 chips that launched in June 2016, the “Kaby Lake” Xeon E3-1200 v6 chips that launched in April 2017, the “Coffee Lake” Xeon E-2100s that launched in July 2018, and the “Coffee Lake-E” Xeon E-2200s that came out in 2019 as a refresh.
The Xeon E-2300 is not compatible with the Socket H4 (LGA 1151) server socket that the Coffee Lake variants employed, and instead uses the new Socket H5 (LGA 1200) to link the CPU to the motherboard. The Coffee Lake-E chips did not make use of the on-chip graphics, but on the Xeon E-2300, the Gen12 GPU on the die is enabled to drive displays — although it is not enabled for GPU compute offload, which seems like a waste of transistors. There is no good reason why the oneAPI stack that Intel is working on for offload of intense math routines to discrete Xe HPC GPUs can’t be made to see the on-die Gen12 GPU or its kickers in future chips and give it some compute work to do.
The Rocket Lake Xeon E-2300 chip took the high performance “Sunny Cove” core developed for the ten-nanometer “Ice Lake” Xeon SP processor announced in April — see our initial coverage here and an architectural deep dive there — and reimplemented the cores and uncore region in the prior 14-nanometer manufacturing process at Intel’s fabs. The resulting cores were called “Cypress Cove,” and here is what they look like:
And here is what the Rocket Lake-E die looks like:
Rocket Lake-E was not supposed to be backcast to 14 nanometers. This was a stopgap measure to get something into the field as Intel was struggling with its ten-nanometer manufacturing. Despite all of the drama and the fact that the Xeon E-2300 is larger and hotter than it might otherwise be, it is still a pretty decent microserver processor, if that is your thing. And it is dirt cheap by comparison to the “Ice Lake” Xeon SPs on a cost per unit of compute basis and in terms of absolute dollars.
The Rocket Lake-E processor has eight cores, as you can see, and each Cypress Cove core has a 48KB L1 data cache and a 32KB L1 instruction cache, plus a 512KB L2 cache. Each core also has a 2MB slice of L3 cache, and the cores and L3 cache slices are linked to each other with a ring interconnect, which was common in higher-end Xeon server processors until it was replaced by a mesh interconnect on the die with the “Skylake” Xeon SPs several years ago. Eventually, this mesh interconnect will make its way into desktop and entry server chips when the core counts get high enough — say, above 16 cores.
The Xeon E-2300 has a single DDR4 memory controller that runs at 3.2GT/s (DDR4-3200) and can support two memory channels with two DIMMs per channel, for a total of four memory sticks and a capacity of 128GB. (It is interesting to contemplate putting Optane 3D XPoint DIMMs in such a machine to extend memory capacity even further …) The SerDes on the E-2300 implements 24 lanes of I/O, 20 of which are used for PCI-Express 4.0. (It is not clear what the other four lanes are for.) The chip has an eight-lane Direct Media Interface (DMI 3.0) link out to an Intel C252 or C256 chipset. The chipset provides support for legacy I/O, including 24 lanes of PCI-Express 3.0 and a slew of USB and SATA ports, as shown in the table below:
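As a back-of-the-envelope sketch — our arithmetic, not a figure from Intel’s spec sheet — the peak theoretical bandwidth of that two-channel DDR4-3200 memory subsystem works out like this:

```python
# Peak theoretical memory bandwidth for the two-channel DDR4 controller
# described above. DDR4-3200 moves 3.2 billion transfers per second per
# channel, and each channel is 64 bits (8 bytes) wide.
transfers_per_second = 3.2e9
bytes_per_transfer = 8
channels = 2

peak_bandwidth_gbs = transfers_per_second * bytes_per_transfer * channels / 1e9
print(f"Peak memory bandwidth: {peak_bandwidth_gbs:.1f} GB/s")
```

That works out to a 51.2GB/s ceiling shared across all eight cores, and sustained bandwidth in practice lands below that theoretical peak.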
The Xeon E-2300 processors support Hyper-Threading in all but two of the chips, delivering two threads per core, and also support the Turbo Boost 2.0 automated overclocking of cores across the line. The base core speeds range from a low of 2.8GHz to a high of 3.7GHz, and Turbo Boost can push the clock speeds on a core to anywhere between 4.5GHz and 5.1GHz.
The core counts, clock speeds, L3 cache, thermal design points, and prices scale across the ten SKUs in the Xeon E-2300 line thus:
The two four-core Xeon E-2300s without Hyper-Threading set a new low bar on pricing for these microserver chips, at $209 and $189.
Speaking very generally, Intel says that the Rocket Lake-E chips deliver 17 percent more raw performance than the prior Coffee Lake-E Xeon E-2200 processors that they replace.
In terms of price/performance for the raw chips, the Xeon E-2300 processors are a little bit less expensive per unit of compute than the lowest-end, eight-core Ice Lake Xeon SP processors, and their clock speeds are considerably higher. For serial workloads that are more sensitive to clock speed than to memory capacity or memory and I/O bandwidth, these might be a better choice. Compare, for example, the 12-core Xeon SP-4310 Silver, which runs at 2.1GHz, has 18MB of L3 cache, and delivers around 4.51 units of relative performance at a cost of $501, against the top-end Xeon E-2388G, which has eight cores running at 3.2GHz with 16MB of L3 cache and delivers 4.58 units of relative performance at a cost of $539. That works out to $117.62 per unit of performance for the E-2388G compared to $111.07 for the Xeon SP-4310 Silver. It is all within spitting distance, given the caveats above.
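The dollars-per-unit-of-performance figures above can be reproduced from the quoted list prices and relative performance numbers; any tiny differences from the figures in the text come from rounding the relative performance to two decimal places here:

```python
# Price per unit of relative performance ("Nehalem" Xeon E5540 = 1.0),
# using the list prices and relative performance figures quoted above.
chips = {
    "Xeon SP-4310 Silver": {"price": 501, "rel_perf": 4.51},
    "Xeon E-2388G":        {"price": 539, "rel_perf": 4.58},
}

for name, spec in chips.items():
    cost_per_unit = spec["price"] / spec["rel_perf"]
    print(f"{name}: ${cost_per_unit:.2f} per unit of performance")
```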
The amazing thing that the table above shows is how much more performance a single-socket Xeon E-2300 server has compared to a single-socket server using the “Nehalem” Xeon E5540 chip — which is our benchmark for the relative performance metrics we use to gauge the oomph of Intel Xeon chips. (This relative performance takes into account core count, clock speed, and generational IPC improvements.) That Nehalem Xeon E5540 chip cost $744 and delivered the baseline 1.0 performance running four cores at 2.53GHz with Hyper-Threading on. The top-end Xeon E-2388G has twice as many cores and almost twice as much IPC, and runs its clocks 26.5 percent faster to deliver that 4.58 relative performance — and the chip costs 27.6 percent less as well. And that is how price/performance has improved by a factor of 6.3x between 2009 and 2021 across these two processors.
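The generational arithmetic in that comparison can be checked in a few lines, using only the core counts, clock speeds, prices, and relative performance figures quoted above:

```python
# Generational comparison: "Nehalem" Xeon E5540 (2009) versus the top-end
# Xeon E-2388G (2021). Relative performance folds in core count, clock
# speed, and IPC gains, normalized to the E5540 = 1.0.
e5540 = {"cores": 4, "ghz": 2.53, "price": 744, "rel_perf": 1.0}
e2388g = {"cores": 8, "ghz": 3.2, "price": 539, "rel_perf": 4.58}

clock_uplift = (e2388g["ghz"] / e5540["ghz"] - 1) * 100   # ~26.5 percent faster clocks
price_drop = (1 - e2388g["price"] / e5540["price"]) * 100  # ~27.6 percent cheaper
bang_for_buck = (e5540["price"] / e5540["rel_perf"]) / (
    e2388g["price"] / e2388g["rel_perf"]
)  # ~6.3x better price/performance

print(f"Clock uplift: {clock_uplift:.1f}%")
print(f"Price drop: {price_drop:.1f}%")
print(f"Price/performance improvement: {bang_for_buck:.1f}x")
```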
Supermicro is first out the door supporting these new Rocket Lake-E server chips across its line of mainstream pizza box servers as well as within its MicroCloud and MicroBlade lines of microservers and in a bunch of motherboards that system builders can buy to create their own machines.