Broadwell Xeon D Chips Give Intel Datacenter Breadth

The first Xeon processor aimed at datacenter workloads that is based on Intel’s “Broadwell” core has been launched. We have already gone through the feeds and speeds of the “Broadwell-DE” chip, which will be marketed under the Xeon D brand name, and now we want to consider the places where Intel thinks the processor might find a home in the datacenter and where customers might experiment a little given the features of the chip.

The Xeon D chips are the first system-on-chip design that Intel has created based on its Xeon processors. The company has two prior generations of SoCs based on its Atom processors, which have been tweaked to offer server features and run server variants of the Windows and Linux operating systems. While the Atom chips have given Intel something to sell against peddlers of SoCs based on other architectures such as MIPS, ARM, and PowerPC, they are still based on Atom cores, not Xeons, and some of the features that customers count on in the Xeon chips are not available in the Atom cores. The need for Xeon compatibility for some customers is why Intel has created a Xeon-based SoC.

The desire to push the core count up and thermals down in SoCs is why Intel will continue to create Atom-based SoCs, with the “Denverton” Atom chip expected to come out using the same 14 nanometer process that the Xeon D chips have been etched in. With the “Avoton” C2000 chip having eight cores, Intel could double it up to 16 cores with the Denverton, and is widely expected to move to the “Airmont” 64-bit core design. The Xeon D and Atom chips, along with a Broadwell variant of the Xeon E3 chip for single-socket servers, are all aimed at the low end of the server market as well as adjacent markets in networking, storage, and now sensor and data storage devices that are part of the Internet of Things. These three different entry X86 processors offer different types and numbers of cores as well as other features, and in the case of the SoCs, different feature sets are activated depending on whether the chips will be used in servers, in network devices, or in storage gear.

The addition of the Xeon D chips to the Intel product line makes life simpler for those equipment makers that have experience with Xeon chips and have already ported their code to them. As Raejeanne Skillern, general manager for Intel’s cloud service provider business within its Data Center Group, pointed out in a briefing with The Next Platform on the Xeon Ds, these SoCs can run Windows, unlike ARM SoCs, and they also run commercial-grade Linux from Red Hat, SUSE Linux, and Canonical rather than a development release, as is the case with Red Hat and SUSE Linux for the ARM chips they support. Canonical has support for a handful of 64-bit ARM SoCs with its latest Ubuntu Server, but these ARM chips are only now ramping up production.

The way Intel sees the situation, the Xeon E5-2600 processor used in two-socket machines is the workhorse in the datacenter, and the Xeon D-1500s will find their place in the datacenter edge, in the network edge, and as motors for gathering up and processing telemetry in myriad devices that are linked back to analytics systems as part of an IoT software stack. There will also be storage variants of the Xeon D chips, although these were not outlined in the presentation put together by Skillern. The network, storage, and IoT variants of the Xeon D will be available in the second half of this year, and the expectation is that they will be delivered in the fall. Some of the variants will have the certified longer life span and higher operating temperature ranges required by network gear and other embedded devices that live in harsher environments than are found in typical datacenters.

The Xeon D, says Skillern, is optimized for performance per watt and can be rapidly provisioned given its absolute compatibility with other Xeon processors in the datacenter. In a story that we see being told again and again, chip makers and their buyers who make devices for a living are talking about distributed intelligence through many layers of their networks and systems, so information can be processed and actions taken as close to where the information originates as possible, cutting down on the amount of data that has to be pulled back into central datacenters for processing. ARM and its partners are pushing into the datacenter in servers and switches and other devices for exactly this reason, having already gotten a piece of the chip action for all kinds of embedded and client devices that are generating the data that is saturating the networks, storage, and servers in the datacenter. The expectation is that the number of devices on the Internet generating data will grow by a factor of ten, to 50 billion machines by 2020, and that the amount of data they generate will grow tenfold as well, to 44 zettabytes over the same period. The economics of processing and storing that data, argue both Intel and ARM, will require intelligence to be distributed from the datacenter through all of the network and storage devices out to the clients.

The big opportunity for Intel and the Xeon D chips, says Skillern, is among cloud service providers and telecommunications companies.

The cloud service providers, which include hyperscale companies that run webscale applications as well as those companies that sell raw capacity or application framework services to run applications for a fee, are space and power constrained, and they are looking for ways to cram more compute and networking into less space. An SoC based on the Xeon is something that many of them would like for portions of their infrastructure.

At the telcos, about 90 percent of the network infrastructure that is installed today is still built on static, fixed function devices and the prices of these devices have not, says Skillern, benefitted from Moore’s Law scaling to the same degree that raw compute in the datacenter has. These static devices are proprietary in nature and do not usually have excess compute installed to allow for analytics to run out on the edge of their networks. Because of the sealed box nature of these devices, it also takes a long time to provision new services on the network. This, more than any other factor, is why the hyperscalers have been pushing for open, Linux-based network operating systems and a hardware architecture that allows for tweaking and tuning of switches much as we can do with open source Linux on servers today.

Of the more than 50 design wins that Intel has thus far for the Xeon Ds, only a quarter are for microservers; the other three quarters are for network, storage, and IoT devices.

Nidhi Chappell, product line manager for the Xeon D line in the Data Center Group marketing organization, put together a grid of the various workloads in the IT landscape, and showed in green the places where Intel thinks that the Xeon D SoC will play:

[Image: Intel’s grid of workloads across the IT landscape, with green squares marking where the Xeon D SoC is expected to play]

Chappell admitted that this is a relatively simplistic way of categorizing the workloads, and said that the Xeon D would naturally gravitate to the places on this grid where customers want low cost and density in their machines but still want Xeon-class performance. In the public cloud, explained Chappell, this includes lightweight, hyperscale applications such as dynamic web serving (think PHP) and memory caching (think Memcached) as well as dedicated hosting, where customers have modest workloads and yet still want a dedicated server node for their workloads. (This is more prevalent in Europe than in the United States.)

“The rest of the workloads in the public cloud as well as in enterprise, HPC, and big data are probably going to look for higher performance,” Chappell said. “And we expect that they will stick with our Xeon E5 and Xeon E7 product line.”

On the right side of that grid, in the storage and networking area, Intel has relatively small market share for its processors, whether they are Atoms or Xeons – particularly in networking. (Storage controllers for big enterprise storage arrays have by and large moved to X86 processors, excepting a few such as IBM’s DS series storage, which still uses its own Power processors.) For both storage and networking use cases as outlined in the grid, Xeon-class reliability and integrated 10 Gb/sec Ethernet networking are key. With two 10 Gb/sec network interfaces on the die, the Xeon D has five times the raw network bandwidth of the Avoton chip it replaces, which had four ports running at 1 Gb/sec.
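That 5X figure falls straight out of the port counts; here is a one-line sanity check, using only the numbers cited above:

```python
# Raw Ethernet bandwidth, from the port counts and speeds cited above.
xeon_d_gbps = 2 * 10   # two integrated 10 Gb/sec ports on the Xeon D
avoton_gbps = 4 * 1    # four 1 Gb/sec ports on the Avoton C2000
print(xeon_d_gbps / avoton_gbps)   # 5.0 -- the 5X figure
```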

It will be interesting to see if any system makers or intrepid customers that build their own gear will try to use the Xeon D chip outside of the green squares that Intel has outlined above. As we pointed out in our drilldown of the Xeon D chips, the top-end eight-core Xeon D-1540 has a peak theoretical performance of 256 gigaflops at double precision on number crunching work, and does it in a 45 watt envelope. That gives you a teraflops in a 180 watt thermal envelope for $2,324 at Intel list price. That does not include the cost or thermals for the main memory for the chip, mind you. The low-end Xeon D-1520 has four cores running at 2.2 GHz and delivers 140.8 gigaflops for $199. That gets you just shy of a teraflops at double precision using seven Xeon D-1520 chips, at a cost of $1,393. We are not suggesting that anyone will build a cluster based on these chips, but in a hybrid cluster that marries a CPU with a GPU, Xeon Phi, or FPGA, this chip might be the right one rather than a single-socket Xeon E3 or Xeon E5.
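For those who want to check our math, here is a minimal sketch of that arithmetic. It assumes the standard Broadwell figure of 16 double precision flops per core per clock (two AVX2 fused multiply-add units per core) and a 45 watt TDP for both parts; the $581 D-1540 list price is back-derived from the $2,324 teraflops figure above, not taken from an Intel price list.

```python
# Peak double precision math behind the figures above, assuming 16 DP
# flops per core per clock (two AVX2 FMA units, as on the Broadwell core).
# The $581 D-1540 price is implied by the $2,324-for-a-teraflops figure;
# the 45 watt TDP for the D-1520 is an assumption, matching the D-1540.
parts = {
    "Xeon D-1540": {"cores": 8, "ghz": 2.0, "watts": 45, "price": 581},
    "Xeon D-1520": {"cores": 4, "ghz": 2.2, "watts": 45, "price": 199},
}

for name, p in parts.items():
    gflops = p["cores"] * p["ghz"] * 16   # peak DP gigaflops per chip
    chips = round(1000 / gflops)          # nearest whole-chip teraflops
    print(f"{name}: {gflops:g} GF per chip; {chips} chips = "
          f"{chips * gflops:g} GF, {chips * p['watts']} W, "
          f"${chips * p['price']:,}")
```

Running it reproduces the numbers in the text: four D-1540s give 1,024 gigaflops in 180 watts for $2,324, while seven D-1520s land at 985.6 gigaflops in 315 watts for $1,393.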
