Different Server Workhorses For Different Workload Courses

Co-design is all the rage these days in systems design, where the hardware and software components of a system – whether it is aimed at compute, storage, or networking – are designed in tandem rather than one after the other, with each immediately shaping how the other is ultimately crafted. It is a smart idea that wrings the maximum amount of performance out of a system for very specific workloads.

The era of general purpose computing, which is on the wane, brought an ever-increasing amount of capacity to bear in the datacenter at an ever-lower cost, enabling an explosion in the diversity of datasets and the systems designed to process them, and at scales that had not been possible with prior generations of technologies. But with Dennard scaling long since ended and Moore’s Law slowing down, depending on how you want to look at it, those who want to wring the most efficiency out of a workload have to tailor their hardware very precisely to their systems and application software, down to the number and type of cores on processors, the memory sticks that give them room to run, the disk and flash storage that holds data before and after processing, and the networking that scales applications up or out across multiple compute elements.

In the early days of X86 servers, all that vendors needed to do was provide an alternative to RISC or proprietary machines. They might need only a handful of machines – ones with one, two, or four sockets – in maybe two form factors: a few tower machines for SMBs and a few rack-mounted machines for enterprises operating at larger scale. Then along came blade servers, and after that modular servers, and then, with the rise of the hyperscalers, actual bespoke iron that was precisely tuned for specific ratios of compute and storage and built with a bare-bones style not seen in enterprise servers, because for hyperscalers, reducing cost and actually beating Moore’s Law price/performance improvements is the only way to stay alive.

As a result of this growing diversity of workloads, there has been a proliferation of server types at all of the OEMs and ODMs that cater to the market, and this diversity is one of the reasons why bending metal to make servers is not as profitable a business as it once was. The customers are all more sophisticated, and there is less opportunity to sell organizations things that they do not want – things that once carried respectably high profit margins. Enterprises learn from the hyperscalers. They ditch lights-out management servers and RAID controllers for block storage embedded in the machines in favor of minimalist baseboard management controllers with a software-based control plane such as Redfish, plus object storage with file and block layers that runs on generic storage servers with loads of flash and disk and lots of memory bandwidth linking to the compute. And while they cannot afford their own software and hardware engineering teams, like the Super 8 hyperscalers can, they want to get as close as they can by compelling their hardware suppliers to provide systems that match specific workloads – and are sold as such.
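To make the Redfish point concrete, here is a minimal sketch of what that software-based control plane looks like from the outside: a plain REST query against a baseboard management controller’s standard Redfish service. The BMC address and credentials below are hypothetical placeholders; the paths and properties shown are standard fields in the Redfish ComputerSystem schema.

    # A minimal sketch of polling a server through Redfish; the BMC address and
    # credentials are hypothetical placeholders.
    import requests

    BMC = "https://10.0.0.42"      # hypothetical BMC address
    AUTH = ("admin", "password")   # hypothetical credentials

    # /redfish/v1/Systems is the standard Redfish systems collection.
    systems = requests.get(f"{BMC}/redfish/v1/Systems", auth=AUTH, verify=False).json()
    for member in systems["Members"]:
        node = requests.get(f"{BMC}{member['@odata.id']}", auth=AUTH, verify=False).json()
        # PowerState, ProcessorSummary, and MemorySummary are standard properties
        # of the Redfish ComputerSystem resource.
        print(node["Id"],
              node.get("PowerState"),
              node.get("ProcessorSummary", {}).get("Count"),
              node.get("MemorySummary", {}).get("TotalSystemMemoryGiB"))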

Hence, the breadth of the product catalog at a company like Supermicro, or even at more traditional OEM suppliers like IBM, Hewlett Packard Enterprise, and Dell. They have a different server for every different kind of workload. As we reported earlier this week, IBM will be showing off its co-design efforts with its Power9 system line throughout 2018, and Dell is demonstrating its diversity with its line of machines based on AMD’s Epyc X86 server processors.

A Different Tack This Time

Back in the early 2000s, when AMD was working on its “Hammer” line of X86 processors for servers, the market was yearning for a different computing platform. At the time, Intel was still steadfast in its adherence to its two-pronged strategy, peddling 32-bit Xeons as inexpensive but memory-constrained and 64-bit Itaniums as a better computing paradigm, and refusing to create a 64-bit Xeon because it would kill any hope for Itanium. AMD not only came up with a 64-bit architecture with the Hammer designs, but it also created one that could put multiple cores on a die and scale up, along with a new memory hierarchy and the HyperTransport interconnect for lashing multiple chips together.

It took a while, but IBM jumped out with an Opteron server first, followed by Sun Microsystems with a fleet of machines in its “Galaxy” line, then HPE, then Supermicro and the ODMs, and finally Dell, which did not want to jeopardize its very tight relationship with Intel but ultimately saw the demand for Opterons among the enterprise customers who buy its standard PowerEdge machines as well as among hyperscalers looking for some architectural advantages.

Back then, to get Opterons in the door, the idea that most server makers had was to take a Xeon machine, keep as many of the components as possible the same, replace the Xeon with an Opteron, change the ending “0” in the product model number to a “5,” and call it a day. Fast forward to 2018, and for Dell at least, this doesn’t make a lot of sense, particularly given all of its experience with hyperscalers over the years.

“As we began our engagement a long time ago with AMD with regards to the Epyc chip, we realized this is very different from what is generally available on the market today,” Brian Payne, executive director of server solutions at Dell, tells The Next Platform. “We don’t want to adopt a platform strategy of clones with Opterons and Xeons in them. Rather, we wanted to go and optimize our platforms around the key differentiators, and in the case of the Epyc, that is mainly the core count, the memory scalability, and the I/O capacity. We wanted to look at what was going on in the IT industry and put together unique products that are positioned for important and emerging workloads so customers can get the results they want to achieve and could not get without these products.”

To that end, Dell is rolling out three different PowerEdge machines, all of them under the regular PowerEdge umbrella in the 14th generation, right beside the other PowerEdge machines of the same vintage that sport Intel’s “Skylake” Xeon SP chips. Here, from 30,000 feet, are the targets that Dell is aiming at:

The first machine is the PowerEdge R6415, and it is aimed at so-called edge computing, which refers to data gathering and processing that happens in a distributed manner, not inside the walls of a datacenter but out closer to where compute interfaces with various kinds of devices, be they self-driving cars, cell tower base stations, retail and distribution systems, or what have you.

While this may not be datacenter computing, the stuff that happens on the edge will feed pre-processed or summary information back into the datacenter, pushing up demand for processing there (but far less than if all the data were sent back to the datacenter proper to be chewed on and stored). Ravi Pendekanti, senior vice president of server product management and marketing at Dell, says that depending on whom you ask and what timeframe you put on it, there could be anywhere from 20 billion to 60 billion devices out on the edge generating data, and that could mean a very large edge computing tier sitting between those devices and the back-end datacenter. It could be larger than the compute in the datacenter – no one is sure. But what Pendekanti is sure of is that companies are going to put compute on the edge, and Dell wants that compute to have a PowerEdge brand on it.

The PowerEdge R6415 is a poster child for the single-socket strategy that AMD has been talking about since even before the Epyc chips were unveiled last summer. The system has a single Epyc socket, which scales up to 32 cores, and with sixteen memory slots it can cram as much as 2 TB of memory around that socket. The machine has eight drive bays that can hold 2.5-inch SATA or SAS disk or flash drives as well as NVM-Express flash drives, plus room for another two NVM-Express flash drives. If customers want fat disks instead, there is room for four 3.5-inch SATA or SAS drives. Payne thinks that the ten direct NVM-Express drives are going to be a big selling point, particularly with 128 PCI-Express 3.0 lanes coming off the Epyc processor to field the data from those flash drives – and without requiring a PCI-Express controller acting as a bridge card. The machine also supports Dell’s iDRAC server controllers and OpenManage software, should enterprises that have already invested in this control infrastructure want to bring these edge devices under the same thumb. The PowerEdge R6415 has a riser with two PCI-Express x16 slots and various LAN-on-motherboard mezzanine cards that can be equipped with dual-port Ethernet interfaces running at 1 Gb/sec or 10 Gb/sec. Microsoft Windows Server 2016, Red Hat Enterprise Linux 7.4, and SUSE Linux Enterprise Server 12 SP3 are certified on the machine; so is VMware’s ESXi 6.5 U1 hypervisor.
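To put a rough number on that lane budget: each NVM-Express drive typically takes a x4 link, so the ten drives would consume 40 of the 128 PCI-Express 3.0 lanes, leaving plenty for the riser slots and networking. Here is a back-of-the-envelope sketch; the per-device lane widths are illustrative assumptions, not Dell’s published board wiring.

    # Rough PCI-Express 3.0 lane budget for a single-socket Epyc machine; the
    # per-device lane widths are assumptions for illustration only.
    TOTAL_LANES = 128                  # lanes exposed by one Epyc socket
    devices = {
        "nvme_drives": 10 * 4,         # ten NVM-Express drives at x4 each
        "riser_slots": 2 * 16,         # two x16 riser slots
        "lom_mezzanine": 8,            # assumed x8 for the network mezzanine
    }
    used = sum(devices.values())
    print(f"Used {used} of {TOTAL_LANES} lanes; {TOTAL_LANES - used} left over")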

A base PowerEdge R6415 with an eight-core Epyc 7251 and a mere 8 GB of memory costs $2,852, which sounds cheap enough. But if you load it up as intended, with the expectation of lots of data and compute – moving up to the 32-core Epyc 7551P, putting in 512 GB of regular DDR4 memory (where it tops out because only 32 GB registered DIMM memory sticks are supported right now), and adding a mix of five 1.6 TB NVM-Express flash drives and five 2 TB disk drives – then you are talking about $34,176 after a $20,345 discount off of the $54,520 list price at Dell. That single Epyc CPU probably accounts for less than 5 percent of the system cost at street price (it is hard to tell from the Dell configurator). It is not clear when Dell will support the 64 GB and 128 GB memory sticks that would push the maximum memory capacity to 1 TB and 2 TB, respectively, but considering that memory capacity is supposed to be a key selling point, presumably that will come soon.
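The memory ceiling is just slot count times stick size: sixteen slots at 32 GB apiece is 512 GB, with 64 GB and 128 GB sticks taking that to 1 TB and 2 TB. A quick sanity check of that arithmetic:

    # Maximum memory capacity as a function of DIMM size, given sixteen slots.
    SLOTS = 16
    for dimm_gb in (32, 64, 128):
        total_gb = SLOTS * dimm_gb
        print(f"{SLOTS} x {dimm_gb} GB sticks = {total_gb} GB ({total_gb / 1024:g} TB)")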

The next Epyc machine in the lineup is the single-socket PowerEdge R7415, which comes in a 2U form factor, has a lot more room for disk and flash storage, and is – not surprisingly, given that Dell bought EMC and with it control of VMware – the preferred hybrid compute-storage server for VMware’s vSAN hyperconverged storage software. On vSAN workloads, Payne says that this Epyc box will deliver about 20 percent lower total cost of ownership than Dell’s own two-socket Skylake Xeon SP servers in as close to a like-for-like configuration as Dell can make.

The processor and memory capacity options are the same on the PowerEdge R7415 as on the PowerEdge R6415 outlined above for edge computing, but the 2U form factor allows for as many as a dozen 3.5-inch drives (SAS or SATA disk) and two dozen 2.5-inch drives (SAS, SATA, or NVM-Express flash, or SAS or SATA disk). There are two 3.5-inch drive bays in the back of the machine, too. All the same software applies as with the PowerEdge R6415.

The starting configuration with that base eight-core Epyc 7251, 8 GB of memory, and a 120 GB SATA boot drive costs $2,349, but that price is not really representative. Moving up to the 32-core Epyc 7551P plus 1 TB of low-power memory based on 64 GB sticks, a dozen 2 TB disk drives, and a dozen 1.6 TB NVM-Express flash drives jumps the price to $114,399, but Dell is discounting that by $42,432 to $71,967. This sounds like an incredible amount of money to spend on a single-socket server, but that is what all that memory and storage costs. Blame the memory makers, not AMD or Dell.

That leaves the PowerEdge R7425, which from the outside looks just like the single-socket machine above but has a motherboard with two sockets and twice as much main memory capacity. This machine, aimed at analytics databases, computational fluid dynamics, and very heavy server virtualization workloads, can support up to 1 TB of memory today using 32 GB RDIMMs and 2 TB using 64 GB LRDIMMs, and will presumably have 4 TB support coming soon. Because the NUMA interconnect between the two sockets, called Infinity Fabric, uses 64 of the lanes coming off each processor to couple them together, there are only 128 lanes left over for I/O. (Which is still a lot, mind you.) This machine has more networking options, including a mezzanine card that delivers two 25 Gb/sec ports. Other than that, the same software is certified on it.
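The lane arithmetic behind that parenthetical is simple: each Epyc exposes 128 lanes, half of which get dedicated to the inter-socket Infinity Fabric links in a two-socket design, leaving the same 128 I/O lanes as a one-socket box. A quick sketch:

    # I/O lanes in a two-socket Epyc system: each socket gives up half its 128
    # lanes to the Infinity Fabric links between the processors.
    LANES_PER_SOCKET = 128
    FABRIC_LANES_PER_SOCKET = 64
    io_lanes = 2 * (LANES_PER_SOCKET - FABRIC_LANES_PER_SOCKET)
    print(f"I/O lanes in a 2P box: {io_lanes}")   # 128, same as a 1P box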

The base configuration of the PowerEdge R7425 costs $3,819, and it has basically nothing in it, just like the other starting configurations above. With a pair of the top-bin Epyc 7601 chips, which have 32 cores running at a slightly higher speed, 1 TB of memory using 64 GB LRDIMM sticks, a dozen 2 TB disk drives, and a dozen 1.6 TB NVM-Express flash drives, this system has a list price of $126,434, but after $46,900 in discounts, Dell is knocking that down to $79,534.

One last thing: Given that the hyperscalers – notably Google and Rackspace Hosting, and we think maybe Microsoft – were strong adopters of the Opteron processors in the early days, and without telling anyone, by the way, we wondered if Dell had any bespoke machinery based on Epyc chips in the field, or if it had any plans to offer Epyc chips in the PowerEdge-C semi-custom and hyperscale-inspired iron it sells. Pendekanti bobbed and weaved a bit around the question, but said that thus far Dell has not been approached by customers who want such iron. He quickly added, though, that the company is perfectly willing to build it if someone wants it and is willing to pay for the work and bring the volumes.


1 Comment

  1. And now there are two in the x86-based server market once again, with Zen to thank for some newfound competition!

    Now let’s not let AMD get away with not explaining the status of its custom ARM K12 core design, which is no longer showing up on AMD’s roadmaps. I’d be very worried looking at that Samsung Exynos M3 custom ARM core(1) if I were AMD. So hopefully all that K12 design work is being kept, and really K12’s release was just delayed until 2018, but there is still no news from AMD. I certainly hope that AMD will not try to become an Intel (Light) and only focus on x86, as the ARM market for more than just servers is not going away.

    That Samsung Exynos M3 is a pretty damn wide superscalar design, and it looks to be even wider on the back end than even Apple’s A-series core designs. AMD needs to keep its K12 back burner lit just in case. Remember, AMD, that IBM’s Power8/Power9 chips are also RISC ISA based, just like the ARMv8-A ISA custom designs, and some maker may yet get even wider superscalar ARMv8-A custom cores on the market than that Exynos M3. K12 may even have had SMT capabilities, if some of those YouTube interviews with Jim Keller prove correct, and K12 was Keller and his team’s other project while the Zen team worked up that very successful Zen x86 CPU microarchitecture. Don’t be just another x86-based CPU company, AMD; the future is more than any single CPU ISA.

    (1) “The Samsung Exynos M3 – 6-wide Decode With 50%+ IPC Increase,” https://www.anandtech.com/show/12361/samsung-exynos-m3-architecture
