Minimalist Hyperscale Servers For The Rest Of Us

Server component and system maker Supermicro is known for being out in front when any X86 processor comes to market and often shoots the gap between ODMs, who have a relatively small number of large customers, and OEMs, who have a large number of relatively small customers.

Historically, these two customer groups have had different needs. Generally, enterprises like machines that have more certification and testing across a wider variety of hardware options and operating systems, all of which takes time and adds to the cost of the system; they also like them equipped with lights-out controllers that are compatible with the ones they have installed, which also adds some cost. And generally, because enterprises are not buying in huge volumes and are loath to go through a new server vendor qualification process, the OEMs know they can charge a little more for their iron.

But slowly, thanks in large part to the Open Compute Project founded by Facebook back in 2011 and to the widespread knowledge of the minimalist, customized iron that hyperscalers and cloud builders design and have manufactured by ODMs or the occasional OEM, this style of machine has caught on to a certain extent. And with the MegaDC line of systems that Supermicro has recently rolled out, the company is taking barebones designs and compatibility with open standards to the next level in the hopes of reaching a broader set of customers who want more of the hyperscale experience but who cannot buy machines in lots of 50,000 to 100,000.

But don’t get the wrong idea. These MegaDC systems are sold alongside the SuperServer Ultra, Twin, and Blade families of machines and are not intended to make them obsolete. At least not yet.

“We have been challenged to go after new business and new opportunities,” Vik Malyala, senior vice president at Supermicro for the past two years and before that manager of technical and product marketing for various chips used in enterprise servers at Broadcom, tells The Next Platform. “One of the biggest untapped areas was Tier 2 and Tier 3 cloud service providers and datacenter customers who do not have the scale of the hyperscalers, but they have big enough infrastructure that they need to manage in an efficient manner. Ultimately, all of them are facing the same challenge: They do not have the same tools and the scale to compete with the hyperscalers, but they do need to operate in a similar fashion.”

In the United States, these Tier 2 and Tier 3 cloud service providers and their large enterprise compatriots have anywhere from several thousand to a few tens of thousands of servers in their datacenters, and they tend to buy machines in units of five to ten racks at a time, according to Malyala. In Europe, the scale is much smaller, he adds, and the spread across geographies is much broader at times, too. And in Asia, these service providers operate on a smaller scale, buying from a few dozen to several hundred machines at a time.

For many of these customers, compute density matters, but they are not so much space constrained as they are power and cooling constrained. They tend to have only 10 kilowatts to 15 kilowatts per rack, and it is a big deal for them to get 20 kilowatts into a rack. This is nothing compared to the density of a supercomputer or some of the hyperscalers. They don’t need all of the extra features of the various SuperServer lines, and they don’t need (or want) to support a wide variety of motherboards and peripheral cards and such. They want to buy a machine that can be reconfigured in a few different ways – infrastructure server, storage server, GPU accelerated server – and not have to go through a lot of qualification, and they want to know that the product line is going to be around in the same basic form for a long time. These customers also tend to be compute heavy, so they do not need the 24 or 32 memory slots that beefier machines have; 16 slots will do.
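To put that power budget in perspective, here is some back of the envelope math; the per-node draw is an assumed figure for illustration, not a measured number for any of these machines:

```python
# Rack-level arithmetic: the power budget, not floor space, caps the node count.
# Both figures below are illustrative assumptions.
rack_budget_watts = 12_500    # midpoint of the 10 kW to 15 kW range cited above
node_draw_watts = 500         # assumed draw for a loaded two-socket node

print(rack_budget_watts // node_draw_watts)   # 25 nodes fill such a rack
```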

They also, says Malyala, increasingly want to support open management tools, coprocessors, and network interface cards, and hence the support for the OpenBMC management controller, Redfish APIs, and the OCP 3.0 NICs in the MegaDC lineup. Supermicro is looking into supporting the OpenBIOS open source BIOS on these machines eventually, but it is in the early stages now and the current machines use the AMI BIOS.
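One upshot of standardizing on OpenBMC and Redfish is that fleet tooling does not have to be vendor specific. As a rough sketch of what that looks like in practice – the BMC address and credentials here are hypothetical, and nothing in it is specific to the MegaDC machines – a basic Redfish inventory pull is just a couple of standard REST calls:

```python
# Minimal Redfish inventory sketch; the address and credentials are
# placeholders, and error handling is omitted for brevity.
import requests
import urllib3

urllib3.disable_warnings()           # BMCs typically ship self-signed certs

BMC = "https://10.0.0.42"            # hypothetical BMC address
AUTH = ("admin", "password")         # hypothetical credentials

# /redfish/v1/Systems is the DMTF-standard ComputerSystem collection.
systems = requests.get(f"{BMC}/redfish/v1/Systems", auth=AUTH, verify=False).json()

for member in systems["Members"]:
    # Each member links to one ComputerSystem resource on the BMC.
    node = requests.get(f"{BMC}{member['@odata.id']}", auth=AUTH, verify=False).json()
    print(node.get("Model"), node.get("PowerState"),
          node.get("ProcessorSummary", {}).get("Count"))
```

Because Redfish is a DMTF standard, the same script works whether the endpoint is OpenBMC or a proprietary firmware stack that implements it.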

To simplify everything, all of the components in the MegaDC line run on 12 volts, rather than the mix of 5 volt, 12 volt, and 48 volt components that are supported in the other Supermicro server lines. All of the machines have flat cables for all component interconnects, and have easy access for break/fix maintenance or upgrades – with as few screws as possible, because people are more expensive than servers.

The idea is to allow customers to get many of the benefits of OCP machines without having to commit to OCP designs and try to get in on the OCP manufacturing ecosystem, which is really designed for higher volume commitments at this point. Moreover, while Supermicro is happy to engage potential MegaDC server customers directly, it fully expects its downstream channel partners to grab the MegaDC playbook and run with it. “Anyone can buy MegaDC, there are no restrictions on it,” says Malyala. That has not been the case with other hyperscale manufacturers in the past, which have required minimum volumes even to engage.

At the moment, the MegaDC line, which was developed in conjunction with Intel, only supports the “Cascade Lake” and “Cascade Lake-R” Xeon SP processors up to the parts. But Supermicro is definitely on the AMD bandwagon, already supporting the Epyc processors in its SuperServer BigTwin, SuperBlade, and Ultra lines, and Malyala says a MegaDC variant supporting the “Rome” Epyc 7002 processors will come out shortly. When the “Ice Lake” Xeon SP processors are sampling from Intel, a variant will be created to support those motors, too.

The Super X11DPD-M25 MegaDC motherboard is designed to cover a lot of the basic workhorse jobs in the datacenter. It is an EATX (12 inch by 13 inch) board, and it is the only motherboard used across the five machines in the MegaDC line. Here are the salient features of this two-socket system:

The motherboard supports up to seven PCI-Express 3.0 slots using risers, and has enough peripheral slots to be useful for a fairly wide array of devices. There are two 25 Gb/sec Ethernet ports built into the system board, plus an OCP 3.0 daughter card slot that takes either a Broadcom controller chip with two 25 Gb/sec ports or an Intel controller chip with four 1 Gb/sec ports. Supermicro will also be adding Advanced I/O Module daughter cards to the MegaDC line that support four 10 Gb/sec ports, two 10 Gb/sec ports plus two 25 Gb/sec ports, or two 100 Gb/sec ports.

There are two servers that come in a 1U rack form factor, one aimed at compute and one aimed at storage, configured like this:

And there are three MegaDC servers that come in 2U form factors: one aimed at supporting a pair of GPU accelerators, one that has more I/O options than the 1U model but no more storage capacity, and a final one that has two fewer low profile PCI-Express 3.0 x8 slots and is called a compute variant mostly to give it a name. Here they are:

The MegaDC line will have slightly lower prices than equivalently configured machines in Supermicro’s other SuperServer lines, but this is a nominal difference based strictly on the bill of materials being a little less complex and certification and qualification being that much easier. The real cost savings, says Malyala, come from operations: the management tools and the ease of configuration drive down total cost of ownership over the life of the machine. How much savings that generates remains to be seen, and Supermicro will be watching what customers do with the machines to see how much dough they actually save.
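To see why the operational side dominates, consider a toy comparison; every figure below is an invented assumption for illustration, not a Supermicro or customer number:

```python
# Toy total cost of ownership comparison; all figures are hypothetical.
servers = 1_000
years = 5

bom_saving_per_server = 200          # assumed one-time hardware discount ($)
admin_saving_per_server_year = 150   # assumed annual opex saving per server ($)

capex_saving = servers * bom_saving_per_server
opex_saving = servers * admin_saving_per_server_year * years

print(f"capex saving: ${capex_saving:,}")   # $200,000 once
print(f"opex saving:  ${opex_saving:,}")    # $750,000 over five years
```

Under those assumptions, the one-time bill of materials discount is dwarfed by the cumulative administration savings, which is exactly the argument Malyala is making.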
