One Rack To Stack Them All

Stacking up electronics equipment in precise form factors that slide into standard racks is not a new idea, and in fact it is one that predates the modern era of computing. As is the case with any standard, the constraints it imposes bring order to the market while at the same time restricting it, and any substantial change in something as fundamental as the datacenter rack requires a pretty significant payback to justify the disruption.

Any standard also requires volume manufacturing to really take off and yield benefits, and this has certainly not happened with rack-scale architectures to date. The time is perhaps ripe to get it right, and not just for the hyperscalers, cloud builders, and HPC shops that need something a little different to get the efficiencies their business models or budgets require, but for the entire IT industry. Simply put, we need a new rack that supports higher voltages and that is suitable for everyone – not just Google and Facebook and the few companies that will try to mimic them.

Many have tried cultivating alternative – and we would argue much-needed – rack architectures. These efforts were not failures, but they have not been success stories in terms of volume adoption, either, even if they have proved the value of their engineering.

Rackable Systems, which merged with supercomputer maker SGI in the spring of 2009 at the height of the Great Recession, was an early innovator here and for that reason sold scads of systems to a nascent Amazon Web Services before it started to build its own gear. Egenera, a spinout of sorts from Goldman Sachs, was another innovator in rack-scale architectures and unified computing (the merging of servers and networks) with its BladeFrame systems, but the Great Recession knocked it out of the hardware arena, for all intents and purposes, even though it has remained in business selling its rack virtualization and system management software on other vendors’ bladed systems.

Facebook started up its Open Compute Project in April 2011, open sourcing its datacenter, server, and storage designs, and a year later it unveiled Open Rack, a wider rack with a 21-inch form factor for gear, which better fit its own needs but which did not mesh well with the 19-inch gear that has been popular in the datacenter since Compaq debuted machinery adhering to this standard in 1998. (IBM adopted the 19-inch form factor a decade earlier with its proprietary AS/400 minicomputers and their peripherals, but few remember this forward thinking on the part of Big Blue.) Hewlett-Packard Enterprise and Dell have both delivered their own high-density racks, the former hitting its apex with the Cloudline machines for hyperscalers and the Apollo line for HPC shops and the latter with the DSS 9000 and Triton systems. Google and Amazon have been designing their own racks, too, as has Intel with its Rack Scale Architecture, and Alibaba, China Telecom, Baidu, and Tencent have created their own Project Scorpio standard.

When everybody has their own standard, that means there is no standard.

Given the benefits of 48 volt power distribution to server racks, which Google discussed back in March when it joined the Open Compute Project, the deployment of uninterruptible power supply (UPS) battery backup within the rack, and the very high likelihood that the server industry will follow Google’s lead and distribute DC power to CPU, memory, and storage devices individually at their specific voltages to conserve energy, it seems that it is high time that the IT industry agree on a proper rack standard that can support the more common 240 volt input as well as the 48 volt input in standard racks. While we agree wholeheartedly with Facebook’s contention that 19-inch gear is not as efficient as 21-inch gear given the dimensions of motherboards, power supplies, and storage drives, changing this form factor is too much of a pain in the neck for the entire industry to justify.
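To see why higher-voltage distribution conserves energy, it helps to look at the resistive losses in a rack busbar. Here is a back-of-the-envelope sketch in Python; the 10 kilowatt rack load and 2 milliohm busbar resistance are illustrative assumptions of ours, not figures from Google or the OCP specs:

```python
# Back-of-the-envelope I^2*R loss comparison for 12 V versus 48 V rack
# distribution. The load power and busbar resistance below are
# illustrative assumptions, not figures from Google or the OCP specs.

def distribution_loss_watts(load_watts: float, bus_volts: float,
                            bus_resistance_ohms: float) -> float:
    """Resistive loss in the busbar: current is P/V, loss is I^2 * R."""
    current_amps = load_watts / bus_volts
    return current_amps ** 2 * bus_resistance_ohms

RACK_LOAD_W = 10_000   # assumed 10 kW rack
BUSBAR_OHMS = 0.002    # assumed 2 milliohm busbar path

for volts in (12.0, 48.0):
    loss = distribution_loss_watts(RACK_LOAD_W, volts, BUSBAR_OHMS)
    print(f"{volts:>4.0f} V bus: {loss:7.1f} W lost in distribution")

# Quadrupling the voltage cuts the current by a factor of four and the
# I^2*R loss by a factor of sixteen, which is the core physical argument
# for 48 V distribution.
```

With these assumed numbers, the 12 volt bus burns roughly 1.4 kilowatts in the copper while the 48 volt bus burns under 90 watts – the sixteen-fold difference holds regardless of the particular load and resistance chosen.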

We have moved a step in the right direction, with Google and Facebook working together on 48 volt rack designs through the Open Compute Project, and the idea is that servers will be able to work with both the Google rack design, which is shallower than a standard rack but still uses 19-inch gear, and the Open Rack used by Facebook in its own datacenters. Ironically, Facebook is looking at using co-location facilities in areas where having its own datacenters does not make economic sense, and that means it will need the 19-inch racks that most co-lo datacenters are designed for. Google itself deploys 12 volt servers and storage inside of its own 48 volt racks, with DC-to-DC converters stepping the voltage down one extra step, because it, like all IT organizations, has to support legacy equipment.
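That extra step-down for legacy gear is not free: every conversion stage multiplies into the overall efficiency of the power chain. The sketch below illustrates the idea with typical textbook per-stage efficiencies, which are our assumptions rather than Google’s published numbers:

```python
# Hedged sketch of how conversion stages stack up. The per-stage
# efficiency figures are typical textbook values, assumed for
# illustration only; they are not Google's numbers.
from functools import reduce

def chain_efficiency(stage_efficiencies):
    """Overall efficiency is the product of the per-stage efficiencies."""
    return reduce(lambda acc, eta: acc * eta, stage_efficiencies, 1.0)

native_48v = [0.98]        # assumed: one 48 V -> point-of-load step
legacy_12v = [0.96, 0.95]  # assumed: 48 V -> 12 V, then 12 V -> point of load

print(f"native 48 V gear : {chain_efficiency(native_48v):.1%}")
print(f"legacy 12 V gear : {chain_efficiency(legacy_12v):.1%}")
```

Under these assumptions, the legacy path gives up several points of efficiency per server, which, multiplied across a datacenter, is exactly the waste a native 48 volt standard is meant to eliminate.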

The shallow-depth rack specifications of the proposed Open Rack v2.0 standard, which Google has just put out for review by the Open Compute community, are the company’s first (but certainly not its last) contribution to that community. The OCP Foundation is expected to review them later this year, and it is important to note that Google’s approach allows for native 48 volt racks as well as those that support 12 volt gear in 48 volt racks. You can pore over the specs for the racks at this link, if you are so inclined.

The point we are trying to make is a simple one: With Google and Facebook working on a 48 volt standard, maybe now is a good time to work with all of the server makers to get a single 48 volt rack standard, perhaps with one variant for 21-inch gear and another, regular-depth variant for 19-inch gear. The rack should not be a control point any more than a blade enclosure should have been – and while blade servers and their follow-on modular designs certainly had a chance to become true standards, they didn’t. Blades from one vendor do not work in the enclosures of another.

But with the Open Rack, the industry has a chance to make a real rack standard that can accept all kinds of gear – including, importantly, the kind that Google has created with specific voltage drops to individual components on the motherboard. This is a system architecture that is not generally available, but should be, and perhaps could be in the next several years. A new rack standard would have its own virtues, and at the same time would drive more power-efficient servers and storage for the entire industry. Every little bit is going to help as Moore’s Law runs out of gas.

It is ironic that in a world where the software container – analogous to the shipping container that transformed the transportation sector – is becoming the standard way to deploy software, the IT industry cannot seem to get its head around hardware standards for fear of losing control. Only the users can compel this, and there has never been much appetite for that. So if the hyperscalers and HPC shops and cloud builders push for a broad rack standard that benefits them, the rest of the industry will benefit. This is, of course, how we think the IT world works when it is working best. We believe in the technology waterfall. But if Google really wants to drive a rack standard, then it should be helping even more by opening up its motherboard designs and the means by which it distributes voltage to components in its systems. This will surely have to be standardized, too, but it may be deemed too critical to be let go of right now. Whatever the case, Google does not talk about it, but it is talking more than Amazon Web Services, which has yet to join the OCP as far as we know, and Apple, which has joined but says nothing at all.
