Inside That Big Silicon Valley Hyperscale Supermicro Deal

Among the major companies that design and sell servers with their own brands, which are called original equipment manufacturers or OEMs, and those that co-design machines with customers and then make them, which are called original design manufacturers or ODMs, Supermicro stands apart. It does not fall precisely into either category. The company makes system components, like motherboards and enclosures, for those who want to build their own systems or those who want to sell systems to others, and it also makes complete systems, sold in onesies or twosies or sold by the hundreds of racks.

Supermicro is also a bellwether of sorts for what is going on in the server and storage ecosystem, and to a lesser extent (but with increasing importance) it is also a good indicator of what is going on in networking. As Charles Liang, the company’s founder and CEO, explained to The Next Platform last fall, blade servers, which have been around in the enterprise since the late 1990s and in the telco space back into the 1980s, are starting to take off, with MicroBlade hyperscale microserver sales growing by a factor of 3X to 4X each year and regular SuperBlade modular servers seeing a doubling of sales each year. All told, blades could represent 20 percent of revenues in fiscal 2017, which will end in June this year, Liang predicted when he spoke to us last September.

Liang is on to something with his MicroBlade and SuperBlade strategy, and a case in point is the revelation this week that Supermicro has closed a big deal with an unnamed hyperscaler or cloud builder for more than 30,000 of its MicroBlade servers. Supermicro did not say who the customer was, only hinting that “a technology-leading Fortune 100 company” had deployed the MicroBlades “at its Silicon Valley datacenter facility.”

Hmmm. SoftLayer, now known as the IBM Cloud, was previously the largest customer Supermicro had, but IBM has not been spending as aggressively on its server buildout as it had originally anticipated when it bought SoftLayer a few years back. Rackspace Hosting was also a very big user of Supermicro gear, but it has joined Facebook in the Open Compute effort and is focused on that now. So who is the mystery hyperscaler in the Valley that bought a lot of bladed gear from Supermicro? It is probably not Google, which is tightly integrated with ODMs in Taiwan and doesn’t have big datacenters in the region. Amazon doesn’t, either, and it similarly likes its own server designs and control over ODMs and its supply chain. Microsoft, too. So rule them out. Ditto for Twitter, which doesn’t need that capacity, and Netflix, which runs entirely on Amazon Web Services. Facebook does still have a datacenter in the Valley, but it is Apple that has one in Newark, just down the road from Supermicro’s own Fremont facility.

And since last year, Apple has not only been spreading its infrastructure across multiple clouds, hedging its bets by moving some of its capacity away from Amazon Web Services to Google and, we hear, to Microsoft Azure, but it has also been building out its own datacenter capacity, with the flagship facility in Maiden, North Carolina being augmented by new datacenters in Denmark, Nevada, and Arizona. The word we hear is that whoever the company is, it is switching from standard 1U rack servers to disaggregated MicroBlades, and if Apple were indeed retrofitting an aging datacenter in Newark, it might want to compress more compute into the same space and power envelope by changing server architectures.

The secret Supermicro customer could also be Intel itself, which is turning the D2 chip plant in its hometown of Santa Clara, which has lain dormant for years, into a datacenter. Tip of the hat to Rich Miller of Data Center Frontier, who detailed the facility back in 2015. And if you look carefully at the photos, there are rack servers in the datacenter, as expected, and Intel is claiming a PUE of 1.06, which becomes relevant in a second. (If it is Intel, that is so much less exciting than Apple, but the importance of the rack design and efficiencies is the same.)

According to sources at Supermicro, this particular company has taken delivery of 36,000 MicroBlade nodes, and interestingly, it has asked for custom racks that are 9 feet tall instead of the standard racks that stand a little more than 6 feet. This is an unconventional choice, but it allows another 50 percent of compute capacity to be packed into a given floor area in the datacenter. From what we hear, this hyperscaler is using the two-socket sleds for the MicroBlade machines, which pack 14 vertical sleds into a 3U enclosure and which have shared power, cooling, and network infrastructure. In this case, the hyperscaler is able to support two 40 Gb/sec or eight 10 Gb/sec shared uplinks out of each enclosure, and according to Supermicro, the resulting racks deliver 56 percent better compute density than the rack machines they replace.
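As a sanity check on those density claims, here is a minimal back-of-the-envelope sketch; the 1.75-inch rack unit and the 42U standard rack baseline are industry conventions we are assuming, not figures supplied by Supermicro.

```python
# Back-of-the-envelope check on the custom rack density claims.
# The 1.75-inch rack unit and the 42U standard rack are industry
# conventions, not figures supplied by Supermicro.

U_INCHES = 1.75        # one rack unit (1U) is 1.75 inches tall
ENCLOSURE_U = 3        # each MicroBlade enclosure occupies 3U
SLEDS = 14             # two-socket sleds per enclosure

std_enclosures = 42 // ENCLOSURE_U   # standard 42U rack -> 14 enclosures
tall_enclosures = 21                 # the count cited for the custom racks

tall_height_ft = tall_enclosures * ENCLOSURE_U * U_INCHES / 12
print(f"21 enclosures need {tall_height_ft:.1f} feet of rack")  # ~9.2 ft

gain = tall_enclosures / std_enclosures - 1
print(f"density gain over a 42U rack: {gain:.0%}")              # 50%
print(f"nodes per tall rack: {tall_enclosures * SLEDS}")        # 294
```

Twenty-one 3U enclosures need a shade over 9 feet of rack space, which squares with the custom rack height, and 21 enclosures against the 14 that fit in a standard 42U rack is exactly the 50 percent density gain cited above.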

The MicroBlade sleds are equipped with the latest “Broadwell” Xeon E5 v4 processors from Intel, launched last March, but the sleds are capped at a 120 watt thermal design point. We are not sure what CPUs the hyperscaler has chosen, but maxing the machine out thermally means choosing either a Xeon E5-2695 v4 with 18 cores running at 2.1 GHz or a Xeon E5-2680 v4 with 14 cores running at 2.4 GHz. Assume the cores are more important than the clocks, and this hyperscaler is able to put 21 MicroBlade enclosures in a rack, which yields 294 server nodes and 10,584 cores per rack. At 123 racks, that works out to 36,162 server nodes with over 1.3 million cores.
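A minimal sketch of that arithmetic, assuming the 18-core Xeon E5-2695 v4 option in every socket (our assumption; the actual SKU has not been confirmed):

```python
# Fleet-wide node and core counts, assuming the 18-core
# Xeon E5-2695 v4 in every socket (our assumption, not confirmed).

CORES_PER_SOCKET = 18
SOCKETS_PER_NODE = 2
NODES_PER_ENCLOSURE = 14
ENCLOSURES_PER_RACK = 21
RACKS = 123

nodes_per_rack = ENCLOSURES_PER_RACK * NODES_PER_ENCLOSURE             # 294
cores_per_rack = nodes_per_rack * SOCKETS_PER_NODE * CORES_PER_SOCKET  # 10,584

print(f"nodes: {RACKS * nodes_per_rack:,}")   # 36,162
print(f"cores: {RACKS * cores_per_rack:,}")   # 1,301,832
```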

Just to give you some perspective: this puppy would weigh in at around 44 petaflops of peak theoretical performance, and if you ran Linpack on it, it might rank as the third most powerful machine in the world on the Top 500 list of supercomputer rankings, just behind the Sunway TaihuLight, at 93 petaflops, and Tianhe-2, at 54.9 petaflops, both Chinese systems, and well ahead of Titan, at 27.1 petaflops, installed at Oak Ridge National Laboratory in the United States. You would probably have to boost the network to 100 Gb/sec Ethernet to scale that far on this workload running MPI, mind you.
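Here is where that 44 petaflop figure comes from, as a rough sketch: each Broadwell core has two AVX2 fused multiply-add units, and each unit retires four double-precision multiply-adds per clock, for 16 flops per core per cycle.

```python
# Peak theoretical flops for the Broadwell fleet sketched above.

FLOPS_PER_CYCLE = 16       # 2 FMA units x 4 doubles x 2 ops (multiply + add)
CLOCK_GHZ = 2.1            # Xeon E5-2695 v4 base clock
TOTAL_CORES = 1_301_832    # from the rack math above

peak_petaflops = TOTAL_CORES * CLOCK_GHZ * FLOPS_PER_CYCLE / 1e6
print(f"peak: {peak_petaflops:.1f} petaflops")   # ~43.7 petaflops
```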

The other interesting tidbit is that the MicroBlade setup at this hyperscaler is 86 percent more efficient in its power draw and cooling than the iron it replaces, and that, along with a lot of other engineering in the datacenter, has allowed this hyperscaler to get the power usage effectiveness (PUE) of the datacenter down to 1.06, which is about as good as it gets. This datacenter will eventually support a 35 megawatt IT power load, by the way, and the shift in iron will save it $13.2 million a year. That goes a long way toward paying for the new iron, which might have cost somewhere between $90 million and $100 million at prevailing average system prices at Supermicro. Thanks to the modular design, which allows compute, storage, and networking to be upgraded independently, the MicroBlades are expected to cut capital expenses by 45 percent to 65 percent compared to the prior iron over several upgrade cycles.
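A rough sketch of that payback math, assuming the midpoint of our $90 million to $100 million system cost estimate (that figure is our guess, not a disclosed price):

```python
# Rough payback math on the stated power savings. The $95 million
# midpoint system cost is our own estimate, not a disclosed price.

IT_LOAD_MW = 35.0
PUE = 1.06
ANNUAL_SAVINGS = 13.2e6              # dollars per year, per the deal's figures
SYSTEM_COST = (90e6 + 100e6) / 2     # midpoint of our estimate

facility_mw = IT_LOAD_MW * PUE       # total draw including cooling and power loss
overhead_mw = facility_mw - IT_LOAD_MW

print(f"facility load: {facility_mw:.1f} MW, overhead: {overhead_mw:.1f} MW")
print(f"payback from power savings alone: {SYSTEM_COST / ANNUAL_SAVINGS:.1f} years")
```

At a PUE of 1.06, the full 35 megawatt IT load costs only about 2.1 megawatts of overhead, and the $13.2 million in annual savings would, on its own, take a little over seven years to cover the iron, which is why the 45 percent to 65 percent capital expense reduction over multiple upgrade cycles matters more to the economics than the power bill.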

