HPE Takes On The High End With SGI Expertise

SGI has always had scalable technology that should have been deployed more broadly in the academic, government, and enterprise datacenters of the world. But fighting for those budget dollars at the high end of the market always came down to needing more feet on the street, a larger global footprint for service and support, and broader certification of software stacks to exploit that SGI iron.

Now that Hewlett Packard Enterprise owns SGI – or more precisely, owns its operations in the United States, with the acquisition of the remaining international units, announced in August, expected to close sometime in the middle of next year – the upstart supercomputer maker will finally have the reach that it has craved for so long, and we will find out for sure just how much appetite there is for the engineering that SGI has done.

HPE, being a public company, is not about to make financial projections for specific operating units going forward, but it is safe to wager that HPE can and will grow the SGI business – quite possibly to the chagrin of UV 300 resellers Dell and Cisco Systems. Both companies tapped SGI’s NUMA shared memory machines in the past year to expand their own Xeon server product lines, and neither has any desire to try to engineer fat server nodes on its own.

The managers and engineers at HPE and SGI are well aware of what assets SGI brings to the table and, because the deal only closed on November 1, have only gotten started on figuring out what will happen with their respective system and storage product lines. The integration of the two companies is a subtle one, and it requires thought. HPE is being mindful not to destroy the thing it just acquired, as so often happens in IT industry deals. HPE got quite a deal, paying only $275 million net of cash and debts for SGI, which generated $532.9 million in revenues in its last fiscal year, which ended in June. As we pointed out in our coverage of the deal, $394.8 million of SGI’s revenues came from products such as the ICE XA clusters, UV shared memory systems, and InfiniteStorage parallel file systems, while $138.1 million came from services. Here’s the juicy bit: Services gross margins were 42.7 percent for the year, compared to 21.7 percent for products. That is a really good services business, and worth whatever multiple HPE paid to get its hands on 1,100 HPC and data analytics experts.
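Run those numbers for a moment – this is our back-of-the-envelope math, not HPE’s or SGI’s: 42.7 percent of $138.1 million is about $59 million in services gross profit, while 21.7 percent of $394.8 million is about $86 million from products. In other words, services threw off roughly 40 percent of SGI’s gross profit on barely a quarter of its revenue.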

“The acquisition really addresses the high end of the supercomputing space where HPE has not been very strong, even though we are the number one market share player,” Vineeth Ram, vice president of HPC and big data marketing for HP Servers, tells The Next Platform. “SGI has skills and competencies that have been proven over decades, and we are keen on the UV and in-memory technologies in particular, and how they blend these into HPC use cases. They have unique services expertise, too, and their engagement model is built on advisory and support services and a software stack that is proven across very large scale environments. These are all very strong capabilities that complement where we are because we are not at the high end of this market.”

There is no question that SGI gives HPE breadth in HPC and related in-memory and data analytics workloads. Going forward, it also gives HPE a bridge between how computing is done now, on clusters and big NUMA machines, and how it envisions computing will be done with The Machine, a memory-centric, distributed computing system that couples commodity processors, memristors, and silicon photonics to create a different kind of system – one that, incidentally, absolutely will have applicability in the HPC arena. HPE does not have a big presence in Japan, but SGI does among the HPC crowd there, and that is worth something, too.

As for precise roadmaps for the integration of HPE and SGI, Ram is playing his cards close to the chest. “Our focus is to deliver investment protection with a clear, committed roadmap with a strategy of sustained investment that they can feel comfortable that they are getting the right technologies not just for today, but in the future,” he tells us.

After the merger between the old Hewlett-Packard and Compaq a decade and a half ago and the consternation this caused among their respective customer bases (which included Digital VMS and Tandem NonStop customers as well as Compaq ProLiant shops), HPE is understandably cautious.

HPE has its own “DragonHawk” Superdome X machine, which packs sixteen Xeon E7 sockets into a single system image with a maximum of 16 TB of memory, plus the “Kraken” variant of this machine for running SAP HANA. There is no question that HPE will sell and support this machine for a long time to come, as it has done with OpenVMS, HP-UX, and MPE systems in the wake of that Compaq merger so many years ago. That said, we think that the SGI UV 300 has a more flexible architecture and, thanks to the NUMALink 7 interconnect, also scales further. Because of this, we think that, in the long run, HPE will replace the Superdome X in its lineup with the UV 300 and its kicker, the UV 400.

We are pretty certain that HPE will not port its HP-UX Unix variant to the UV 300s, particularly since the company did not port HP-UX to the Superdome X machines, either. Rather, HPE dubbed Linux and Windows Server the logical successors to HP-UX and left it at that. HP-UX is stuck on Itanium-based Superdome Integrity machines, which have not been updated in years. Frankly, we would be surprised if Intel ever puts another Itanium processor into the field, even though it has promised it will with the future “Kittson” chip. There is simply very little demand for it, and even if it does ship, the Kittson as delivered will be a tweak of the currently shipping “Poulson” Itanium 9500s, not a genuinely new chip.

Given all of this, it is reasonable to suggest that what HPE did was buy a more scalable NUMA system than it had created on its own – one that was already using “Haswell” and “Broadwell” generation Xeon E7 processors with the NUMALink 7 interconnect and that was being tweaked to support the future “Skylake” Xeons due in the middle of next year.
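For a sense of what that single system image looks like from the software side, here is a minimal sketch – our illustration, assuming a Linux box with the standard libnuma library installed – that walks the NUMA topology the kernel exposes and places one buffer on a specific node, the kind of explicit locality control that matters when some memory sits several NUMALink hops away. On a UV-class machine, the node count would run into the dozens.

```c
/* Minimal NUMA topology walk on Linux using libnuma (compile with
 * gcc numa_walk.c -lnuma). Purely illustrative: node counts and
 * sizes are whatever the kernel exposes on the machine at hand. */
#include <stdio.h>
#include <numa.h>

int main(void)
{
    if (numa_available() < 0) {
        fprintf(stderr, "no NUMA support on this system\n");
        return 1;
    }

    int max_node = numa_max_node();
    printf("NUMA nodes visible: %d\n", max_node + 1);

    for (int node = 0; node <= max_node; node++) {
        long long free_bytes;
        long long total = numa_node_size64(node, &free_bytes);
        printf("node %3d: %lld GB total, %lld GB free\n",
               node, total >> 30, free_bytes >> 30);
    }

    /* Pin a 1 MB buffer to node 0 -- explicit placement like this is
     * how software keeps hot data close on a big NUMA machine. */
    void *buf = numa_alloc_onnode(1 << 20, 0);
    if (buf != NULL)
        numa_free(buf, 1 << 20);

    return 0;
}
```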

Gabriel Broner, vice president and general manager of high performance computing at SGI, didn’t beat around the bush when asked about what SGI lines would continue at HPE.

“I think that what you will see is that the UV line will become the main product going forward, and we have been in development of our next-generation systems using Intel Skylake processors and now we are bringing people from both engineering teams together to work on that,” Broner tells us.

And then Broner really provides some insight: “Our thinking is that the ICE XA line will continue because it is succeeding with warm water cooling and high density, and the UV line continues. Surprisingly to me, after we had our initial discussions, we looked at our Rackable line, which is not as interesting because HPE has the volume play and better pricing when you have big volumes. But when you have to do more specialty nodes, using Nvidia Tesla GPUs or Intel Xeon Phi compute, the whole variety of machines where the cost is 90 percent CPU and the rest is 10 percent, these are not high volume products anymore and when you look at SGI, we were designed and organized around higher mix, lower volume. So it costs us less to build a product, but usually the cost per unit is a little higher, but when you analyze it, we are better off with the SGI path than the HPE path. So moving forward, we have a company, SGI, within HPE that is tailored genetically to support the upper part of the market that has fewer companies with more variety of needs, and we have HPE itself that is engineered for higher volume. The challenge for us is to keep both. You don’t want to just fold SGI into HPE and absorb all of its processes and lose that customer intimacy.”

Precisely.

HPE obviously has high hopes for the SGI product lines, and at the price it paid for SGI, the risk is minimal and the rewards could be great.

“The secret to SGI is that they really understand unique customer situations, where they are looking at compute intensive and data intensive workloads and understanding which nodes are best served by scale up and which ones are best supported by scale out and optimizing the whole architecture,” Ram explains. “HPE has done this with the Pittsburgh Supercomputing Center, which has in-memory computing for genomics and life sciences with Apollo 2000 and ProLiant and the whole portfolio, and there are probably a couple of others that we have done, too. But SGI has hundreds of customers where they are doing these kinds of things. So when you bring HPE and SGI together, I think we will find a whole lot more opportunities mixing HPC and in-memory capabilities. On top of that, we have a mission critical, transaction processing business that includes SAP HANA, and we will find a lot of opportunities injecting SGI technology into this business.”

Our observation is that it has been difficult for SGI to support both Linux and Windows Server on the UV line over the years, and it would be a good thing if HPE were able to help bring both of them to the platform. HPE now has a large NUMA machine, and Windows Server itself scales across large NUMA configurations, so it would be good to have Windows Server certified on the UV 300 and maybe even the UV 3000s, which scale out further but with higher latencies, just for fun. If Linux and Windows Server are the future, so be it. Embrace it and extend it.

To our way of thinking, HPE could use standard Intel chipsets with the future Skylake Xeons for machines with eight sockets and below, and offer UV 300s starting at four sockets with 4 TB of memory on up to 64 sockets with 64 TB. Above that, Linux and Windows Server could be certified on the UV 3000 for compute heavy jobs that scale up to 256 sockets and 64 TB of memory. (That 64 TB ceiling on both machines is presumably the physical memory limit imposed by the 46-bit addressing of current Xeons.) Done this way, you have an upgrade path for current ProLiant customers as well as for UV 300 and UV 3000 customers, plus, we presume, upgrade paths to the future Skylake-based UV 400s and UV 4000s. And SGI gets to use HPE’s money to get Windows Server certified on the UV line, probably doubling its total addressable market.

We think that the market for shared memory systems has shrunk because it is just a given these days that enterprise applications have to scale out versus scaling up. But this is not valid thinking, even if it has come to be conventional. A UV 300 or UV 3000 is really just a big, wonking workstation and it is much easier to program applications for than a cluster based on Ethernet, InfiniBand, or Omni-Path. The wonder is that more techies don’t realize this, but we are cautious about predicting a resurgence of shared memory big iron and so are HPE and SGI.
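To make that programmability point concrete, consider this minimal sketch – ours, not SGI’s – of the shared memory model: the dataset is one allocation and the parallelism is one pragma, with no domain decomposition, no MPI ranks, and no explicit communication. The array is scaled down so the example runs anywhere; on a UV, the same code would simply get a bigger array and more threads.

```c
/* Shared memory parallelism in a nutshell: every core can touch
 * every byte, so a parallel reduction is a single pragma.
 * Compile with: gcc -fopenmp sum.c */
#include <stdio.h>
#include <stdlib.h>
#include <omp.h>

int main(void)
{
    /* Stand-in for a much larger in-memory dataset. */
    size_t n = 100UL * 1000 * 1000;
    double *data = malloc(n * sizeof *data);
    if (data == NULL)
        return 1;

    for (size_t i = 0; i < n; i++)
        data[i] = (double)(i % 1000);

    double sum = 0.0;
    /* No partitioning, no message passing: the runtime splits the
     * loop across however many cores the machine has. */
    #pragma omp parallel for reduction(+:sum)
    for (size_t i = 0; i < n; i++)
        sum += data[i];

    printf("sum = %.0f using up to %d threads\n",
           sum, omp_get_max_threads());
    free(data);
    return 0;
}
```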

“There are spaces where UV is winning, where we can show a clear value proposition,” says Broner. “But as you know, the commodity world has moved us to two-socket server nodes with an interconnect, and that is where many HPC workloads have moved. Once they move there and distribute the loads, it is harder to bring them back to the UV system.”

That said, for workloads that generate a lot of data and need a shared memory system for parts of the job, such as gene sequencing, it is sometimes easier to run both the distributed and in-memory portions of the workload on a shared memory machine because you avoid having to move the data. The extra cost of the big NUMA machine is more than made up for by the faster time to answer you get from having the whole shebang running on one system. Data movement is the enemy, after all.
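To put rough, illustrative numbers on that – our assumptions, not figures from HPE or SGI – moving 10 TB of sequencer output across a cluster interconnect at an effective 6 GB/sec takes 10,000 GB divided by 6 GB/sec, or roughly 1,700 seconds, which is nearly half an hour before the in-memory stage of the pipeline can even begin. On a shared memory machine, that stage simply reads the pages the previous stage just wrote, and that half hour disappears from every run.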

The one thing that could help foster a resurgence in big memory systems like the UV 300 and UV 3000 is the volume pricing that HPE can get compared to SGI and the balance sheet it brings to bear.

“One of the challenges that we had at SGI was volume,” concedes Broner. “Being the general manager of a company that is selling $30 million systems and that you need at least $30 million in cash to buy the parts for is an issue, and when you don’t have big volumes, you pay a higher price for those components. So one of the things that we are going to benefit from right away is the financial backing from HPE and the volume purchasing it has to get parts.”

This volume pricing will trickle over to the ICE XA cluster line, too. We think the ICE XA line can replace the Apollo 8000, and maybe other Apollo lines will be mothballed as well. It is hard to say. But the vast majority of SGI’s revenues come from ICE XA, and we think it is probably a larger business than HPE is doing with the Apollo lineup.

No matter what happens with the roadmaps, SGI now has the enterprise experts it has been craving to help it push into corporate datacenters, and HPE has a much richer set of HPC experts to try to take down large scale, capability class supercomputers.
