When HPC Becomes Normal

Sometimes, it seems that people are of two minds about high performance computing. They want it to be special and distinct from the rest of the IT market, and at the same time they want the distributed simulation and modeling workloads that have for decades been the most exotic things around to be so heavily democratized that they become pervasive. Democratized. Normal.

We are probably a few years off from HPC reaching this status, but this is one of the goals that the new HPC team at Dell has firmly in mind as the world’s second largest system maker sets its sights on being not only the top supplier of servers but also the dominant supplier of HPC systems. Rather than just try to take share away from incumbent HPC suppliers, Dell plans to broaden the market much as it did with enterprise servers starting three decades ago, Ed Turkel, HPC strategist at Dell and formerly the top HPC executive at Hewlett Packard Enterprise, tells The Next Platform.

“We went through a period where we were pretty much growing with the market, our share was staying pretty much constant,” Turkel told us at the recent ISC16 supercomputing conference in Frankfurt, Germany. “Now, with the much-increased investment we made starting this fiscal year, we are expecting to significantly grow the share by moving up market – we haven’t been pursuing high end systems as much as other vendors and we are certainly looking to do more of that. But the things that we are doing with our standard HPC systems and for different workloads are very much a part of broadening the market. So we want to move up in the hierarchy as well as broadening. We are looking to increase share, and our ultimate goal is to grow to be the number one HPC vendor by share over time. That is not something that is going to happen quickly. My former employer is still by share twice the size of us.”

Exactly how much revenue that represents is not clear because neither Dell nor HPE provides sales figures for HPC systems. Even when Dell was a public company, it never disclosed its revenues from HPC system sales, but the word on the street before it went private was that it had about a $500 million annual revenue stream from HPC. According to preliminary data from IDC for the first quarter of 2016, which we gave you a snapshot of here, HPE had a 35 percent market share by revenue for worldwide HPC system sales, compared to 17 percent for Dell. With the acquisition of EMC expected to close later this year, Dell’s share of the HPC space will rise as well because the storage maker has a pretty good presence in the HPC arena with its Isilon storage arrays, among other things.

Democratization Of HPC

For Turkel, making HPC normal – meaning regular organizations can afford systems large enough to run useful simulations and models – has been a life-long mission.

“When I started my career at Digital in 1980, it was when the VAX 11/780 was launched, and we used to talk about the three Ms: 1 million instructions per second, 1 million bytes of memory, and 1 million pixels for graphics. In those days, we advertised those things as being pretty high end technical computers, not quite supercomputers. But your phone has more power than that now. It is not so much about the term ‘supercomputer’ or ‘HPC’ as making this type of technology more pervasive. When you see an engineering shop with a half dozen people doing modeling and simulation that required a large-scale supercomputer ten years ago and now it is being done on something that is sitting in a closet, this is democratization.”

There are many examples of this democratization process at work, and Turkel says it is illustrative to compare the old Top 500 supercomputer lists to more recent ones. In the early 1990s, the Top 500 lists showed HPC systems concentrated in a handful of countries, mostly at large academic and government centers. Over time, particularly with the evolution of Linux clusters, HPC technologies have spread, and countries that could never before afford to play in the HPC realm are now building world-class systems, like the 1.7 petaflops Lengau supercomputer that Dell has built for the South African government. They could never have afforded to put in a big Cray or IBM or Convex vector machine.

Here is another interesting example cited by Turkel. In the 1990s, everybody was talking about grid computing, and then all of a sudden everybody stopped talking about it. Not because it ceased being important, but because everyone was doing it. It became pervasive.

This, in a nutshell, is what Dell wants to do with HPC.

“There is a natural growth upwards of the supercomputer segment, but there is a natural broadening of the market as more and more customers adopt technical computing,” says Turkel. “We see both of these trends taking place, and it comes because we are delivering more compute power, networking bandwidth, and storage capacity more affordably and based on more standard approaches. I would argue that Knights Landing is part of that, allowing us to get better performance at a lower cost.”

Part of Dell’s strategy is not only selling all of the hardware and forging strong partnerships for the systems software that runs HPC applications, but also designing flexible reference architectures aimed at manufacturing, life sciences, and research applications. Customers in these sectors of the economy do not want to patch Linux kernels or set up clusters. They want to load their applications and go.

Making It Up In Volume

This broadening that Dell is referring to – and is basing a business plan on – is reflected in our language. Back in the 1970s and 1980s, the term supercomputer meant something very precise in terms of architecture, number-crunching capability, and cost. But these days, supercomputers are but the top-end of the much broader HPC market, which scales down to departmental systems that are, relatively speaking, affordable for much smaller organizations.

“We have all seen diagrams of the usual technology adoption curves, and how it is not one curve, but multiple ones that come out and change the game,” says Turkel. “When HPC was all vector machines, it grew up its adoption curve until RISC/Unix came out, and that grew up its curve until Linux on X86 clusters took over and grew from there. There is a natural standardization that happens in those processes, and we see that evolving further here. And that is a good thing in that it broadens the adoption yet again. I do think that this trend will make HPC technology available for more users.”

As an example of this democratization – but certainly not the only example – take Intel’s Scalable System Framework for HPC, which takes Xeon and Xeon Phi compute and mixes it with Lustre storage, Omni-Path interconnect, and soon the OpenHPC systems software stack. This is analogous to the LAMP stack for web applications (Linux operating system, Apache web server, MySQL database, and PHP programming language) back in the early days of Linux adoption in the enterprise. We would argue that the move to Knights Landing and Omni-Path is not as big a jump as we saw in HPC from vector machines to federated RISC servers or from these to Linux clusters running MPI over fabrics. The question we have is what the next shift will be, and it may be that this one is not defined by hardware at all.

“Whether it is Xeons or Xeon Phis or GPUs, we have moved very heavily into a multicore world, so that I would argue that the next big change will be on the software side,” Turkel says. “I think that the challenge that we are starting to have is simply writing code that is going to run, in the case of Knights Landing, on 72 cores on a chip with four threads per core and then multiply that out by 50,000 servers or 100,000 servers. The net of all of that comes down to writing code that takes advantage of all of that and makes it usable.”
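
To make that software challenge concrete, here is a minimal sketch – our own illustration, not anything from Dell or Turkel, and with made-up problem sizes – of the hybrid MPI plus OpenMP pattern that much of this code follows today: MPI ranks spread the work across the nodes of a cluster, while OpenMP threads keep the dozens of cores and hardware threads on each chip busy.

```c
/* Hypothetical sketch: hybrid MPI + OpenMP parallel sum.
 * MPI ranks split a 1D domain across nodes; OpenMP threads
 * split each rank's slice across the cores on that node.
 * Build with something like: mpicc -fopenmp -O2 hybrid.c */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, nranks;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    const long N = 100000000;                /* placeholder problem size */
    long chunk = N / nranks;
    long lo = (long)rank * chunk;
    long hi = (rank == nranks - 1) ? N : lo + chunk;

    /* Threads on this node cooperate on the local slice. */
    double local = 0.0;
    #pragma omp parallel for reduction(+:local)
    for (long i = lo; i < hi; i++)
        local += 1.0 / (double)(i + 1);      /* stand-in for real work */

    /* Combine the per-node results across the whole machine. */
    double global = 0.0;
    MPI_Reduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum = %.6f across %d ranks, up to %d threads each\n",
               global, nranks, omp_get_max_threads());

    MPI_Finalize();
    return 0;
}
```

The pattern itself is simple; the hard part Turkel is pointing at is making loops and reductions like these scale efficiently when the thread count per chip runs into the hundreds and the rank count into the tens of thousands.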

The point is that the standardization that Dell is talking about, and that Intel is putting a tremendous amount of engineering and marketing behind these days, doesn’t get the industry any closer to exascale computing in and of itself. But it does make the overall ecosystem a little more accessible and a little more affordable, which broadens the market. And that will allow companies like Dell to work with suppliers like Intel to create supercomputers at the high end whose technologies will eventually trickle down to enterprises around the world and become about as normal as a Linux server running a database or web server is today.

At that point, decades of hard work by an entire industry will have paid off.
