Dell Engineers An HPC Market Expansion

It is hard to say exactly how much infrastructure revenue went up for grabs when IBM sold off its System x division to Lenovo last year, but in the HPC market the numbers are not small, judging from the data of IDC and Intersect360, which track HPC revenues and installed bases in detail. Projections are tricky until 2015 is finished and all the counting is done, but easily hundreds of millions of dollars shifted away from IBM/Lenovo and largely toward two vendors: Hewlett Packard Enterprise and Dell.

These two companies, along with Cisco Systems with its enterprise-aimed blade servers, have been the main beneficiaries of the IBM/Lenovo deal on just about all server fronts, in fact.

We discussed HPE’s revamped plans to move more aggressively into the HPC and data analytics space with Bill Mannel, who took over this business after many years at SGI, concurrent with the SC15 supercomputing conference last week. While at the conference we also caught up with the new HPC team at Dell, which, ironically enough, is staffed with infrastructure people who hail from the former Hewlett-Packard. This includes Jim Ganthier, general manager of engineered solutions at Dell, and Ed Turkel, HPC strategist at Dell, as well as a slew of others who spent many years at their main rival in the systems business. And make no bones about it: Dell is throwing down the gauntlet, perhaps especially after supercomputer maker Cray was able to win the deal to install the Lonestar 5 system and HPE is building the Hikari system, both at the Texas Advanced Computing Center at the University of Texas at Austin – where Michael Dell started his company in a dorm room in 1984, and just down the road from the company’s headquarters outside of Austin.

“We see huge opportunity in HPC, and we are clearly going to go do two things that Dell can do,” says Ganthier. “The first thing we are going to do is disrupt. We all know that for the longest time, if you look at market share or opportunity, HPC in general has been relatively flat and that sounds like an opportunity for disruption. With everything else going on with our competitors, there are huge voids to be filled. Sun Tzu once said that in chaos resides opportunity, and right now in our industry there is a lot of chaos. The second thing we want to do is democratize HPC, and outside of EMEA, there is no market where Dell is not number one in compute. We have economies of scale and an ability to innovate.”

Another benefit that Dell brings to bear, says Ganthier, is that Dell is now the only vendor with a full set of HPC systems, thanks to IBM selling off its x86 servers and HP splitting into two, with workstations on one side and servers, switches, and storage on the other. The mantra at Dell is “from desktops to petaflops, from cluster to cloud.”

To attack the HPC opportunity, Dell is going to focus on some key areas first, starting with genomics, manufacturing, academic research, and oil and gas. (These are admittedly the very core markets in the HPC space, so this is not a big surprise.)

“HPC has been in the realms of governments and academia for too long, and we want to make it easier and more cost effective and remove some of the risk, complexity, time, and people from deploying HPC. The days when people were really excited to buy best-of-breed components and stand them up, then manage them, are over. We think that the next level of researchers and entrepreneurs should focus on innovation, not on infrastructure. I happen to work for a very activist shareholder, and we are going to go rifle shot, not shotgun.”

Dell plans to leverage its experience building the Stampede and Wrangler systems at TACC and the Comet system at the San Diego Supercomputer Center to win more deals that will be visible on the Top500 supercomputer rankings that come out twice a year. (Interestingly, Jay Boisseau, who ran TACC for nearly 13 years and SDSC for five years before that, is now part of Ganthier’s HPC staff at Dell, although he has not updated his LinkedIn profile yet.) Dell is a member of Intel’s Scalable Systems Framework (SSF) initiative, which brings together key compute, storage, and networking technologies from the chip maker into a cohesive whole, and it is enthusiastically behind the OpenHPC open source HPC software stack launched by Intel just ahead of the SC15 conference. With those pieces in hand, Dell is very keen on chasing the so-called “missing middle” in the manufacturing sector, where product designs could be accelerated and iterated more quickly on parallel clusters than on the single workstations these companies commonly use.

“We want to enable the next level down, the next two thousand or three thousand organizations down in the HPC pyramid so they can focus on the next genomics or design breakthrough,” says Ganthier. To show how much opportunity (or resistance or lack of education, depending on how you want to phrase it) there is outside of the big national labs and academia, Ganthier said that the good news is that 98 percent of the companies surveyed by the National Center for Manufacturing Sciences said that between now and 2020 they would be using digital tools to design their products, but the bad news is that 95 percent of those polled said they had not deployed HPC as we know it.

Parallel To Hyperscale

To a certain extent, Dell’s approach to HPC is precisely parallel to the creation of its Datacenter Scalable Solutions (DSS) unit earlier this year. Dell has been making custom servers for many of the top ten hyperscalers in the world for years through its Data Center Solutions (DCS) unit, but it created DSS to offer tailored systems, storage, and networking to the next 1,000 or so service providers and cloud builders who do not have the volumes of Microsoft or Facebook or Yahoo or Google, but who nonetheless want to engineer their machines to better fit both their applications and their facilities. The DCS business started in 2007, quickly ramped to $1 billion in sales by 2011, and has been growing from there; Dell’s goal is for DSS to be about the same size as DCS by next year and to keep growing, too.

To make HPC easier, Dell is going to be rolling out reference architectures in the key areas cited above, starting with genomics. These reference architectures show life sciences customers how to build HPC clusters aimed at their particular applications – in this case, for gene sequencing – and are about 80 percent complete, with the remaining 20 percent left open for customization for particular software stacks. Once enough customers start deploying a reference architecture, Dell will turn it into a fully preconfigured engineered solution. This may sound a bit boring, and that is precisely the intent: Dell wants to make installing an HPC cluster normal, not something that requires scads of PhDs and sysadmins. Dell already had a number of reference architectures in the field, and one of them, from two years ago, focused on genomics. This has now been updated with all of the latest Dell iron and formally productized as the HPC System for Genomics.

To be precise, this preconfigured genomics system is based on Dell’s FX2 converged systems and has 40 of Dell’s PowerEdge FC430 quarter-width server sleds in it, which deliver a total of 1,120 cores from “Haswell” Xeon E5 v3 processors with a total of 34 teraflops of double-precision number-crunching oomph. This compute is used for genome sequencing, and it has access to a 480 TB Lustre parallel file server (using Intel’s distribution of Lustre and used as scratch space for the genomics applications) and a 240 TB NFS file server that is primary storage for the operating systems, applications, and other user data. The setup also includes a four-socket, fat-memory PowerEdge R930 rack server with 1.5 TB of memory for doing genome assemblies. (Sequencing and assembly take different kinds of systems.) The genomics setup uses 56 Gb/sec FDR InfiniBand to link all the nodes to each other, and it has a bunch of login and head nodes to manage the cluster. Dell has chosen Bright Cluster Manager from Bright Computing, a popular cluster manager from the HPC realm, to control the whole shebang, and it is deploying BioBuilds from Lab7, a collection of open source bioinformatics applications for Linux systems that are wrapped up and given commercial support, as the base genomics code. Services, support, financing, and domain expertise are leveraged here, too.
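As a back-of-the-envelope check on those figures, the arithmetic works out cleanly; note that Dell did not publish the exact processor SKUs, so the per-socket core count and the implied clock speed below are our assumptions, chosen to make the published totals line up:

```python
# Sanity check of the HPC System for Genomics compute specs.
# Sled count, core total, and flops total come from Dell; the per-socket
# core count and the AVX clock are assumptions for illustration.
sleds = 40                 # PowerEdge FC430 quarter-width sleds (per Dell)
sockets_per_sled = 2       # the FC430 is a two-socket sled
cores_per_socket = 14      # assumption: 14-core Xeon E5 v3 parts
total_cores = sleds * sockets_per_sled * cores_per_socket
print(total_cores)         # 1120, matching Dell's 1,120-core figure

peak_dp_flops = 34e12      # 34 teraflops double precision (per Dell)
flops_per_core = peak_dp_flops / total_cores        # ~30.4 gigaflops/core
# Haswell can retire 16 DP flops per cycle per core with AVX2 FMA,
# so the implied sustained AVX clock is:
implied_ghz = flops_per_core / 16 / 1e9
print(round(implied_ghz, 2))  # ~1.9 GHz, plausible for an E5 v3 AVX base clock
```

The point is simply that the published core and flops totals are consistent with mainstream 14-core Haswell parts rather than exotic silicon.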


If you run this BioBuilds code on a high-end workstation to do genome sequencing and assembly, it can take three to four days to complete a genome, but on this cluster it takes only four hours. (This is data from TGen’s own implementation.) Most of the codes are not particularly parallel, so you scale the workload by running the genomes of multiple patients concurrently. These are very large datasets that are I/O intensive, so you have to be careful to keep a balance in the cluster between compute, storage, and networking as it is scaled up to do sequencing and assembly on a larger number of people.
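The speedup and throughput math here is quick to sketch; we take the midpoint of the three-to-four-day workstation figure as an assumption, and the concurrency level is purely illustrative:

```python
# Rough speedup and throughput math for the genomics cluster versus a
# workstation, using the figures cited above. The 3.5-day midpoint is an
# assumption, since the quoted workstation time is a three-to-four-day range.
workstation_hours = 3.5 * 24          # ~84 hours per genome on a workstation
cluster_hours = 4                     # 4 hours per genome on the cluster
speedup = workstation_hours / cluster_hours
print(round(speedup))                 # roughly 21X faster per genome

# Because most of the codes are not very parallel, throughput comes from
# running many patients' genomes at once rather than one genome faster.
concurrent_patients = 10              # assumed concurrency, for illustration
genomes_per_day = concurrent_patients * (24 / cluster_hours)
print(genomes_per_day)                # 60 genomes per day at that concurrency
```

This is why balanced I/O matters so much: at dozens of concurrent genomes per day, the Lustre scratch tier, not the cores, becomes the likely bottleneck.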

Dell is working on an HPC System for Manufacturing, aimed at customers who want to start with workstations and work their way into clusters; the key applications here are finite element analysis for structural analysis based on ANSYS Mechanical, and computational fluid dynamics using ANSYS Fluent or CD-adapco’s STAR-CCM+. The HPC System for Research will be aimed at academic research centers and will support a stack of software, very likely open source. These two preconfigured setups will be available in early 2016. Dell’s own brand of Omni-Path adapters and switches, dubbed the H-Series, will come to market timed with Intel’s Omni-Path rollout in the first half of 2016.

One thing that Dell does not seem to be interested in is creating its own commercially supported variant of the OpenHPC stack for HPC iron, once it becomes available. Intel will be creating its own supported version, much as it does with Lustre, and we expect that Red Hat, SUSE Linux, and possibly CentOS and Canonical will do so as well once that stack is available. The issue, says Ganthier, is that Dell would have to invest in creating a support organization for the OpenHPC stack, and it would rather let its software partners (which include the companies above) do that and just push the iron. “There are just some places where it is smarter to partner than to be 100 percent vertically integrated,” he says.

Being privately held, Dell does not talk about its revenues in general or for any specific segment, but Ganthier says that what Dell is trying to do is increase the size of the total addressable market by making HPC easier to consume. This is not, as we all know, a new idea. But it is one that has been a lot tougher to pull off than, say, generic web and file serving in the enterprise.

If you look at IDC’s data presented at its most recent SC15 breakfast session, Dell had $1.48 billion in traditional HPC revenues (including compute, storage, and switching) in 2013, and it grew a modest 2.2 percent to $1.51 billion in 2014 against a market that declined by seven-tenths of a point. Dell’s share of the HPC pie was 14.4 percent in 2013, and in 2014 it rose to 14.8 percent. In the first quarter – the latest for which IDC had data available – Dell had $426.2 million in revenues, and got 16.9 percent of the $2.52 billion pie. Without giving out a number – which he is not allowed to do – Ganthier says that based on the initial feedback for its engineered systems approach, Dell can grow even faster than that in the HPC space. And once Dell completes the $67 billion acquisition of EMC and has access to Isilon, XtremIO, and other technologies, Dell will be able to get a bigger slice and at the same time be more creative with its clusters.
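Those IDC figures can be cross-checked with a couple of lines of arithmetic; because the published revenue numbers are rounded to three digits, the computed annual growth rate comes out a touch below the quoted 2.2 percent:

```python
# Cross-check of the IDC HPC revenue figures cited above. All inputs are
# rounded published numbers, so computed percentages can differ slightly
# from IDC's own unrounded calculations.
rev_2013 = 1.48e9          # Dell traditional HPC revenue, 2013
rev_2014 = 1.51e9          # Dell traditional HPC revenue, 2014
growth = (rev_2014 - rev_2013) / rev_2013 * 100
print(round(growth, 1))    # ~2.0 percent on rounded inputs (IDC quotes 2.2)

q1_rev = 426.2e6           # Dell HPC revenue, most recent quarter
q1_market = 2.52e9         # total HPC market for that quarter
share = q1_rev / q1_market * 100
print(round(share, 1))     # 16.9 percent, matching IDC's share figure

# Implied size of the total 2014 HPC market from Dell's 14.8 percent share:
market_2014 = rev_2014 / 0.148
print(round(market_2014 / 1e9, 1))   # roughly a $10.2 billion annual market
```

That implied $10 billion-plus annual market is the pie Dell is trying to expand, not just carve up differently.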
