HPC Spending Expands With Clouds And Data

While enterprise spending on general purpose servers has been soft in recent quarters, companies are investing more heavily in certain segments, and one of them just so happens to be high performance computing systems used for simulation, modeling, and data analytics at scale.

At the SC15 conference in Austin, Texas, Earl Joseph, program vice president for high performance computing at IDC, did his traditional early morning breakfast presentation giving the state of the market, and the good news is that the company’s prognosticators are raising their spending projections for 2015. Joseph also provided revised projections for spending in the HPC arena going out to 2019, which will be the beginning of the ramp to the exascale era.

Budgets in the HPC community can be choppy, given that so much funding for the national labs and academia comes through Federal and state governments, which are subject to the vicissitudes of economic forces at levels from local to global. All segments of the IT market, including HPC, took a hit during the Great Recession, which began at the end of 2007, but things started bouncing back in 2010 or so. Growth in IT spending overall has been modest in the past few years, with total IT spending edging above $2 trillion in 2014, up around 5 percent at a compound annual growth rate between 2012 and 2014 (inclusive). Hardware – servers, storage, and networking in the datacenter plus myriad client devices – accounted for about $1 trillion of that, with services making up about $648 billion and software comprising the other $409 billion. Services grew at 3 percent over the period and software grew at 7 percent.

If you drill down into the servers, sales grew only 1 percent a year between 2012 and 2014, hitting $54.78 billion last year. The HPC segment drove $10.2 billion in revenues, with supercomputers (meaning machines that cost over $500,000 and that typically have advanced features for scaling out workloads) driving $3.15 billion of that. Divisional machines (which cost between $250,000 and $500,000) accounted for another $1.52 billion, and departmental machines made up a $3.83 billion segment. Workgroup clusters rounded out the HPC server sales at $1.72 billion last year. Spending on HPC systems was actually down last year, as you can see in this summary table from IDC:
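As a quick sanity check on IDC's segmentation, the four HPC server classes cited above do sum back to the roughly $10.2 billion total. A minimal sketch, using the rounded dollar figures from the text:

```python
# IDC HPC server segments for 2014, in billions of US dollars,
# using the rounded figures quoted in the text above.
segments = {
    "supercomputers (> $500K)": 3.15,
    "divisional ($250K-$500K)": 1.52,
    "departmental": 3.83,
    "workgroup clusters": 1.72,
}

total = sum(segments.values())
print(f"Total HPC server spending: ${total:.2f} billion")  # ~$10.22 billion
```

The parts add to $10.22 billion, which rounds to the $10.2 billion headline number.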

[Table: IDC HPC server spending by segment]

The supercomputing segment had a growth spurt from 2010 through 2012, and cooled off starting in 2013. Sales of smaller HPC systems have had their ups and downs, but have been trending upwards over the past five years, though not in phase, so overall HPC server spending has been hovering around $10 billion. Here is what the vendor share in the supercomputer segment looks like through the first half of 2015, according to IDC:

[Chart: IDC supercomputer segment vendor share, first half of 2015]

The good news for the HPC market is that IDC is raising its forecast for spending in 2015. Back in March, the company was projecting that HPC system revenues would grow by 7 percent this year, but it is now projecting 12 percent growth. Moreover, the spending forecast out through 2019 has been bumped up, and Joseph and his team now expect spending to kiss $15.5 billion in 2019, which represents a compound annual growth rate of 8.6 percent between 2014 and 2019. The previous forecast from March of this year pegged that growth over the five-year period at 8.2 percent. Here’s the latest forecast from IDC plus the old one from March:
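The arithmetic behind that growth rate checks out. A quick sketch using the rounded numbers in the text (the $10.2 billion 2014 base and $15.5 billion 2019 target), so the implied rate differs from IDC's stated 8.6 percent by a rounding hair:

```python
base_2014 = 10.2    # HPC server spending in 2014, billions of dollars (rounded)
target_2019 = 15.5  # IDC's 2019 forecast, billions of dollars
years = 5

# CAGR implied by the two endpoints: (end / start)^(1/years) - 1
implied_cagr = (target_2019 / base_2014) ** (1 / years) - 1
print(f"Implied CAGR: {implied_cagr:.1%}")  # ~8.7%, vs IDC's stated 8.6%

# Compounding the base forward at the stated 8.6 percent lands close to target
projected = base_2014 * (1 + 0.086) ** years
print(f"2019 at 8.6% CAGR: ${projected:.1f} billion")  # ~$15.4 billion
```

The small gap between 8.6 and 8.7 percent presumably comes from IDC working with unrounded dollar figures.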

[Chart: IDC HPC server spending forecast through 2019, new versus March projections]

With the uptick in server spending, IDC will be updating its spending forecast across the broader HPC ecosystem, which includes storage, middleware, applications, and services. (We presume that networking relating to clusters of systems or to parallel file systems gets put into the respective server and storage buckets in the IDC models, since networking is not broken out separately.) Here is the current state of the broad HPC market forecast from IDC:

[Table: IDC broad HPC market forecast, including storage, middleware, applications, and services]

Note that this table does not include the updated data for server spending through 2019, which has been tweaked upwards, so be careful with this data. There are several hundred million dollars a year more in HPC server spending from 2016 through 2019 than the data above shows. Also, Joseph says that the other elements of HPC spending will be updated in a December forecast. So stay tuned.

So what is driving this growth? At first glance, it would not seem to be the high-end supercomputing segment, although to be fair, Cray has done a phenomenal job growing its revenues over the past couple of years, so it seems that others have seen their sales drop off (IBM in particular) as Cray has gained a larger slice of the high-end market. IDC does tens of thousands of surveys each year to come up with its forecasts, and Joseph said that the company had been undercounting the use of HPC systems at financial services companies, and that two points of the five-point jump in the 2015 server spending forecast were driven by a revision in its models for this sector.

Moreover, there is more spending on HPC systems used for data analytics, with the in-memory, fraud detection system used by the US Postal Service (and based on SGI UV shared memory systems) being the example that Joseph brought out to illustrate the principle. These new high performance data analytics workloads, which are a subset of HPC in the current IDC lingo, include fraud and anomaly detection, marketing, business intelligence, and a hodge-podge of workloads running at scale. (This is not business intelligence running on a few nodes in the back corner of a commercial datacenter, but software running at scale.)

These kinds of jobs running on HPC-style systems represent one of the founding premises of The Next Platform: HPC technologies aimed at national labs and academic supercomputing centers will get tweaked and deployed in commercial settings. (We also believe that technologies created by hyperscalers will see a similar tectonic drift, and have discussed many of these.)

The other big driver, explained Joseph, is that buyers of low-end HPC systems are now spending a bit more than they have in recent years, which is a sign that HPC spending in the manufacturing sector is finally coming back after the recession. (Manufacturers are the first to make cuts when the economy takes a dip, and they are often the last to start spending again once the economy recovers.)

“Big data is also creating new buyers in the market,” Joseph added in describing the expansion that is happening in HPC spending. “In the past few years you are seeing financial firms like PayPal and other companies that you have never seen before buying HPC systems.”

Oddly enough, one of the other big drivers for HPC installations might turn out to be the cloud, where companies are offloading some of their HPC workloads to get quick access to cheap and scalable compute. The cloud works great where the data sets are relatively small, the budgets are small, and the job can scale across a relatively large number of cores to run faster than it might on a smaller set of infrastructure.

“Some people are saying that cloud usage in HPC is stagnant, others say it is growing a lot, but our surveys show it is growing dramatically,” said Joseph. According to a survey that IDC did four months ago, 25.5 percent of HPC sites are using public cloud infrastructure for some of their workloads, which is up from 13.6 percent in surveys done in 2011, and customers report that 31.2 percent of their workloads have been deployed on clouds. “This has now gone beyond the experimental stage. People have figured out where it makes sense and when they should run their code in the cloud. Is it pervasive and everywhere? No. The other thing we are looking at is how it affects the HPC datacenter. Outside of very small organizations, the use of cloud is actually causing the HPC datacenter to grow faster. This is a kind of trend that we have seen before. There is a pent-up demand for running HPC codes. Almost any center you go to has jobs they would like to run, and the more jobs they start running, the more people find out how to use HPC to do more things. We are actually seeing cloud computing be additive as opposed to being diminishing.”

At some point, IDC will have to actually try to figure out how much money is being spent on public clouds running HPC workloads and similar HPC-scale data analytics workloads and add this to the current on-premises mix it is tracking.

In the first half of the year, IDC reckons that overall HPC server spending was up 12 percent, and it is now projecting that this rate will more or less hold steady for the remainder of the year. That’s an incremental $716 million in spending on infrastructure, services, and software spread around the industry. Some of this is pent-up demand in the wake of IBM selling off its System x server division to Lenovo last year, which caused a bit of a pause in spending. IBM’s business is in a lull as it transitions to OpenPower hybrid platforms and is no longer selling either x86 clusters or Power-based BlueGene systems, as you can see in the table below:

[Table: IDC overall HPC spending, 2014 versus 2015]

IDC did not divulge second or third quarter figures yet for the HPC server segment. And as we have pointed out before, you really need to look at this market over the course of a year – and then several years – because of the lumpiness and bumpiness of the deals each quarter.
