As we roll into 2020, it’s shaping up to be a banner year for cloud-based high performance computing. As we recently reported, market research specialist Hyperion Research thinks we hit a tipping point in 2019 with cloud spending for high performance computing, forecasting a compound annual growth rate of 24.6 percent over the next five years. That means that in 2020, HPC cloud revenue should top $4 billion on its way to more than $7 billion by 2023. As the Hyperion researchers noted, that represents a huge jump from just a couple of years ago, when HPC cloud growth was stuck at under 10 percent.
Likewise, Intersect360 Research is forecasting cloud to remain the highest growth product and service segment for HPC. Although Intersect360 tracks the numbers differently from Hyperion, that market researcher reported HPC cloud revenue jumped 38.3 percent in 2017, slowing to 16.0 percent in 2018, when spending reached $1.2 billion. The projection is that the annual total will exceed $2 billion in 2021 and $3 billion in 2023.
Although the new growth trajectories were something of a surprise, there had been indications in the market for some time that HPC cloud spending was heating up. Among other things, that included a recent survey conducted on behalf of ANSYS that pointed to cloud usage for engineering simulation more than doubling over the next year. Agility and, somewhat surprisingly, cost were the top two justifications for increasing cloud spending.
Increased usage is not confined to commercial environments. In a very recent example, the Clemson University School of Computing set a new record for HPC in the cloud, using 2.14 million virtual CPUs for a large-scale data analytics application. In this case, the application involved visually counting cars on the interstate highway network in the southeast United States, using two million hours of video. The anticipated use case for such an application is traffic management in emergency situations, such as hurricane evacuations.
There are a number of factors driving all this growth, but the increased incorporation of HPC-specific cloud hardware, workflow management tools, and container technology has made clouds a lot more attractive to practitioners over the last few years. Microsoft has been particularly enthusiastic about adding high performance computing capabilities to its Azure offering, which includes such goodies as InfiniBand for low-latency networking and CycleCloud, a cluster management toolset the company built using the IP and expertise that came with the Cycle Computing acquisition.
For the ultimate HPC experience in the cloud, Microsoft has folded dedicated Cray systems into its Azure service, making it possible for customers to run their applications on XC supercomputers (or CS Storm clusters). The jury is still out on the utility of renting such high-end machinery, but it does offer a potentially useful option for customers with the occasional need for more purpose-built HPC capability.
More generally, public clouds now offer some of the most powerful processors and accelerators on the market, often even before they become available from system manufacturers. Thanks to their market clout for buying in bulk, hyperscalers often get first crack at the latest and greatest chips rolling out of the fabs. For performance-minded users that want to try out AMD’s new “Rome” Epyc chips or are looking to give Intel’s “Cascade Lake” Xeon SP processors a spin, it’s a lot less complicated to do this kind of tire-kicking in the cloud than to invest in an on-premise trial.
In fact, an HPC user may even have different applications that are better suited to one platform or the other. The ability of clouds to offer both options means users can avoid any such hard choices.
All of that goes double for GPUs. Although new graphics processors are not introduced at the same rate as CPUs, GPUs are an even riskier proposition for on-premise setups due to their higher cost and special software requirements. Now that the market forces driving AI have shrink-wrapped GPUs into cloud instances, complete with integrated software stacks, renting these chips from service providers has become a lot more practical.
All of this bodes well for increased cloud spending for these kinds of performance-demanding applications in 2020 and beyond. Certainly, if Hyperion’s spending forecast for HPC in the cloud comes to pass, the market is in for a significant growth spurt. Keep in mind, though, that Hyperion’s 24.6 percent CAGR is applied to today’s small base of cloud spending. So even the projected $7.4 billion of cloud revenue expected by 2023 is just a modest-sized piece of the $44 billion spend anticipated for the total HPC ecosystem. Servers will continue to be the biggest item, at $20 billion, and even storage revenue, at $7.8 billion, represents a larger chunk.
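For readers who want to check the arithmetic, a CAGR projection is just compound growth applied to a base-year figure. The sketch below uses a roughly $4 billion 2020 base and the 24.6 percent rate cited above; note these are approximations of Hyperion's inputs, so the result lands near, rather than exactly at, the $7.4 billion 2023 forecast.

```python
# Sketch of a compound annual growth rate (CAGR) projection.
# The base figure and horizon approximate the Hyperion numbers cited above.

def project(base: float, cagr: float, years: int) -> float:
    """Project a revenue figure forward at a fixed annual growth rate."""
    return base * (1 + cagr) ** years

# ~$4B HPC cloud revenue in 2020, compounding at 24.6 percent per year
for offset in range(4):
    revenue = project(4.0, 0.246, offset)
    print(f"{2020 + offset}: ${revenue:.2f}B")
```

Running this prints a figure for 2023 in the high-$7 billion range, consistent with (if slightly above) the forecast, since the published base and rate are rounded.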
But if the trend continues, the bifurcation of the market between on-premise hardware and cloud services will become much more apparent. And that will almost certainly change the dynamics of HPC spending across the ecosystem in ways that we have yet to imagine.