Cray Supercomputers One Step Closer to Cloud Users

Supercomputer maker Cray is always looking for ways to extend its reach outside of the traditional academic and government markets where the biggest deals are often made.

From its forays into graph analytics appliances and, more recently, machine and deep learning, the company has an opportunity to exploit its long history of building some of the world’s fastest machines. That effort has expanded into new ventures that let prospective Cray users try the company’s systems, including an on-demand partnership with datacenter provider Markley and, now, placement inside Microsoft’s Azure datacenters.

For Microsoft Azure cloud users looking to bolster modeling and simulation capabilities without adding more virtual instances or building an on-prem datacenter to host an HPC system, Cray has an interesting proposition. However, it is likely not what you think. These are not on-demand systems running in Azure datacenters, but rather dedicated Cray clusters that belong to the customer and are fully managed, serviced, and, if the contract calls for it, upgraded by Cray. This is good news for those who cannot build a facility to suit a high-power-consumption cluster, or for those who may be new to managing a large parallel file system on top of the Cray Linux Environment, which is a slightly different animal for enterprises new to Cray.

Via an exclusive arrangement between Microsoft and Cray, it is now possible to have a dedicated Cray XC or CS Storm supercomputer hosted in an Azure datacenter, managed and supported by Cray, with access to the full suite of Cray tooling, including newer offerings like the Urika-XC graph analytics framework. All of this is designed to mesh with Azure tools and services, most notably Azure’s storage stack, which can feed data stored in the cloud into a customer’s simulation workloads running on the Cray systems.
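Neither company has published the mechanics of that data handoff, but in practice it would likely resemble the following minimal sketch: stage an input dataset from Azure Blob Storage onto the Cray system’s parallel file system, then hand it to a simulation job. The connection string, container, blob, file system path, and job script names here are hypothetical placeholders, and the workload manager is assumed to be Slurm.

```python
# Hypothetical sketch: pull an input dataset from Azure Blob Storage onto
# the Cray system's file system, then submit a simulation job against it.
import os
import subprocess

from azure.storage.blob import BlobServiceClient

# Assumes credentials are supplied via an environment variable.
service = BlobServiceClient.from_connection_string(
    os.environ["AZURE_STORAGE_CONNECTION_STRING"]
)

# Stage the input data from cloud storage to the cluster's parallel file system.
blob = service.get_blob_client(container="simulation-inputs", blob="mesh.h5")
local_path = "/lus/scratch/project/mesh.h5"  # hypothetical Lustre path
with open(local_path, "wb") as fh:
    fh.write(blob.download_blob().readall())

# Submit the simulation to the workload manager (Slurm assumed here).
subprocess.run(["sbatch", "run_simulation.sh", local_path], check=True)
```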

To some, this might sound like a strange way to access a cloud-connected Cray. After all, it seems logical for Cray to offer its clusters on demand in a multitenant fashion, or at least rented by the hour. However, as the company’s Barry Bolding tells us, virtualizing a Cray machine defeats the purpose. The parallelism and performance that come from such a tightly coupled system cannot be suitably broken into parts, from the compute to the storage system. He says the company will keep exploring this as a possibility in the future, but for now there is enough demand from HPC users who are storing data and running some workflows in Azure but need a high performance computing boost.

“This is about addressing new customer bases in addition to existing customers that are already using Azure to store data and for some workflows. There are many cloud customers that have increasing needs for scalable compute and they’re realizing that a dedicated resource is more cost-effective than using bursty Azure services,” Bolding explains. “This is also about those streaming a lot of data to the cloud but they also need modeling and simulation tightly coupled with their datasets on Azure.”

We asked Bolding how common a use case like this will be. It is not easy to get statistics from either AWS or Microsoft about what percentage of applications are tightly coupled, true HPC in nature, but one has to imagine it is still a rather limited number. He argues that offering Cray systems this way will have broader reach than we might think. “This brings supercomputing to the customer that doesn’t have or hasn’t had a Cray; they don’t have to worry about the datacenter. There are many customers that fit this bill whose resource requirements are growing to the point that it makes cost sense to have a dedicated resource for simulation and modeling. They can buy an on-prem supercomputer or buy more cloud resources, both of which are expensive, when they can purchase a Cray, put that in an Azure datacenter where they are already running their workloads, and get out of that management.”

Again, colocation is nothing new, even if it is unusual to see high-end dedicated supercomputers added to the mix. Cray is counting on an increasing number of modeling and simulation users who have hitherto used only cloud or cloud cluster resources due to cost constraints. With more options for HPC cloud users in terms of on-demand and public cloud, it will be interesting to see who adopts this option and in what industries, as well as whether those users have ever had experience with Cray supercomputers.
