Adapting InfiniBand for High Performance Cloud Computing

When it comes to low-latency interconnects for high performance computing, InfiniBand immediately springs to mind. On the most recent Top 500 list, over 37% of systems used some form of InfiniBand – the highest representation of any interconnect family. Since 2009, InfiniBand has accounted for between 30 and 51 percent of the systems on every Top 500 list.

But when you look to the clouds, InfiniBand is hard to find. Of the three major public cloud offerings (Amazon Web Services, Google Cloud, and Microsoft Azure), only Azure currently has an InfiniBand offering. Some smaller players do as well (ProfitBricks, for example), but it’s clear that InfiniBand doesn’t have the same mindshare in the general public cloud space as it does in HPC.

Developing a suitable path for broader cloud adoption of InfiniBand is the subject of Feroz Zahid’s paper in the Supercomputing 16 Doctoral Showcase. “Realizing a self-adaptive network architecture for HPC clouds” highlights five key challenges to efficient use of InfiniBand networks in cloud environments. The challenges can broadly be distilled down to two categories: performance and flexibility. Of the two, flexibility may be the larger roadblock.

There are a variety of reasons for this, but one is fundamental to how InfiniBand works. Much of the networking work is offloaded to the hardware for performance reasons, but this means routes are fairly static. This, of course, is antithetical to the idea of cloud resources. Both public and private clouds need to shuffle virtual machines around, but public clouds have an additional need – providing logical separation between different customers (for example, Amazon Web Services’ “Virtual Private Cloud”). This means the migration between hypervisors that can happen transparently with traditional servers is difficult for virtual machines using InfiniBand. The time it takes to recompute and update routes ranges from several seconds to a few minutes.
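To make the routing rigidity concrete, here is a minimal sketch of the idea in Python: the subnet manager computes a forwarding entry for every destination LID on every switch, so relocating a single endpoint means recomputing and rewriting state across the fabric rather than flipping one local setting. The switch names, topology, and LID values are invented for illustration and are not taken from Zahid’s paper.

```python
# Toy model of InfiniBand forwarding state. The subnet manager computes a
# linear forwarding table (destination LID -> output port) for each switch
# and programs it into the switch hardware. All names and numbers here are
# illustrative assumptions.

lfts = {
    "leaf0":  {0x11: 1, 0x12: 2, 0x13: 9},   # port 9 is the uplink
    "leaf1":  {0x11: 9, 0x12: 9, 0x13: 3},
    "spine0": {0x11: 1, 0x12: 1, 0x13: 2},
}

def entries_touched_by_move(tables, dest_lid):
    """Count the forwarding entries that must be recomputed and rewritten
    when the endpoint behind dest_lid migrates to a different host."""
    return sum(1 for table in tables.values() if dest_lid in table)

# Moving one virtual machine's endpoint touches an entry in every switch,
# which is why a full recompute-and-update cycle can take seconds to minutes
# on a large fabric.
print(entries_touched_by_move(lfts, 0x12))   # -> 3 on this tiny fabric
```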

Researchers in Norway have developed a virtual switch model for InfiniBand Single Root I/O Virtualization (SR-IOV), but it does not appear to have gained wide adoption yet. Zahid proposes using this model to allow providers to quickly reallocate resources and support live VM migration. That capability is critical to operating a production cloud system because it enables rolling maintenance windows without user-visible downtime.
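For a sense of what live migration looks like from the hypervisor side today, the sketch below uses the libvirt Python bindings: with direct SR-IOV passthrough the virtual function generally has to be detached before the move and re-attached on the destination, which is precisely the interruption a virtual switch model aims to remove. The domain name, destination URI, and PCI address are hypothetical placeholders, not details from the paper.

```python
# Sketch only: live-migrate a guest that uses an SR-IOV virtual function.
# Domain name, destination URI, and PCI address are hypothetical examples.
import libvirt

VF_XML = """
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x03' slot='0x00' function='0x1'/>
  </source>
</hostdev>
"""

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("hpc-guest-01")

# A passed-through device cannot follow the guest, so detach the VF first...
dom.detachDeviceFlags(VF_XML, libvirt.VIR_DOMAIN_AFFECT_LIVE)

# ...migrate the running guest to the destination host...
dom.migrateToURI("qemu+ssh://dest-host/system",
                 libvirt.VIR_MIGRATE_LIVE | libvirt.VIR_MIGRATE_PEER2PEER,
                 None, 0)

# ...then attach an equivalent VF on the destination (its PCI address and
# InfiniBand routes will differ, which is where the route-update cost bites).
dest = libvirt.open("qemu+ssh://dest-host/system")
dest.lookupByName("hpc-guest-01").attachDeviceFlags(
    VF_XML, libvirt.VIR_DOMAIN_AFFECT_LIVE)
```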

The use of SR-IOV allows virtual machines to share a single physical host channel adapter, but it does not solve the problem of updating routes. While the San Diego Supercomputer Center found SR-IOV to impose minimal performance loss on its Comet cluster, it’s less clear how a public cloud provider might fare. At a minimum, the hypervisor would need to be aware of the network load and be able to rebalance it quickly and without impact. Even Zahid’s performance challenges eventually tie back to flexibility.
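As a rough picture of how that sharing is set up on a host, the short sketch below uses the generic Linux PCI SR-IOV sysfs interface to enable virtual functions on an HCA and list them; each virtual function can then be handed to a different virtual machine. The adapter name mlx5_0 and the VF count are assumptions for illustration.

```python
# Sketch: enable and list SR-IOV virtual functions on an InfiniBand HCA via
# the generic PCI sysfs attributes (sriov_totalvfs, sriov_numvfs, virtfn*).
# "mlx5_0" and the VF count of 8 are illustrative; run as root on an
# SR-IOV-capable adapter, and write 0 first if VFs are already enabled.
from pathlib import Path

hca = Path("/sys/class/infiniband/mlx5_0/device")   # symlink to the PCI device

total = int((hca / "sriov_totalvfs").read_text())
wanted = min(8, total)

# Writing to sriov_numvfs asks the driver to create that many virtual
# functions, each of which can be passed through to a separate guest.
(hca / "sriov_numvfs").write_text(str(wanted))

# Each VF appears as a virtfnN symlink pointing at its own PCI function.
for vf in sorted(hca.glob("virtfn*")):
    print(vf.name, "->", vf.resolve().name)
```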

None of this is to say that InfiniBand cannot work in a cloud model. Both public (e.g. Azure) and private (e.g. Comet) cloud environments have adopted it. But widespread adoption will require some of the technical challenges to be addressed.

As is often the case, the technical challenges are not the greatest obstacle – economics plays a major role. InfiniBand leader Mellanox reported $224.2 million in revenue for Q3 2016 (the most recent report available), an annualized run rate of roughly $900 million and only a small fraction of the $3.5 trillion that Gartner forecasts for global IT expenditure in 2017. The HPC market is small and specialized, so it makes sense that public cloud providers in particular would focus on the much larger general IT market.

This may change, and sooner than some expect. As we wrote earlier this month, a Cambrian explosion is coming in 2017. Platforms that have spent years converging on a homogeneous standard will again diversify. Public cloud providers have rushed to bring specialized kit to the market – double-precision GPUs and FPGAs made a lot of news in 2016. 2017 may be the year that serious efforts are made – not only to offer InfiniBand, but to actively market and enable it.


