Google, Cycle Computing Pair for Broad Genomics Effort

In times long since passed, when cloud was the latest, greatest hype machine, a steady wave of “cloud enablement” companies came to the fore, promising secure ease as users tiptoed across the firewall. As we have seen over the last several years, a great many of these have been absorbed by infrastructure providers or have dissolved into mist.

The startups that made it past the initial cloud boom and carved out a niche, most successfully as highly touted partners of the Amazon, Microsoft, and Google cloud platforms, have had to work hard to stand out. For some, it helped to specialize, as was the case with Cycle Computing, which found its early footing by tapping into the supercomputing set. For this specific group of users in the high performance computing market, the initial worries about cloud security could often take a backseat to suspicions about what virtualization overhead meant for performance. That, coupled with large simulation datasets, made the HPC cloud market a tough one to play in.

But play they did, moving from some early use cases with cloud-based HPC applications into the demanding workloads from the life sciences industry. From large pharma companies to smaller research firms and their sequencing needs, Cycle Computing was able to establish a reputation as a critical middle layer in the HPC to cloud transition. The company counts most of the major companies doing large-scale drug discovery as customers, which means that many of these have already found a comfortable fit in the Amazon cloud.

Since the mid-2000s, when it first came onto the scene, Cycle’s business of “enablement” has meant patching together sophisticated middleware to onboard tough applications for large core-count jobs. More recently, for instance, the company spun up a 70,000-core cluster for hard drive maker HGST to model its new helium drives in high resolution, and last year it put its brains (and software) behind a petaflop-capable cloud-based machine for Schrödinger for a molecular dynamics application. The key to both of these stories, as well as Cycle’s other use cases, is that they help companies like Amazon prove out the tricky cloud math: when it makes sense to do something in-house versus push it out to cloud infrastructure. The underlying costs, namely management, procurement, and various optimizations, are shouldered by Cycle, which has been fine-tuning its approach on a user-by-user basis on the Amazon cloud.

As of today, Cycle’s opportunity to move outside of Amazon’s borders and capture a potential base of new users has expanded. The company’s CEO, Jason Stowe, shared news of the new hooks for the Google cloud with The Next Platform, noting that there are many new use cases the company expects to open up in the coming months. While Cycle has continued to find large new users through its Amazon partner status, most recently the Food and Drug Administration (FDA), expanding beyond that base makes sense and could help it secure more long-term users. Stowe estimates that for every long-term user of the CycleCloud service, two to three times as many use the cloud (and Cycle’s middleware) for a single major undertaking where in-house resources aren’t enough.

These ties with Google Compute Engine are being announced in tandem with two related news items, one from Cycle, the other from Google. First, Cycle Computing has been working with the Broad Institute to move a cancer research workload to the Google cloud, a job that consumed 50,000 cores on GCE. Second, Google announced that its preemptible VMs, which launched in beta several months ago (and are akin to spot instance pricing on AWS), are entering full availability. Interestingly, Stowe explained that it was this very feature inside GCE that pushed the Broad Institute toward the Google cloud, although work between Google and the Broad Institute had been ongoing for months on other workloads.

The Broad Institute ran what amounted to thirty years of cancer research calculations in just a few hours, based on a decade’s worth of sequenced or genotyped biological samples, 1.4 million to be exact. With Cycle’s help, it was able to scale to this level in just a few weeks, taking advantage of preemptible VMs for price reductions, even if that means there is always a chance the workload could be interrupted (we have a longer piece coming this morning featuring a chat with Paul Nash, Google’s product manager for GCE, describing this). As Chris Dwan, acting director of IT at the Broad Institute, described, “this kind of cloud-based infrastructure helps us remove some of the local computing barriers that can stand in the way. Flexible processing power allows us to think on a much larger scale.”
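The headline numbers roughly check out with a back-of-the-envelope calculation, assuming “thirty years” refers to aggregate single-core compute time (the article does not specify how the figure was measured):

```python
# Rough check: 30 years of serial compute spread across 50,000 cores.
# Assumes "thirty years" means aggregate single-core time, which is an
# assumption on our part, not something stated in the announcement.
serial_hours = 30 * 365 * 24          # ~262,800 core-hours of work
cores = 50_000                        # cores the Broad job used on GCE
wall_clock_hours = serial_hours / cores
print(f"{wall_clock_hours:.1f} hours")  # about 5.3 hours
```

Which lands squarely in the “just a few hours” range the Broad Institute reported.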

While use cases like the Broad Institute’s genomics workloads are in Cycle’s sweet spot (and are a target market for cloud providers), Stowe says that Cycle is seeing momentum in more areas, including manufacturing. Although the company’s start was in research and life sciences, there is now far less concern about “older” topics like security and more of a sense that the real challenge is efficiently balancing workloads between what stays in-house and what is sent to the cloud. Cycle’s service helps manage this via policies and hooks into other workload management frameworks, including Univa’s Grid Engine, which was used in the Broad Institute example. The company plans to keep broadening its reach through wider integration and orchestration with schedulers and cloud provider use cases.

On that orchestration front, the goal for the Broad was to ensure the necessary scale and performance, but of equal importance were the pricing benefits it hoped to capture from the preemptible VMs. This echoes the challenge Cycle faced in rolling out support for spot pricing inside AWS; the fundamental concept is the same.

“We have a representation for these clustered applications; it is not unlike a container approach in concept,” Stowe explains. “Our software takes sets of hosts and sets of roles and makes it so you can consistently orchestrate them in different infrastructures. From a practical standpoint, things like a Grid Engine cluster or a parallel shared file system or a Spark cluster or any other number of examples, all of those are just clustered applications with a head node and read/write nodes and maybe a shared file system–all those roles we abstract in a generic way, and we can orchestrate all of that at scale.”
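The “sets of hosts and sets of roles” idea Stowe describes can be sketched in a few lines. This is an illustrative model only; the names and structure are hypothetical and do not reflect Cycle’s actual API:

```python
# Hypothetical sketch of a role-based cluster abstraction, in the spirit of
# what Stowe describes: different clustered applications are just different
# sets of roles over generic hosts. Not Cycle's real software.
from dataclasses import dataclass, field


@dataclass
class Role:
    name: str    # e.g. "head", "exec", "filesystem"
    count: int   # how many hosts fill this role


@dataclass
class ClusterSpec:
    name: str
    roles: list[Role] = field(default_factory=list)

    def total_hosts(self) -> int:
        # The orchestrator would provision this many hosts, then assign roles.
        return sum(r.count for r in self.roles)


# A Grid Engine-style cluster and a Spark-style cluster differ only in
# their role sets; the orchestration layer treats them uniformly.
grid_engine = ClusterSpec("grid-engine", [Role("head", 1), Role("exec", 500)])
spark = ClusterSpec("spark", [Role("master", 1), Role("worker", 200)])
print(grid_engine.total_hosts(), spark.total_hosts())  # 501 201
```

The point of the abstraction is that the same provisioning and scaling machinery can target any cloud provider once an application is expressed as roles rather than as provider-specific machine types.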

