Middle Ground Emerging for Next Generation HPC Storage

In a series of articles over the last two weeks, we have taken a close look at how the storage stack might change for next-generation high performance computing sites, as well as for similarly sized installations in the very large-scale enterprise. Specifically, these pieces have focused on the potential advantages of object storage as the primary approach, a shift that would be simple if not for the intermediate layer required to bridge the divide between the two worlds.

As we wrote back in November, “If file systems could scale in terms of their storage capacity and performance, object storage would not have been necessary.” That realization has been clear for some time, from the upper echelons of high performance computing down to the general enterprise datacenter, but for a POSIX-rooted installed base, simply knowing it does not change much. The death of the parallel file system may be pending, even if it is still years away, and while there are efforts to bridge the gap between the old world and the new object-based future, including MarFS and other projects at the national lab and supercomputing site scale, it will take concerted effort to bring the efficiencies of object storage to large-scale computing centers, be they research or business driven.

For a company like Scality, which was founded in 2009 to bring object storage to petabyte-class centers, the first customers were a much easier sell since they were “native” to the new hyperscale way of doing things. The company’s founders scoured papers from Google, Facebook, and others operating at webscale to design a storage platform that could accommodate those needs. Growing beyond that base to companies with a great deal of data, all of it locked away behind file system approaches that make object storage adoption difficult, required some maneuvering.

“We are combining the hyperscale internet technologies, which have let go of some of the constraints that used to be necessary for traditional parallel file systems, and putting together a system that meets both sets of requirements; that’s what we see going forward,” says Brad King, Chief Architect and one of the co-founders of Scality. “Large enterprises are trying to look more like Google and Facebook, but those companies are using all their own stuff in-house; it’s all specific and tuned. Our goal has been to work toward a more generic platform that fits both the hyperscale and the enterprise, but often that requires something in the middle.”

King pulled from his experience working with large datasets and HPC applications when he and the founding team put together the concept of a scalable, “generic” platform for doing object storage against complex requirements. He spent eight years at oil and gas giant Schlumberger working with CFD applications, then moved into telco at Openwave Systems, building a number of complex platforms for telecommunications and monitoring there and at Bizanga before stepping into storage with Scality.

“Many HPC customers can take advantage of software-defined storage today by leveraging existing interfaces. For some HPC customers, what’s missing are HPC applications that are written natively for object storage interfaces. Many have been written mainly for POSIX-based storage to date,” King explains. “What folks are realizing is that there are a lot of things in POSIX that aren’t needed for parallel HPC applications, and actually introduce unnecessary overhead.” As an example, the French atomic energy agency CEA has reckoned with this by using Scality’s object APIs and rewriting some of its applications to take advantage of them, a change that CEA says will let it pull close to 100% of the bandwidth from its physical disks instead of the 20% it gets from its POSIX file systems.
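To make that overhead gap concrete, here is a minimal sketch of the difference between the two I/O paths. The boto3 client and its S3 calls are real, but the endpoint, bucket, and key names are illustrative assumptions, and CEA’s rewrites target Scality’s native object APIs rather than necessarily this generic S3 dialect.

```python
import boto3  # generic S3 client; Scality's RING also offers S3-compatible access

payload = b"simulation output"  # stand-in for an application's result data

# POSIX route: the write path walks a directory tree, updates metadata,
# and honors locking semantics the application may never actually need.
with open("/tmp/run42.dat", "wb") as f:
    f.write(payload)

# Object route: a single PUT into a flat key space, no directories, no locks.
s3 = boto3.client("s3", endpoint_url="https://ring.example.com")  # hypothetical endpoint
s3.put_object(Bucket="results", Key="run42/output.dat", Body=payload)

# Reads are symmetric: one GET streams the whole object back.
data = s3.get_object(Bucket="results", Key="run42/output.dat")["Body"].read()
```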

Scality is also working closely with Los Alamos National Laboratory, where the MarFS work is happening, as described here. The two supercomputing sites share a core belief: that object storage, faster and more scalable than classic parallel file systems and carrying far less overhead than POSIX, is the best approach for the bulk of their data. There are some key differences in how they get there, however. Scality’s Leo Leung explains the different modes of use by referencing the company’s RING, its distributed object storage platform, which, like many of the homegrown setups at hyperscale companies such as Facebook as well as other commercial offerings, is designed with erasure coding to provide fault tolerance and is specifically tuned to run on commodity x86 systems. “CEA is earlier in its development process compared to Los Alamos, which is much closer to having its full environment, including other technologies outside of Scality’s RING to support its needs. We expect CEA’s entire system to be complete in the next few years, but for now, CEA is using the RING as a testbed to develop an object-based interface that allows Lustre volumes to be tiered, with the Scality RING as a second tier.”
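Scality has not published the RING’s exact coding parameters in this context, but the principle behind erasure coding’s fault tolerance can be shown with its simplest special case: single XOR parity. Production systems use wider k-data, m-parity codes that survive multiple simultaneous disk or node failures; this sketch tolerates exactly one.

```python
def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def encode(data: bytes, k: int = 4) -> list:
    """Split data into k equal chunks and append one XOR parity chunk."""
    size = -(-len(data) // k)  # ceiling division
    chunks = [data[i * size:(i + 1) * size].ljust(size, b"\0") for i in range(k)]
    parity = chunks[0]
    for chunk in chunks[1:]:
        parity = xor_bytes(parity, chunk)
    return chunks + [parity]

def recover(stripe: list, lost: int) -> bytes:
    """Rebuild any single missing chunk by XOR-ing all the survivors."""
    survivors = [c for i, c in enumerate(stripe) if i != lost]
    rebuilt = survivors[0]
    for chunk in survivors[1:]:
        rebuilt = xor_bytes(rebuilt, chunk)
    return rebuilt

stripe = encode(b"one large object, striped across commodity x86 nodes")
assert recover(stripe, lost=2) == stripe[2]  # survive one failed disk or node
```

The appeal over straight replication is capacity efficiency: four data chunks plus one parity chunk costs 25 percent overhead, where keeping two full copies costs 100 percent.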

For large supercomputing sites, the challenges of the object storage movement have been well defined. For large POSIX-based enterprise users, however, the barriers to object stores run in a different direction. Beyond the file system problems, it is a matter of software. “One of the key constraints is that many large enterprises are using applications they purchased and aren’t the master of. If that application expects a traditional file system interface, it’s not possible to use an object platform, or at least there’s no click-button solution there. That’s different than in the web space, where applications are being written from scratch,” Leung says.
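What a vendor has to supply in that middle layer is, at bottom, a translation: the purchased application keeps speaking file semantics while objects sit underneath. The sketch below shows the rough shape of such a shim; the class, endpoint, and whole-object write-back policy are our own illustrative assumptions, not Scality’s actual connector.

```python
import boto3

class ObjectBackedFile:
    """A POSIX-looking file handle backed by a single object (illustrative)."""

    def __init__(self, s3, bucket: str, path: str):
        self.s3 = s3
        self.bucket = bucket
        self.key = path.lstrip("/")  # '/data/reports/q3.csv' becomes 'data/reports/q3.csv'
        self.buf = bytearray()

    def write(self, data: bytes) -> None:
        self.buf += data  # buffer locally; objects are written whole, not in place

    def close(self) -> None:
        # One PUT replaces the entire object, papering over the in-place,
        # partial-rewrite semantics a purchased application may expect.
        self.s3.put_object(Bucket=self.bucket, Key=self.key, Body=bytes(self.buf))

s3 = boto3.client("s3", endpoint_url="https://gateway.example.com")  # hypothetical
handle = ObjectBackedFile(s3, "legacy-app", "/data/reports/q3.csv")
handle.write(b"region,revenue\n")
handle.close()
```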

“There are stable POSIX applications out there, and enterprises need something that interfaces with those applications; we are doing just that. Some go further in terms of certain functionality, things within POSIX they need, and this is what something like MarFS does too. What they need is scalability on the back end. If you’re specific in terms of requirements on the front end, you might have to go build something, because there’s no vendor to satisfy that requirement,” Leung continues. Ultimately, however, for large HPC and very large enterprise shops with a great deal of legacy data tied to POSIX, what Scality (and the early HPC users we talked about) envision looks a lot like the Two Tiers model that EMC’s John Bent described: a paring down of silos, something that will be in increasing demand over the next few years.
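A rough sketch of what that two-tiers pattern implies in practice is a policy that demotes idle data from a fast POSIX tier to an object tier. The paths, bucket name, and one-week threshold below are illustrative assumptions; a production system along the lines of MarFS would leave a stub or metadata entry in the POSIX namespace rather than simply deleting the source file.

```python
import os
import time
import boto3

FAST_TIER = "/mnt/fast"            # e.g. a parallel file system mount (assumed path)
COLD_AFTER = 7 * 24 * 3600         # demote files idle for a week (arbitrary threshold)
s3 = boto3.client("s3", endpoint_url="https://ring.example.com")  # hypothetical endpoint

def demote_cold_files() -> None:
    """Copy idle files to the object tier, then free the fast-tier space."""
    now = time.time()
    for dirpath, _, names in os.walk(FAST_TIER):
        for name in names:
            path = os.path.join(dirpath, name)
            if now - os.path.getatime(path) < COLD_AFTER:
                continue  # still hot; leave it on the fast tier
            key = os.path.relpath(path, FAST_TIER)  # path doubles as the object key
            with open(path, "rb") as f:
                s3.put_object(Bucket="capacity-tier", Key=key, Body=f.read())
            os.remove(path)  # a real system would leave a recall stub behind

demote_cold_files()
```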

For Scality, however, HPC is only one part of the overall market beyond the hyperscale and web-native companies that do not have to think about these POSIX and legacy application challenges. The company has made its biggest strides in telco, among other sectors, where there were still plenty of hurdles, but where scalability and cost-effectiveness were such key drivers that companies were willing to do whatever was necessary to make their datacenters look more like Google’s or Facebook’s.


Other companies fed by similar needs, including DataDirect Networks and Cleversafe, are tackling the HPC market in different ways, but if there is one clear trend we are tracking for HPC storage in 2016, it is in the object storage and object interfacing space.

For the wider enterprise market and “web native” application environments, the object storage story has already been playing out for a number of years, given the relative lack of barriers described above. As one of the companies out in front of that trend, Scality is on quite a roll, boasting this past summer that it had reached 75 customers in the petabyte-or-more range.
