Pushing Back Against Cheap and Deep Storage

It is not always easy, but several companies dedicated to the supercomputing market have managed to retune their wares to fit more mainstream market niches. This has been true across both the hardware and software sides of high performance computing, and such efforts have been aided by a well-timed shift in enterprise needs toward more robust, compute- and data-intensive systems as new workloads, most of them driven by dramatic increases in data volumes and analytical capabilities, keep emerging.

For supercomputer makers, the story is a clear one. On the storage side, however, especially for the select few companies that have a unique, proprietary parallel file system and hardware partnerships to roll into appliances, the move from pure HPC to a wider enterprise market base can be more challenging. Unlike HPC compute systems, which are seeing broader reach thanks to rising computational and data requirements, parallel file systems have a tougher row to hoe, especially against the "cheap and deep" options enabled by software-defined or object-based storage.

We have talked in depth about how humble HPC roots have propelled some storage companies into the broader enterprise storage fray (DDN is a prime example), and via HPC systems makers that sell storage (think Cray's Sonexion line with burst buffers and such), but these companies mostly base their offerings on commercial-grade distributions of Lustre and GPFS rather than going the fully customized route. That is already a tough sell for enterprise shops with no experience running high performance parallel file systems, but for a company that has been around for decades with a closed-source file system locked inside an appliance, as in the case of Panasas, the enterprise reach seems like even more of a stretch, at least in theory.

According to Tom Shea, COO at Panasas, this might sound logical, but what enterprise-grade commercial HPC shops need is something that simply works out of the box. He says that while plenty of sites choose to deploy Lustre or another file system on their own commodity hardware, the management overhead is far more complex than it might appear, and that users are far less interested in what is under the hood of the Panasas appliance than in getting to work on real applications. While open sourcing the file system is not on any foreseeable roadmap (which bucks the trend), the company's focus will be on building a broader base of commercial high performance computing users through scalability, reliability, and manageability, all areas where Shea says the open source, DIY storage approach fails users with mission-critical applications at scale.

As a quick refresher, Panasas got its start in 1999 under the technical direction of co-founder Garth Gibson of RAID fame and found significant footing in the emerging HPC market, in government and academia in particular, beginning with the PanFS file system and its first scale-out NAS appliances. While it is difficult to tell whether the privately held company is profitable or growing overall, especially after undergoing some major transitions, the company's Jim Donovan gave some key insight into what the shift from a government- and academia-focused HPC business to a more enterprise-geared one looks like.

This reset in direction started close to two years ago with a revamping of the team (many left, but new hires came in with media and entertainment credentials) and, according to Donovan, has paid off.

“In our bookings for the year 2014, 20% of our business was comprised of manufacturing, life sciences, media and entertainment. Looking at 2016 and what is expected to close before the end of the year, we are looking at 49% of those bookings coming from those same three areas.”

We can assume that the majority of the business remains in the traditional academic and government markets Panasas is rooted in. Overall, Donovan says that in 2014, 2% of the 20% of bookings were in media and entertainment, a figure that has since jumped to 13%, with manufacturing doubling and steady growth in life sciences.

When it comes to HPC-rooted storage companies that have found inroads in the media and entertainment market, DDN flashes to mind first (customer wins include Fox Sports, MLB Network, Starz, and smaller production houses), but Panasas expects to be competitive with wins of its own among some major broadcasters. "We are watching the evolution of media and entertainment from 2K to 4K environments, and now we're starting to see 8K environments with those large file sizes and increasing pressures for high performance," Donovan tells The Next Platform.

With so many storage and file system platforms sharing similar performance characteristics (at least once you dig into benchmarks and strip out the unfair comparisons), Donovan says that even with a closed door on its file system and appliance, the combination of performance, linear scalability, and hybrid scale-out for large-file, high-throughput, mixed workload requirements is what is still capturing share, presumably powered by the latest ActiveStor 20 appliance. "We hear from a lot of our customers that use both Panasas and Lustre that we shine when running lots of applications against the storage, and also on the ease of management front. People struggle with Lustre, keeping it going and managing it, which is why a turnkey approach is what works best. A large oil company we work with came back to us after data was lost and management overhead was high."

Of course, the main question we had for Panasas is how the company is getting along when the whole storage world seems to be shifting toward an object-based future and open parallel file systems.

Most promising are plans to consider all-flash arrays to bolster the PanFS file system and to explore pulling burst buffer capabilities inside the storage subsystem, but these are roadmap items rather than realities. Most of the company's customers are not yet ready to fork over what all-flash requires, and the burst buffer development work would represent an industry first and will take time. Still, it shows that an established company that has made things work the good old-fashioned way is not flinching from the new day ahead.

“We are aware of the trends and are tuning our roadmap accordingly. Yes, we see where the cheap and deep storage trend is going with objects and software-defined storage running on commodity hardware. We look at that all the time, but we are customer focused, and what we are consistently being asked to deliver is something that is fully integrated and installed so it's running in 10 minutes and you forget about it," Shea says. "Everyone is looking at the economics of software-defined storage, but the fact that we have this level of integration and reliability is a huge differentiator for us, along with the ease of management.”

Panasas senior software engineer Curtis Anderson, a newer hire from NetApp, says that ultimately all of these aspects present a TCO question, as is the case with any company pitching a closed appliance: ease of deployment and management versus the overhead end users take on for the sake of saving money. "If someone wants the absolute lowest cost of entry and the cheapest hardware, that's not where we play," he says, adding that even the buzz around object storage misses a point, at least as far as Panasas is concerned. Under the hood, everything the company does is object-based, but with a POSIX interface (versus a Ceph-like approach with a direct object-native API). "Right now we use objects at a fundamental level because our file system is built on objects, but we don't expose that back to the customer in a way that they can get at it programmatically."

Anderson says some have been scared away from POSIX APIs in favor of object on the grounds that POSIX is not fast enough, but he disagrees. "Customers tend to think that object equals cheap storage; they are conflating object with low cost. That cost reduction comes from commodity hardware. We are object underneath but can provide a POSIX interface on top of that. That is valuable for a large base of existing applications, since many have no interest in converting their applications to object-native. POSIX is actually advantageous for a large group of customers; they want scale-out performance from object but cannot afford to rewrite applications. Besides, the minute you're handling large files, a lot of sharing is a good fit with POSIX as the interface," he adds.
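To make that layering concrete, here is a minimal, hypothetical sketch of the general idea Anderson describes: a file abstraction with POSIX-style read/write/seek semantics sitting on top of an object layer, so existing applications keep familiar byte-offset calls while the storage below deals purely in objects. The ObjectBackedFile class, the 64 KB object size, and the dict-based store are all illustrative assumptions, not Panasas code and not how PanFS is actually implemented.

```python
# Hypothetical sketch: POSIX-style file semantics mapped onto fixed-size
# objects. A dict stands in for the object backend; this is not PanFS.

OBJECT_SIZE = 64 * 1024  # assumed fixed object (stripe unit) size


class ObjectBackedFile:
    """File-like wrapper that maps byte offsets onto numbered objects."""

    def __init__(self, store: dict, name: str):
        self.store = store   # object store: (file name, object index) -> bytes
        self.name = name
        self.pos = 0         # POSIX-style file offset

    def _object(self, index: int) -> bytearray:
        # Fetch (or lazily create) the object holding this stripe.
        key = (self.name, index)
        if key not in self.store:
            self.store[key] = bytearray(OBJECT_SIZE)
        return self.store[key]

    def seek(self, offset: int) -> None:
        self.pos = offset

    def write(self, data: bytes) -> int:
        # Split the write across object boundaries, like striping.
        written = 0
        while written < len(data):
            index, off = divmod(self.pos, OBJECT_SIZE)
            chunk = data[written:written + OBJECT_SIZE - off]
            self._object(index)[off:off + len(chunk)] = chunk
            self.pos += len(chunk)
            written += len(chunk)
        return written

    def read(self, size: int) -> bytes:
        # Reassemble a byte range from the underlying objects. This toy
        # version tracks no file length, so unwritten ranges read as zeros.
        out = bytearray()
        while size > 0:
            index, off = divmod(self.pos, OBJECT_SIZE)
            chunk = bytes(self._object(index)[off:off + size])
            out += chunk
            self.pos += len(chunk)
            size -= len(chunk)
        return bytes(out)


store = {}  # stand-in for the object layer
f = ObjectBackedFile(store, "/scratch/results.dat")
f.write(b"existing POSIX applications work unchanged")
f.seek(0)
print(f.read(8))    # b'existing' -- ordinary read/write/seek calls
print(len(store))   # 1: one 64 KB object holds this small file
```

The point of the sketch is only that the translation layer, not the application, decides how bytes land on objects, which is why applications written against POSIX need no rewrite to get scale-out object storage underneath.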

“Google and Facebook and others have shown that it is quite possible, and that there are advantages, to writing applications for pure object-native APIs. On the other hand, they've also developed man-centuries' worth of experience in how to write these applications, and these are completely new applications on top of that. They are not existing CFD or other applications," says Anderson. "Yes, you can go whole hog object, but it will take the industry much longer to get there, outside of those at that leading edge.”

Although we tend to cover many of the bleeding edge hardware and platform innovations and how they are being adopted, the fact remains that the vast majority of commercial HPC installations rely on fairly traditional approaches for their mission-critical work. Further, unlike the national labs and academic customers, which can spend endless hours tweaking and optimizing open source file systems on commodity hardware for peak performance, these enterprises are in the business of building products, not managing file and storage subsystems. And so, even if there is a "same-old storage" story to be told here, a much more compelling one is how long these users will continue to pass over flash, pure object, and other approaches in favor of a tried-and-true HPC appliance for their storage needs.

This big base of users should not be underestimated, says Shea, and it is allowing Panasas to keep adding to its enterprise HPC ranks.

Shea adds, "What keeps us in business is that there are a lot of customers that are going to fit a profile similar to media and entertainment, where they have a lot of throughput needs and they have really big files, which has always been the strength of our parallel file system. They don't have the interest or time to slap that software onto some random hardware and make sure it all works. For inexpensive cheap and deep storage that's one thing, when they care only about cost per gigabyte. But what we find is that a lot of our commercial customers are not comfortable with that model. When you're looking at petabytes of data with linear scalability and multiple gigabytes per second of sustained throughput without hiccups, that's where we play."
