While it is generally considered stable and reliable in large-scale environments, Lustre is not without a few key weaknesses, which the small vendor community supporting it is often the first to point out. From support, manageability, and ease of deployment to overall cost, enterprise readiness, and the politics of open source, the ecosystem is lively. But the question is, where will Lustre’s path lead in an era when scalability matched with performance is in ever-increasing demand?
Those needs, found in an ever-broadening range of markets, several of which were not users of high performance computing tools even a decade ago, do seem to offer a promising environment for future non-HPC adoption of Lustre. But the question remains whether the roadblocks, which will be detailed in this series, are too high for new markets to scale.
Over the coming week in this multi-part series, we will take a look at the Lustre file system in terms of its current status as a scalable high performance file system for commercial and research organizations—and how its adoption curve might look in the years ahead as it becomes suitably hardened for a broader array of enterprise environments outside of the traditional HPC arenas where it is most common.
For large-scale organizations, both on the traditional supercomputing side as well as in commercial HPC, it is well-recognized that Lustre plays second fiddle to IBM’s General Parallel File System (GPFS). The reasons for this are partially historical and go beyond the argument that HPC shops were already buying IBM supercomputers and this was the file system of choice by default. The reasons are more rooted in the fact that Lustre has not been considered ready for enterprise prime time until the most recent 2.5 release, which added some key reliability and availability features, including changes in the way it handles metadata and via features like hierarchical storage management (HSM). But more important, the decisions against Lustre are also rooted in the management complexities that are inherent to handling the monster parallel file system. Even though IBM’s GPFS customers are paying dearly for support, from all accounts, it is worth the cost if the only other massively scalable option takes a small army of trained admins to wrangle.
A number of companies have tackled the management complexity of Lustre by locking down the system and limiting the number of knobs users can turn; still others are optimizing the open source code generally and packaging support and cleaner interfaces for it; and others are reworking the code from the ground up in an effort to make it more enterprise-friendly—a definition that encompasses the reliability and availability mentioned above, but also ensures it won’t take a big, expensive team to support internally.
Even with all of the features added in Lustre 2.5 (we’ll talk more about those in a bit), it still appears to be an uphill climb to take Lustre to the enterprise. According to Addison Snell of Intersect360 Research, whose firm focuses on how high performance computing centers adopt HPC technologies, “Although the requirements for more scalable data management solutions continue to grow, the adoption of parallel file systems continues to lag in commercial environments.” Snell says that according to Intersect360’s data from many sites with HPC infrastructure, “Lustre is still deployed by only about one-tenth of companies with high-performance or big data workloads.”
Still, once one steps away from the “expected” commercial HPC markets where adoption is still slow going, there are other opportunities—even if the time is not quite right yet. These are driven by one of the prime motivators for next generation infrastructure needs: pure scalability. Luckily for the vendor community behind Lustre’s adoption outside of traditional HPC, this is the best story for Lustre that anyone could tell. It’s what Lustre is known for, after all, and according to some, it is this need for bandwidth and capacity at massive scale that could drive a new set of cloud and telco providers to look to Lustre for the first time.
DataDirect Networks’ director of HPC markets, Laura Shepard, says that there are finally a number of large cloud and telco providers who have growing needs that are pushing them into the supercomputing sphere. “A lot of customers on this end are considering object storage as an option for the 100 petabyte and beyond range over the next few years. They’re looking at what kind of file interface they will need to manage at that scale. That sure won’t be NFS, so they’ve heard about Lustre and what it can do at 100 petabytes plus and with tens of thousands of users across multiple datacenters. It takes HPC-type infrastructure to do all of that, and when it comes to that scale, Lustre is something that does well.”
Even if the need is there at the scalability level, Lustre will be faced with some other challenges, in part because there is general doubt that the performance and scalability at hyperscale datacenters can be found in a POSIX file system. As Intel’s Brent Gorda, who sold his commercial Lustre company, Whamcloud, to the chip giant in 2012, explains: “We own the high ground here. We have applications running at 2 TB/sec, which is 100 to 1,000 times faster than most people think it’s even possible to do data movement.”
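Working backwards from Gorda’s figures gives a sense of what that claim implies about the baseline most people have in mind. The short sketch below uses only the numbers cited in the quote; the unit conversion (1 TB = 1,024 GB) is the standard binary convention and is an assumption on our part.

```python
# Back out the implied baseline from Gorda's claim: 2 TB/s is
# "100 to 1,000 times faster than most people think" is possible.

LUSTRE_RATE_TB_S = 2.0  # TB/s, the figure cited in the article

# Implied baseline at each end of the stated 100x-1,000x range,
# converted to GB/s (assuming 1 TB = 1,024 GB).
implied_low_gb_s = LUSTRE_RATE_TB_S / 1000 * 1024
implied_high_gb_s = LUSTRE_RATE_TB_S / 100 * 1024

print(f"implied baseline: {implied_low_gb_s:.1f} to {implied_high_gb_s:.1f} GB/s")
```

In other words, the quote implies that most people peg achievable data movement at roughly 2 to 20 GB/s, three orders of magnitude below the cited Lustre figure at the extreme end.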
“When we talk to people who are not familiar with Lustre about the capabilities in terms of feeds and speeds, capacity, IOPS, ability to work with SSDs and so forth, we get instant interest. It’s when we start talking about how to actually deploy it that things get more difficult,” Gorda tells The Next Platform. “They say they don’t know how to stand up a cluster within a cluster that is masquerading as a storage system.”
Accordingly, Intel is adding a range of features to Lustre to make the management less daunting, as are the small cadre of other Lustre-driven vendors who are trying to build a business around the file system. This includes simplification of the management interface via Intel Manager for Lustre, which is intended to allow “the average” system administrator to wrangle a high performance file system. “Traditionally, Lustre has been a bit of a hairball in terms of complexity. The really large sites have dedicated Lustre technical people keeping their file system going since, after all, it’s a big parallel machine.” Gorda says that aside from the management interface, they are adding features that enterprise application developers want, including layering the Zettabyte File System (ZFS) developed by Sun Microsystems many years ago underneath it. This ZFS foundation adds scalability to Lustre and portability to NFS, NAS, and other environments common in enterprise shops.
Commercial HPC centers are an easier sell for Lustre than are sites that are not running complex modeling and simulation workloads. Lustre was reared by a supercomputing family, but it’s harder to tell the story from a management perspective in particular, says Steve Butler, CEO of Terascala. Butler’s company specializes in HPC storage management via appliances that come pre-packaged with a tuned version of Lustre that can be addressed with TeraOS—an interface designed to make running Lustre more manageable. He says Terascala’s core customer base is in the industries one might expect—the oil and gas, manufacturing, financial services, and academic institutions that are long-time consumers of HPC technology. When it comes to growing into more mainstream, less strict HPC environments, adoption is slow going.
For storage in general, it is interesting how little growth there is outside of application-specific areas (HPC and big data would be included here) and more general back-office applications. “There has been so much commoditization and so many improvements in enterprise storage that the capabilities go up but the price drop per terabyte is significant.” Butler says that while the larger-scale players like EMC, NetApp, IBM, and Dell are seeing enterprise storage growth, the returns are not as high as many might think, even with the increase in demand for storage from the widening range of data-intensive analytics workloads.
The entry scale for many of the enterprise customers Butler is acquainted with ranges from hundreds of terabytes to multiple petabytes under management. The common requirement is for a scale-out architecture that provides a single namespace and, naturally, the throughput and scalability to keep pace with demand.
“Another common requirement is to provide at least 10 Gb/s of I/O performance for these customers to be able to run against clusters that are usually in the ballpark of several hundred to thousands of nodes,” Butler adds. The general shift among the large-scale enterprise customers is toward open source and inexpensive commodity hardware, Butler explained. “Most of the RFPs we are seeing from the enterprise world have a few demands that we are seeing more of these days.” He cited one recent example via an RFP from one of the largest healthcare providers in the world. “They are requiring a POSIX-compliant file system packaged within an appliance, they want whatever they choose to be scale-out, an open source software stack, and they want real commoditization of their hardware—they want to build whatever they can by themselves.”
The key here, however, is that these enterprise RFPs seek to do all of the above inside of a tightening price window. Butler says that when the breaking point is in the 25 cents per gigabyte range, it rules out proprietary file systems. It is cases like this, he says, where Lustre has a chance to shine.
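Butler’s 25-cents-per-gigabyte breaking point is easy to sanity-check with back-of-envelope arithmetic. The sketch below shows how a hypothetical per-terabyte software license pushes a commodity system past that ceiling; the hardware and license figures are illustrative assumptions, not numbers from the article.

```python
# Back-of-envelope check of the ~$0.25/GB breaking point cited by Butler.
# Only the threshold comes from the article; the per-TB cost figures
# below are purely illustrative assumptions.

BREAKING_POINT_PER_GB = 0.25  # dollars per gigabyte

def cost_per_gb(hardware_per_tb, software_per_tb):
    """Blended cost per gigabyte, assuming 1 TB = 1,024 GB."""
    return (hardware_per_tb + software_per_tb) / 1024.0

# Commodity hardware with open source Lustre (no per-TB license fee):
open_source = cost_per_gb(hardware_per_tb=150, software_per_tb=0)

# Same hardware plus a hypothetical proprietary per-TB license:
proprietary = cost_per_gb(hardware_per_tb=150, software_per_tb=200)

print(f"open source: ${open_source:.3f}/GB")   # under the $0.25/GB threshold
print(f"proprietary: ${proprietary:.3f}/GB")   # over the $0.25/GB threshold
```

Under these assumed figures, the open source stack lands at roughly $0.15/GB while the licensed stack lands at roughly $0.34/GB, which illustrates why a hard price ceiling in that range tends to rule out proprietary file systems at multi-petabyte scale.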
“The real key is just showing how Lustre operates in these types of environments, and sometimes the only way to show what it can do is to get demo appliances in a customer’s hands so they can experiment with the manageability and benchmark performance.” As a proof point, he told The Next Platform that between 80 and 90 percent of Terascala’s on-site demos convert into final sales. It just takes putting a toe in the water, is his argument.
For a company like Terascala, which is pushing enterprise growth of Lustre via appliances aimed at making Lustre easier to manage, there is a constant battle to keep Lustre “enterprise ready” against the features of other prevalent enterprise approaches. While the newest 2.5 release of Lustre does include some of the elements Fortune 1000 companies found valuable in other offerings, namely the ability to offer high performance data movement and hierarchical storage management (HSM), there are key areas where it might not stand up to more traditional enterprise file systems, including GPFS. For instance, while Lustre is known primarily for its performance and scalability, some features, including the ability to handle snapshots, are lacking. Further, many enterprise customers want to be able to move their data around to their NFS or other environments, but Lustre does not have a native way of addressing this important need. While Terascala has worked to address this shortfall with their Intelligent Storage Bridge, which is a data movement engine that lets users leap bi-directionally between Lustre and other environments, this is available only in an appliance approach like Terascala’s. For those shops looking to roll their own Lustre, this chasm in data movement, coupled with the complexity of managing Lustre, leaves some enterprise users wary.
“The important thing is that Lustre stops being its own private island,” explains Butler. “That’s not what the enterprise wants, they want to be able to run their job, get all that data, and move it into traditional environments using NetApp or EMC and have teams of people analyze the data. That’s the classic enterprise model we see coming to bear.”
So even though Lustre might have its closest competitor beat on scale and potential performance, it still lags far behind on the type of support available with GPFS. In fact, we were hard pressed to hear differently from any of the vendors with Lustre-based offerings we spoke with for this series. It’s difficult to imagine how those changes might play out, but for many, not managing a “hairball” of complexity could continue to outweigh whatever changes are on the horizon with how GPFS is supported—at least in the short term.
Tomorrow, we will continue this multi-part series on the state of the Lustre file system by tracking its progress toward enterprise stability and taking a closer look at where the gaps still remain.