Building A File System That’s Primed for the Times

One of the temptations for IT companies skating on the cutting edge is to become enamored with their own inventions, forgetting that customers are far more interested in practical solutions than in whiz-bang technology. Startup Qumulo took this lesson to heart when devising their own file system.

At last week’s Next I/O Platform event, we heard from Qumulo’s director of product marketing, Molly Presley, about how her company navigated this usability-innovation dynamic.

Although Presley never mentioned it by name, the Qumulo Scalable File System (QSFS) is the company’s flagship offering and has garnered a fair amount of success in domains as far afield as entertainment, oil and gas, life sciences, manufacturing, and scientific research.

Qumulo’s story, she said, began in 2012, when much of the talk around the industry focused on how to store and manage the ever-growing amounts of unstructured data being produced in sectors like media and healthcare. And like many file system companies, Qumulo was trying to figure out how to overcome the architectural limitations of existing solutions that, as Presley put it, “just weren’t designed for billions and billions of files.”

The first decision they had to make was whether a new file system was something to even pursue. At the time, object storage was being talked about as a new paradigm that would usurp the need for files altogether. But when Qumulo looked into this, they found that files and file systems were popular with users and would likely remain so for the foreseeable future. “They talked with a thousand customers before writing a lick of code,” said Presley.

However, what they did see changing was that users were migrating to the cloud, and for a variety of reasons. One was the need to access technology like AI, which is much better supported in the cloud, from both a hardware and a software perspective, than at most on-premises facilities. Then there was the more general demand for additional peak compute or storage capacity that has led many users to cloud bursting. As a result, Qumulo decided that their file system should at least be cloud-native. It doesn’t require a cloud setup, but it supports one, along with hybrid environments that mix on-premises and cloud infrastructure.

The other piece of the puzzle was being able to handle big files and small files with equal dexterity. Much of this came down to managing metadata more efficiently. In traditional file systems, seemingly simple queries – how much storage capacity is left, how many files are stored, which files are hogging the I/O – can take days when billions of files are involved, because those operations force the software to perform complete directory scans or file system walks. Qumulo’s answer was to build these capacity and performance analytics directly into the file system software itself, so that such queries can be answered in a matter of seconds.
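Presley didn’t describe the internals, but the general technique behind real-time file system analytics is well known: keep aggregate metadata (file counts, bytes used) rolled up at each directory as writes happen, so a system-wide question becomes a single read at the root rather than a walk over billions of files. Here is a minimal, hypothetical Python sketch of that idea – an illustration of the approach, not Qumulo’s actual implementation; the DirNode class and its methods are invented for this example.

```python
# Illustrative sketch only, not Qumulo's code: per-directory aggregates
# (file count, bytes used) are updated eagerly on every write, so
# "how full is the file system?" is answered by reading stored values
# at the root instead of walking every file.

from dataclasses import dataclass, field
from typing import Optional

@dataclass
class DirNode:
    name: str
    parent: Optional["DirNode"] = None
    file_count: int = 0          # aggregate over this entire subtree
    bytes_used: int = 0          # aggregate over this entire subtree
    children: dict = field(default_factory=dict)

    def mkdir(self, name: str) -> "DirNode":
        child = DirNode(name, parent=self)
        self.children[name] = child
        return child

    def add_file(self, size: int) -> None:
        """Record a new file here and propagate the deltas to the root."""
        node: Optional[DirNode] = self
        while node is not None:
            node.file_count += 1
            node.bytes_used += size
            node = node.parent

root = DirNode("/")
projects = root.mkdir("projects")
projects.add_file(4_096)
projects.add_file(1_000_000)

# Constant-time answer, no directory scan required.
print(root.file_count, root.bytes_used)  # 2 1004096
```

The trade-off is a little extra bookkeeping on every write in exchange for near-instant answers to questions that would otherwise require a full file system walk.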

According to Presley, these sorts of real-time metadata operations take advantage of flash technology, often in the form of NVM-Express drives. The file system’s reliance on flash is explicit: deployments run either on all-flash hardware or on hybrid setups that mix hard disks with SSDs. Beyond that, the software runs on commodity hardware, which can be supplied by a variety of vendors.

Fortunately for Qumulo, NVM-Express drives have not only become popular, but also much less costly of late – two attributes that are no doubt related. Presley says the cost of these drives has come down about 30 percent this year, which means NVMe-based storage is only two to two-and-a-half times the price of a typical SAS solution now. According to her, that “puts it in the realm of a reasonable cost.” She noted that about half their revenue over the last quarter came from deployments with NVMe-based solutions.

In fact, she thinks NVMe has enabled some customers with applications demanding the highest levels of I/O performance to move off a client-based storage architecture or even a parallel file system. “Mostly it’s all about latency in those environments,” Presley said.
