It Takes Geological Patience To Change Datacenter Storage

Moving from an HPC center or a hyperscaler to work on enterprise software has to be a frustrating experience. In the HPC and hyperscaler world, when you need to deal with a problem, it is usually one of scale or performance – or both – and you have to solve the problem now. Like in a year or less, but sometimes you get more time to refine things. Call it 18 months, tops. The new platform – database, storage, network control, compute control, whatever it is – goes into production and months later replaces whatever was going to run out of gas just in time to save the company.

The enterprise, into which we lump government and academic institutions, by contrast, moves at a much slower pace because the risk profile is much higher. If your email or social network or media archive is down for minutes, hours, or even days, no one is going to die. But if an enterprise has an outage and either customer data is compromised or normal business is interrupted, reputation and money are on the line.

So if you have a strong background in HPC and hyperscale storage, as the co-founders of Quobyte certainly do, and you want to change the way that enterprise storage is done, you have to have patience and take the long view. Because it is going to be a long haul to convince companies to change the way they do storage.

To help boost adoption of its eponymous object-based parallel file system, which we talked about at great length when it first became available to early adopters five years ago, the company has decided to roll out a Quobyte Free Edition, which has the prospect of significantly expanding the reach of the Quobyte Data Center File System in the coming years.

The Free Edition falls short of a fully open source file system, and while that may be a big deal to some hyperscalers, cloud builders, and service providers, we think it is much less of a tick box on the RFPs of enterprises. While open source is something these big companies can demand, it is not something that concerns enterprises as much. They are practical, and they actually need to start thinking about moving away from SAN or NAS appliances and towards some sort of software-defined storage that runs on common infrastructure.

Besides, co-founders Bjorn Kolbeck, the company’s chief executive officer, and Felix Hupfeld, its chief technology officer, have been down the open source road before – with the XtreemFS file system project in Germany, to be specific. Hupfeld was the architect and project manager for XtreemFS and Kolbeck was the lead developer, and after working at Google for several years on various projects, the two formed Quobyte to combine the ideas of the HPC-style XtreemFS with the ease of management and heavily automated nature of Google’s internal – and most definitely closed source – file systems.

We took the occasion of the launch of Quobyte 3.0 and the rollout of the Free Edition to get a sense of where Quobyte is today and where it is going.

The first Quobyte release came in late 2014, a little more than a year after the company’s founding – see how fast these hyperscalers move? – and it was designed from the ground up to be a POSIX-compliant distributed parallel file system with block and object overlays when necessary, with triple redundancy of data running on absolutely plain vanilla X86 Linux servers.

With the Quobyte 1.3 release in August 2016, erasure coding data protection methods, popular among the hyperscalers and cloud builders, were added to cut back on the amount of capacity needed to protect data. (It takes more compute to do erasure coding, but there is no replication of data.) Replication and erasure coding can be deployed on different partitions in the Quobyte file system; it is not a situation of one or the other. You would not, for instance, use erasure coding to protect VMs running atop a hypervisor, which expect block storage, because the erasure coding would run nonstop as data changed on the VMs, putting a heavy load on the CPUs. With Quobyte 2.0 in November 2017, access control lists were added to secure the object and file storage, and volume mirroring (good for VM environments) was added along with better integration with Kubernetes container environments.
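To make that capacity trade-off concrete, here is a back-of-the-envelope sketch – not Quobyte-specific, and the 8+3 encoding layout is just an illustrative assumption – comparing the raw capacity needed for triple replication against erasure coding:

```python
# Back-of-the-envelope capacity math: triple replication vs. erasure coding.
# The 8+3 layout below is an illustrative assumption, not Quobyte's defaults.

def raw_capacity_needed(usable_tb, data_shards=None, parity_shards=None, replicas=None):
    """Return the raw TB required to store `usable_tb` of user data."""
    if replicas is not None:
        return usable_tb * replicas                      # e.g. 3x replication
    overhead = (data_shards + parity_shards) / data_shards
    return usable_tb * overhead                          # e.g. 8+3 erasure coding

usable = 1000  # 1 PB of user data, expressed in TB

print("3x replication:", raw_capacity_needed(usable, replicas=3), "TB raw")
print("8+3 erasure   :", raw_capacity_needed(usable, data_shards=8, parity_shards=3), "TB raw")
# 3x replication: 3000 TB raw
# 8+3 erasure   : 1375 TB raw -- roughly 2.2x less raw capacity for the same data
```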

With the Data Center File System 3.0 release that debuts today, Quobyte is adding a slew of things to the storage stack, including native HDFS and MPI-IO drivers for data analytics atop Hadoop and traditional HPC simulation and modeling workloads, respectively. These drivers all bypass the Linux kernel and therefore offer lower latency than many might expect. The storage now also has end-to-end AES-256 encryption, for data in flight and data at rest.
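For those who have not touched MPI-IO, the sketch below shows the kind of collective, parallel write such a driver sits underneath; it is plain, driver-agnostic MPI-IO through mpi4py with a hypothetical mount path, not Quobyte’s own driver API:

```python
# Generic MPI-IO collective write via mpi4py; any MPI-IO driver (including a
# vendor-supplied one for a parallel file system) sits underneath this API.
# The path below is a hypothetical mount point, not Quobyte-specific syntax.
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# Each rank writes its own 1 MiB chunk at a non-overlapping offset.
chunk = np.full(1 << 20, rank, dtype=np.uint8)
fh = MPI.File.Open(comm, "/mnt/parallel-fs/demo.dat",
                   MPI.MODE_CREATE | MPI.MODE_WRONLY)
fh.Write_at_all(rank * chunk.nbytes, chunk)   # collective write, one call per rank
fh.Close()
```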

Kolbeck tells The Next Platform that the Quobyte Data Center File System is mostly written in C++, with Java used for its interfaces and for the frequently changing distributed algorithms that, among many other things, determine where data is placed on the file system. The software runs in the Linux user space and does not have any Linux kernel modules or modifications, which is important in the enterprise: customers want to load Red Hat Enterprise Linux, SUSE Linux Enterprise Server, or Canonical Ubuntu Server and leave the kernel alone. Machines can be equipped with disk drives, flash drives, and NVM-Express drives, and do not require RAID controllers, NVRAM, or other kinds of journaling hardware to protect data. The Quobyte nodes just need “a reasonable amount” of CPU and memory, plus their storage, of course.

A base configuration of Quobyte with triple redundancy requires four servers, and with erasure coding it requires a base of six servers across which data is sliced and encoded. Quobyte can scale, in theory, to thousands of nodes and exabytes of capacity, but thus far the largest installation in terms of capacity is at the Science and Technology Facilities Council, an HPC center in the United Kingdom, which has 42 PB of capacity in around 100 nodes attached to the 600-node “Jasmin” petascale supercomputer. The largest customer in production, says Kolbeck, has 200 nodes. And highlighting the ease of management angle that was so transformative at Google for both of the Quobyte co-founders, Kolbeck notes that one customer that started out with seven nodes two years ago now has 185 nodes running Quobyte, with two system admins each spending a quarter of their time managing that storage – half a person’s time for what we presume is tens of petabytes of capacity. Quobyte counts Airbus, Yahoo Japan, and the High Performance Computing Center in Stuttgart as its marquee customers.

At this point in Quobyte’s history, a small part of the customer base has deployed the storage on public clouds – about 10 percent, Kolbeck estimates – with the remainder running the storage on premises. Most users at this point run Quobyte on two-socket X86 servers with 24 storage bays. Some (an ever-smaller number) fill all of the bays with disk, others mix disk and NVM-Express flash bays (this is more common now), and some even go all NVM-Express. This gives a mix of performance profiles, within the nodes and across the nodes, which the policy engine at the heart of the Data Center File System uses to figure out what data should go where.
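As a rough illustration of what a placement policy engine does – this is a toy sketch with made-up device classes and volume tags, not Quobyte’s actual policy language or API – think of it as matching a volume’s performance requirement to a class of devices:

```python
# Toy sketch of tier-aware placement: pick devices whose class matches the
# volume's policy tag. The device names and volume tags are hypothetical.

DEVICES = [
    {"id": "node1/hdd0",  "cls": "hdd"},
    {"id": "node2/nvme0", "cls": "nvme"},
    {"id": "node3/nvme1", "cls": "nvme"},
    {"id": "node4/hdd1",  "cls": "hdd"},
]

POLICY = {"scratch-ml": "nvme", "archive": "hdd"}  # hypothetical volume tags

def candidate_devices(volume):
    """Return the devices eligible to hold data for the given volume."""
    wanted = POLICY[volume]
    return [d["id"] for d in DEVICES if d["cls"] == wanted]

print(candidate_devices("scratch-ml"))  # ['node2/nvme0', 'node3/nvme1']
print(candidate_devices("archive"))     # ['node1/hdd0', 'node4/hdd1']
```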

Kolbeck is not looking to add persistent memory of any kind to the Quobyte file system any time soon, but if nodes have it, Quobyte will certainly use it. “The reason we don’t care about persistent memory, even with its much better latencies, is that the storage has to work over the network anyway, which has a much higher latency,” Kolbeck explains. “Moreover, another issue is that we don’t believe that most users need the absolute lowest latency possible. They want low latency but they also need scale out because if they are running machine learning workloads, for instance, they want to make sure they can run 100 compute nodes against the storage system and still have reasonably low latency. The system does prefetching, so whether it does 50 microseconds or 60 microseconds doesn’t matter. What does matter is that 100 nodes can access that data in 60 microseconds.”

To get that consistent performance, Quobyte has its own client, which links to server nodes using remote procedure calls (RPCs) over Ethernet networks. Here is how it compares to the NFS v4 client, which has some issues according to the grousing we hear:

Quobyte also has something else that is very interesting: List prices for its Data Center File System. Take a look:

The Free Edition, which supports file systems with a combined 150 TB of disk and 30 TB of flash, or 10 TB on a cloud instance, has no charge associated with it; it is based on the Version 3.0 code. The Cluster Edition has the same S3, HDFS, Kubernetes, and TensorFlow plug-ins, and comes with silver support (9×5, business days) for $8,999 per cluster or gold support (24×7) for $12,999 per cluster. The multitenancy, security, self-service, and erasure coding features are only in the high-end Infrastructure Edition, which does not have a list price but which has academic and volume discounts.

With the availability of the Free Edition, Kolbeck expects to eventually see the typical distribution of free to paid customers – with 90 percent on free, using it in small installations with community support, and the other 10 percent paying for the Cluster Edition or Infrastructure Edition.
