Hybrid Arrays Fight Back Against All-Flash

There are almost too many ways to skin the storage cat in the enterprise datacenter these days. HPC centers have it easy: they pick a single parallel file system, perhaps put a burst buffer between their compute clusters and storage to improve performance, and they are done. Hyperscalers have homogeneous and sometimes homegrown object storage, sometimes with file system and block overlays, and they need homogeneity across compute and storage to keep their capital and operational costs in check.

But enterprise customers, with their diverse workloads, differing quality of service requirements, and varying budgets, have more variables to solve for in the storage equation.

It is reasonable to assume that tiered storage will continue to be the norm in datacenters in the coming decades, much as it has been for the past several decades, although there is some talk about all-flash datacenters becoming more common and, according to recent projections by Gartner, possibly accounting for 25 percent of datacenters by 2019. Adoption of new, fast storage technologies will always start at the top tiers in the storage hierarchy – tier zero in the servers or very close to them, and inside the tier one arrays that host data for production workloads – and older storage technologies will be pushed down to tier two analytics, replication, and archive uses.

In time, there will be mixes of different kinds of flash, with different cost and data durability profiles, used across these tiers, and over a longer span, other non-volatile memory that is more expensive and offers more bandwidth and lower latency will come in at the top of the storage stack, beginning the whole geological process all over again and pushing the very oldest technologies into oblivion. No one knows for sure when tape or disk storage will truly be vanquished from the datacenter, but it will happen someday. Disk-based object storage with erasure coding can compete with tape on a cost per capacity basis today, and flash with data reduction algorithms such as de-duplication and compression can compete with disk.

That does not mean there is not a place for hybrid flash-disk arrays now. It all comes down to cases, and NexGen Storage, a startup with an interesting history, is focusing on hybrid storage aimed squarely at the 500,000 enterprises in the world that employ VMware’s vSphere server virtualization stack. For these companies, a balance of capacity and performance is necessary, and it has to come at the right cost and, NexGen believes, with a quality of service that is guaranteed by integrating very tightly with VMware’s ESXi hypervisor and its Virtual Volumes (VVols) storage virtualization features.

The company is not foolish, either: NexGen is working on all-flash arrays as well.

A Long And Winding Road

NexGen was founded in 2010 by John Spiers and Kelly Long, who were the founders of LeftHand Networks, the originator of the idea of converged storage (ahead of Nutanix, even, though it usually does not get credit for it), which was acquired by Hewlett-Packard back in October 2008 for $360 million.

LeftHand was a pioneer in iSCSI storage area network technology, and the product line continues at HP today as the StoreVirtual line, sold alongside the 3PAR StoreServ disk arrays. The important thing to note is that Spiers and Long created a storage architecture that could accommodate both disk storage and flash when it came along. In April 2013, hoping to beef up its presence in enterprise datacenters as its business among the hyperscalers Facebook and Apple was tapering off, server flash card maker Fusion-io snapped up NexGen for $114 million in cash. At that point, NexGen had raised $12 million in two rounds of venture capital and was one of the hybrid upstarts, like Tintri, Tegile, and Nimble Storage, that were trying to take on EMC and NetApp in the hybrid array space.

NexGen used Fusion-io’s PCI-Express flash cards in its ioControl n5 series of hybrid arrays before it was acquired by Fusion-io, and according to Chris McCall, senior vice president of marketing at NexGen, it still does. Earlier this year, when SanDisk decided to get out of the hybrid disk array business and was prepping its own InfiniFlash all-flash arrays, it spun out NexGen, and Spiers and Long are once more CEO and CTO, respectively, of an independent storage company. (And yes, it does not make logical sense that SanDisk would spin out NexGen and yet launch InfiniFlash if it was worried about competing with its customers in the storage racket.)

The n5 series of arrays have redundant controllers based on Intel’s “Ivy Bridge” Xeon E5 v2 processors. Each controller is equipped with 96 GB of main memory, which is used to run the ioControl storage operating system and to do read caching for data stored on flash. All writes to the Fusion-io flash cards are mirrored across the controllers and acknowledged back to the host before the data is officially committed, and ioControl has automatic tiering software that moves cold data down to disks and keeps hot data in the flash. The read cache for the n5 hybrid arrays is split between the main memory and a segment of flash memory, and the ioControl quality of service algorithms determine what data goes where and when. ioControl also has the ability to pin data to either flash or disk so it can’t be moved around, which is another kind of QoS.
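To make that placement logic concrete, here is a minimal sketch in Python of how an automatic tiering engine with QoS pinning might decide where data lives; the class names, heat scores, and thresholds are hypothetical illustrations, not NexGen’s actual ioControl code.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Tier(Enum):
    FLASH = "flash"   # Fusion-io PCI-Express cards: hot data, low latency
    DISK = "disk"     # 7.2K RPM SAS drives: cold data, cheap capacity

@dataclass
class Extent:
    """A chunk of volume data tracked by the tiering engine (hypothetical)."""
    extent_id: int
    heat: float               # access-frequency score, decayed over time
    pinned: Optional[Tier]    # a QoS pin overrides automatic tiering

def place(extent: Extent, hot_threshold: float = 0.75) -> Tier:
    """Decide where an extent should live.

    Pinned extents stay put regardless of heat; everything else is
    tiered automatically: hot data to flash, cold data down to disk.
    """
    if extent.pinned is not None:
        return extent.pinned
    return Tier.FLASH if extent.heat >= hot_threshold else Tier.DISK

# Example: a cold but pinned extent stays on flash for guaranteed QoS.
ext = Extent(extent_id=42, heat=0.1, pinned=Tier.FLASH)
assert place(ext) is Tier.FLASH
```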

The redundant controllers in the system are linked to each other by an internal 10 Gb/sec Ethernet link, and the flash is not really used as permanent capacity so much as it is used for caching. The disks have RAID 6 data protection on them, and the system has four 10 Gb/sec Ethernet ports that feed out to servers. NexGen doesn’t use SATA drives, but rather 7.2K RPM SAS units that it sources from Toshiba and Seagate Technology; the drives come in 2 TB, 3 TB, and 4 TB capacities.
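As a quick aside on what that RAID 6 protection costs in capacity, here is a minimal sketch; the sixteen-drive group is a hypothetical example rather than NexGen’s actual configuration.

```python
# RAID 6 keeps two drives' worth of parity per group, so it can survive
# two simultaneous drive failures at the cost of two drives of capacity.
def raid6_usable_tb(drives: int, drive_tb: int) -> int:
    assert drives >= 4, "RAID 6 needs at least four drives"
    return (drives - 2) * drive_tb

# Hypothetical example: sixteen 4 TB SAS drives yield 56 TB usable.
print(raid6_usable_tb(16, 4))   # -> 56
```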

[Table: NexGen n5 series hybrid array specifications]

The largest NexGen customers have eight of the top-end n5-1000 hybrid arrays linked to a single workload, all managed through ioControl plug-ins that snap into the vCenter management console for VMware’s ESXi hypervisor. That works out to 2 PB of raw disk capacity. In a twist of irony, NexGen is selling into the LeftHand Networks installed base as well as against other all-disk arrays from EMC, NetApp, IBM, Dell, and HP.

“We can take out an entire rack of LeftHand disks with a 3U system and deliver not only more performance, but more capacity,” says McCall. “That is the quantum leap that doing flash the right way has made in the last five years.”

This is what we here at The Next Platform call scale in – as opposed to scale up (making one thing bigger) or scale out (adding more little things and spreading the work) – and McCall concurs that this is precisely what is happening. And, we say it will happen again with two tiers of flash, and then flash and some other NVM, and then again, and again. Before we know it, we will have a petabyte in our pockets and somehow, some way, the storage business will still only generate about $25 billion in revenues.

The NexGen arrays have data reduction algorithms that can take an array with 256 TB of raw capacity and turn it into about 400 TB of usable capacity. Such a beefy box would cost somewhere on the order of $250,000, or about $1 per IOPS or about 62.5 cents per usable GB, depending on how you want to look at it.
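The arithmetic behind those figures is easy to check. Here is a quick back-of-the-envelope sketch in Python; the implied IOPS figure is our own inference from the $1 per IOPS claim, not a number NexGen quotes directly.

```python
# Back-of-the-envelope check of the pricing claims above.
price_usd = 250_000
raw_tb = 256
usable_tb = 400              # after de-duplication and compression

reduction_ratio = usable_tb / raw_tb                   # ~1.56:1 data reduction
cost_per_usable_gb = price_usd / (usable_tb * 1_000)   # $0.625 per GB
implied_iops = price_usd / 1.0                         # $1/IOPS -> ~250,000 IOPS

print(f"data reduction ratio: {reduction_ratio:.2f}:1")
print(f"cost per usable GB:   ${cost_per_usable_gb:.3f}")
print(f"implied IOPS:         {implied_iops:,.0f}")
```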

The reason why NexGen went with the Fusion-io flash cards and has been waiting for NVM-Express flash cards from other vendors is that it wants to get flash memory as close to the main memory in the system as possible. If you hang flash off the far end of a SAS or SATA disk controller, you negate much of the benefit of using flash in the first place.

NVM-Express will be coming to the NexGen products in the not-too-distant future, and McCall says that the company has been waiting for flash cards and SSD form factors that support NVM-Express to become more widely available. NVM-Express strips out of the storage drivers a bunch of SCSI disk commands that are not remotely useful on a flash-based device, removes the host bus adapter overhead, and does not mess around with protocol translation. The performance benefits are tremendous, as these benchmark tests done by Samsung, one of the vendors of raw flash and NVM-Express products, show:

[Chart: Samsung benchmark results comparing random and sequential SSD performance across drive interfaces]

In the tests above, Samsung is showing the random and sequential performance of flash SSDs with various interfaces linking them to the compute complex. This is going to be a big jump in performance for many hybrid array makers, and it may be some bad news for Fusion-io, which could potentially lose another customer for its ioMemory PCI-Express cards. The neat thing about NVMe is that it can stretch the PCI-Express fabric in the storage server out to the SSD form factors.

McCall says that NexGen is working on an all-flash array and has prototypes running in the lab right now, but he cannot comment on when it might be launched.

“We have been waiting for the right media, because there are a lot of interesting things on component vendors’ roadmaps that are coming out late this year and early next, so we want to time it appropriately,” says McCall. “We are working through all of this stuff now.”

This includes 3D NAND flash and NVM-Express connections out to the drive slots. At the moment, NVM-Express can only run four PCI-Express lanes out to an SSD slot, but it does extend that memory bus for flash out to another tier in the storage array. But don’t get the wrong idea. Flash cards plugging into the PCI-Express bus directly (and also supporting the NVM-Express protocol) will still be used as the top tier in an all-flash array; it is just that they won’t necessarily come from SanDisk. Some storage chassis makers, like Supermicro, will have some SSD slots support NVM-Express and others not. For future hybrid arrays, NexGen could use a mix of PCI-Express flash cards, flash SSDs supporting NVM-Express, and then 7.2K RPM SAS drives for the coldest data. The gap in performance between the Fusion-io flash cards and SAS drives is so great that adding a middle tier based on NVM-Express flash SSDs makes sense, says McCall, to offer a wider range of QoS and to smooth out performance and price/performance.
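To see why a middle tier helps, consider rough, assumed access latencies for the three media types (round numbers for illustration, not vendor specifications):

```python
# Representative access latencies in microseconds (assumed round numbers,
# not vendor specs), one per tier in a hypothetical three-tier hybrid array.
latency_us = {
    "pcie_flash_card": 100,    # ~0.1 ms: PCI-Express flash card, top tier
    "nvme_ssd":        200,    # ~0.2 ms: NVM-Express SSD, middle tier
    "sas_7200_hdd":    8_000,  # ~8 ms seek plus rotation: cold disk tier
}

flash, nvme, disk = latency_us.values()
print(f"flash card to disk, no middle tier: {disk / flash:.0f}x gap")
print(f"with NVMe tier: {nvme / flash:.0f}x, then {disk / nvme:.0f}x steps")
```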

The NVM-Express support is already woven into the ioControl storage operating system, specifically with version 3.5 that launched this week. NexGen says the next generation of NVM-Express devices from Intel, Micron Technology, SanDisk, and Samsung are all certified to work in its machines – once they are put into them.

The software update also includes support for VMware’s VVols, which allow a virtual disk volume to be allocated directly to a VM. In the past, before VVols, storage was allocated to LUNs in the file system, VMs were piled on top of the LUNs in groups, and moving the VMs around caused all kinds of grief for storage managers. In a datacenter that has virtualized applications running across multiple types of storage arrays with different levels of performance, migrating a VM’s storage for performance reasons using the Storage VMotion migration feature can take hours to days, says McCall, and “makes storage admins shudder.” On NexGen arrays, which have native VVol support, ioControl can reassign a VM’s storage to a different tier within the array without doing a Storage VMotion at all, doing the move faster while making vCenter think it is doing a very fast Storage VMotion.
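Conceptually, the array-side retiering looks something like the sketch below. The class and function names are hypothetical stand-ins for whatever ioControl does internally; neither VMware’s VASA APIs nor NexGen’s actual interfaces are shown here.

```python
from enum import Enum

class ServiceLevel(Enum):
    """Hypothetical QoS service levels mapped to array tiers."""
    MISSION_CRITICAL = "flash"   # pin the volume to the flash tier
    BUSINESS = "auto"            # let automatic tiering decide
    ARCHIVE = "disk"             # demote to the SAS disk tier

class VVol:
    """One VM virtual disk, individually addressable by the array."""
    def __init__(self, vvol_id: str, vm_name: str, tier: str = "auto"):
        self.vvol_id = vvol_id
        self.vm_name = vm_name
        self.tier = tier

def retier(vvol: VVol, level: ServiceLevel) -> None:
    """Change a VVol's target tier in place on the array.

    Only metadata changes immediately; the tiering engine migrates the
    volume's extents in the background, so the VM keeps running and no
    host-side Storage VMotion data copy is needed.
    """
    vvol.tier = level.value
    print(f"{vvol.vm_name}/{vvol.vvol_id}: target tier -> {vvol.tier}")

# Example: promote a database VM's volume without a Storage VMotion.
vol = VVol("vvol-0017", "sql-prod-01")
retier(vol, ServiceLevel.MISSION_CRITICAL)
```

Because each VM’s virtual disk is its own first-class object on the array, the move is scoped to one volume rather than a whole LUN full of unrelated VMs, which is what makes it fast.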
