Flash Over The Tipping Point In The Enterprise

This being leap day, storage juggernaut EMC is having fun with puns about quantum leaps and frogs as it launches its much-anticipated DSSD all-flash arrays aimed at extreme I/O performance. But that is not the only all-flash launch the company is making. It has also re-engineered its high-end VMAX storage arrays, aimed predominantly at enterprise customers, so they can be configured entirely with flash.

The VMAX all-flash configurations offer customers a huge amount of capacity – up to 4 PB of storage with a peak of 4 million I/O operations per second across a four-rack setup – and are coming to market a little bit earlier than EMC anticipated thanks to the rapidly falling cost of flash-based SSDs.

“As we have been looking across our product lines, one of the things that has become increasingly clear over time is that we have always known that the cost of enterprise SSDs was going to drop down below the cost of spinning media,” Chris Ratcliffe, senior vice president of core technologies at EMC, tells The Next Platform. “That was just a matter of time. What we didn’t expect was how quickly it would happen. So early last year we started doing a lot of work to make all of our products ready for flash. This is not just sticking SSDs in a VMAX. There are a lot of things that we do inside the box to manage the longevity of the flash, the reliability and consistency of performance, and so forth. We did a huge amount of work inside the VMAX code, literally touching millions of lines of code and adding new code so that customers can get all of the hallmark capabilities of VMAX in an all-flash configuration.”

The VMAX3 arrays are the eleventh generation in the lineage of parallel disk arrays that were conceived by genius storage engineer Moshe Yanai back in the late 1980s and rapidly commercialized as the Symmetrix line in the early 1990s. (Yanai went on to found XIV, now part of IBM, and is working on yet another storage startup called Infinidat, which we will be chasing down.) The Symmetrix was wildly more successful than many people expected, and made EMC one of the fastest growing IT suppliers in history after it basically knocked IBM flat on its back in its own captive mainframe market. The Symmetrix arrays were also popular for very large Unix systems, often used to house big databases and ERP applications.

The initial Symmetrix machines had 24 disk drives behind a custom controller that made them look and feel like a big IBM 3390 mainframe disk drive, plus great big gobs of cache memory that gave them superior performance to the IBM iron. The VMAX 400K arrays launched last year scale up and scale out, and the top-end machines have up to 5,760 drives, 384 “Ivy Bridge” Xeon E5 v2 cores, and 16 TB of cache memory in a single storage server image that has eight controllers and up to 4 PB of aggregate capacity across its four racks. These VMAX controllers are linked to each other with 56 Gb/sec InfiniBand interconnects, by the way, and they are also the first VMAX controllers based entirely on Xeon processors without any other custom ASICs on the boards.

The Microsecond Is The New Millisecond

The VMAX is a beast by any measure, and when stuffed full of SSDs instead of disks, it can deliver average latency under 500 microseconds and millions of I/O operations per second of throughput when fully loaded – and still have the six 9s of system availability that comes with the VMAX warranty. A fully loaded all-flash VMAX has 150 GB/sec of aggregate bandwidth, which is considerably higher than the 100 GB/sec offered by the new all-flash DSSD D5 array also announced today. But the DSSD product is offering 10 million IOPS across 144 TB of capacity with 100 microsecond average response time, which is a stunning amount of IOPS per unit of capacity and very low latency indeed. The VMAX arrays support IBM System z mainframe and Power Systems platforms running the proprietary IBM i (formerly OS/400) operating system, as well as a slew of Unix platforms, plus Windows Server and Linux. It also supports block and file storage, and is therefore a “massive consolidation platform,” as Ratcliffe puts it.
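To see why that DSSD ratio is stunning, it helps to put IOPS per unit of capacity side by side. This is our own back-of-envelope arithmetic on the figures above, not a calculation EMC published:

```python
# IOPS density implied by the numbers in this article (our arithmetic).
dssd_density = 10_000_000 / 144    # DSSD D5: 10 million IOPS over 144 TB
vmax_density = 4_000_000 / 4000    # top-end all-flash VMAX: 4 million IOPS over 4 PB

print(round(dssd_density))  # 69444 IOPS per TB
print(round(vmax_density))  # 1000 IOPS per TB
```

By this rough measure, the DSSD D5 packs roughly 70X the IOPS per terabyte of the biggest all-flash VMAX, which is the point of having two very different all-flash products.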

The XtremIO all-flash arrays can scale up to 1 PB at the moment, and they are designed for mixed workloads that are highly compressible and de-dupable, says Ratcliffe. (We suspect that over time, telling the difference between XtremIO and VMAX hardware could be very difficult indeed, although their software environments might continue to be different.)

The point is, EMC has different storage horses for different data courses.

The all-flash VMAX is coming out a little bit ahead of plan, according to Ratcliffe, who says that EMC is going to build all-flash versions of all of its storage products because 2016 is the transition year when flash reaches parity with disk.

“Our expectation is that by the end of the year, the majority of new machines that we sell will be all-flash arrays,” he declares. “As we move forward, we will see lower-cost spinning media come in. We don’t think that hybrid is going away, but we think for most enterprise workloads, customers are going to move to all flash. We can tier an all flash VMAX so it can push data to a Data Domain archive system or to the cloud.”

That does not mean that EMC is not going to continue selling disk-based VMAX and VNX arrays for customers who want them, or that EMC does not expect fatter or denser spinning media in the future. It does mean that it is no longer much of a horse race for primary or tier one storage for enterprise applications. Flash and other non-volatile storage will eventually win out in the enterprise even if disks do, as Google pointed out last week, persist for cloud storage for many, many years to come.

[Image: emc-vmax-scale]

The VMAX all-flash arrays are modular in design, and the basic module is called a V-Brick. This includes a two-socket Xeon E5 server with up to 1 TB of DRAM memory that is used as data cache. The base engine comes with up to 53 TB of SSD flash drives, which are based on 3D NAND flash and which come from several different suppliers. The compute, cache, and flash can be scaled somewhat independently depending on the nature of the workloads on the systems attached to the VMAX array and the storage software running on the array. (The more functions, the more compute and memory required.) VMAX flash packs with 13 TB of capacity can be added to each V-Brick, for up to 500 TB per engine, and up to four engines can be clustered together in the VMAX 450 variant and up to eight with the VMAX 850 variant, yielding a maximum of 2 PB and 4 PB, respectively, for the top-end configurations. (That is usable capacity, not raw capacity.)
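The capacity scaling described above can be sketched in a few lines. The constants come straight from the article; the helper function names are ours, and this is an illustration of the stated limits, not EMC's configurator logic:

```python
# Back-of-envelope sketch of all-flash VMAX capacity scaling, using the
# usable-capacity figures quoted in the article (our arithmetic).

BASE_TB = 53            # base flash capacity per V-Brick engine
PACK_TB = 13            # capacity of each add-on flash pack
MAX_PER_ENGINE_TB = 500 # stated per-engine ceiling

def engine_capacity_tb(packs: int) -> int:
    """Usable capacity of one engine with a given number of flash packs,
    capped at the stated per-engine maximum."""
    return min(BASE_TB + packs * PACK_TB, MAX_PER_ENGINE_TB)

def cluster_capacity_pb(engines: int) -> float:
    """Maximum usable capacity of a cluster of fully loaded engines."""
    return engines * MAX_PER_ENGINE_TB / 1000.0  # TB -> PB

print(engine_capacity_tb(0))   # 53 TB base engine
print(cluster_capacity_pb(4))  # VMAX 450: 2.0 PB
print(cluster_capacity_pb(8))  # VMAX 850: 4.0 PB
```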

The 450 and 850 series do not just differ in capacity, but also in I/O performance.  On benchmark tests using a mix of reads and writes with 8 KB files, the VMAX 450 can drive 375,000 IOPS per V-Brick, while the VMAX 850 can drive 500,000 IOPS per V-Brick. So the 450 machine tops out at 1.5 million IOPS and the 850 hits 4 million IOPS.
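The IOPS figures above multiply out as follows; again, this is simply the quoted per-V-Brick number times the maximum engine count, not an independent benchmark:

```python
# Sketch of the quoted per-V-Brick IOPS (8 KB mixed read/write test)
# multiplied out to each model's maximum engine count.
models = {
    "VMAX 450": {"iops_per_brick": 375_000, "max_bricks": 4},
    "VMAX 850": {"iops_per_brick": 500_000, "max_bricks": 8},
}

for name, m in models.items():
    total = m["iops_per_brick"] * m["max_bricks"]
    print(f"{name}: {total:,} IOPS")  # 1,500,000 and 4,000,000
```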

[Image: emc-vmax-packaging]

There are two different flavors of the all-flash VMAX arrays, one called the F series and the other the FX series. The F series has the basic HyperMAX operating system and support for VMware Virtual Volumes (VVols), plus Unisphere management and TimeFinder snapshotting and replication services. (Unlike many hyperconverged storage software stacks, which are skinny when it comes to snapshotting, the VMAX3 all-flash arrays can juggle a whopping 16 million snapshots at the same time.) All of the elements shown in blue for the F models are purchasable on an a la carte basis. The FX models have everything in the F series but add in a slew of the software in the EMC storage stack. The interesting bit about the FX stack is that it is “value priced,” meaning sold at a discount compared to buying modules individually, and the FX software has a fixed maintenance cost.

This latter bit is a trick that EMC learned with its XtremIO all-flash arrays. The “Expect More” marketing and support program that started with the XtremIO all-flash arrays, and which helped blunt the attacks of Pure Storage, SolidFire, Kaminario, and other array upstarts, is being moved over to the VMAX all-flash setups. Under this program, support contracts for the product are fixed for the life of the product rather than being increased by EMC when it feels the need. The arrays also have a lifetime flash endurance provision, so if you wear the flash out within the contract term, EMC will replace it. There is also a three-year money back guarantee if the VMAX all-flash arrays do not meet performance expectations after EMC’s engineers come in and try to fix any potential problems customers find.

EMC will deliver inline data compression and non-disruptive mobility of datasets across VMAX arrays in the second half of this year. That compression will offer on the order of 2:1 compaction on enterprise datasets, sometimes more depending on the nature of the data. (EMC thinks it can get 2:1 compression even on things like databases, which are generally not amenable to external compression.) This compression will come as a free upgrade to the HyperMAX storage operating system variant that was developed specifically for the all-flash VMAX arrays. The all-flash VMAX arrays do not have inline de-duplication, and Ratcliffe says that it is not currently on the roadmap, either. So don’t count on further data reduction gains to drive down the cost of effective flash capacity.
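What 2:1 compression means in practice is simple enough to spell out: usable capacity roughly doubles into effective capacity, with no further multiplier from de-duplication. A rough illustration using the top-end figure from this article:

```python
# Rough illustration of effective capacity under the ~2:1 inline
# compression EMC is promising, with no de-duplication on top.
usable_pb = 4.0           # top-end all-flash VMAX 850 usable capacity
compression_ratio = 2.0   # article's ~2:1 estimate; workload dependent

effective_pb = usable_pb * compression_ratio
print(effective_pb)  # 8.0
```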

Pricing information was not available at press time, but we will add it in here once EMC tells us.

The VMAX all-flash arrays have been orderable for a few weeks now, and are generally available as of today. A number of early customers already have machines and have been putting them through their paces.
