A Thirst For Petabyte Scale All-Flash Arrays

Some technology trends get their start among enterprises, some from hyperscalers or HPC organizations. With flash storage, it was small businesses and hyperscalers who, for their own reasons, got the market growing, drawing in engineering talent and venture capital to give us the plethora of options available on the market today. Now, the big customers are ready to take the plunge.

It is no coincidence, then, that Pure Storage has architected systems that scale to multiple petabytes of capacity to meet the needs of those big customers. Large enterprises with pressing demands for scale in terms of both performance and capacity need a different kind of all-flash array than the company has previously designed, and that is what the FlashBlade, which was previewed in March and which is being tested by key customers under an early adopter program ahead of its general availability early next year, is all about. The enthusiasm for the new machine is a bit broader and deeper than Pure Storage had expected, signaling another level of maturity for customers and all-flash array vendors alike.

“What we are seeing in the early adopter program is a very high thirst for very large systems and aspirations for multi-petabyte installations,” Matt Kixmoeller, vice president of products at Pure Storage, tells The Next Platform. “This is one of the real differences between FlashArray and FlashBlade. FlashArray was aimed at mission-critical applications, such as virtualization or databases, and it is a design point where six nines availability is critical and there is not really a need for all of the VMs or all of the databases to be on one storage array. Virtualization makes it quite easy to spread them between three or four arrays and you improve your availability by dividing the load a little bit. The types of use cases we are going after with FlashBlade really are different, and there really is a need for a single namespace across billions or tens of billions of files or objects. And so dividing it destroys its value.”

Early adopters of FlashArray, which debuted in May 2012 and was last updated in June 2015 with faster controllers and fatter flash modules, were mid-sized IT organizations just putting their toes in the water. That was a reflection of the fact that Pure Storage was a relatively new company with modestly scaling products; large enterprises were not about to run their organizations on FlashArrays on day one. The early adopters of FlashBlade are different in that they are pushing the barriers of performance and scale, says Kixmoeller, and interestingly, about 80 percent of them are not existing Pure Storage customers. “The product is really expansionary in terms of new markets that we can go after as a company with new use cases for flash that were not really being served before.”

We would argue that it is all about scale, an argument we have been making for a while even as some all-flash vendors told us that such scale was not necessary. For the virtual server, virtual desktop, and database workloads where flash was originally targeted in the enterprise, it is certainly true that having more than a couple of hundred terabytes of usable capacity was probably not necessary, but when it comes to the analytics, simulation, and media workloads where FlashBlades are seeing early traction, the scale of the array actually does matter.

The FlashArray FA-320 system that was launched four years ago had two controllers and two SSD storage shelves, offered 20 TB to 40 TB of usable capacity, and delivered around 200,000 I/O operations per second (IOPS) with an 80/20 mix of random reads and writes on 4 KB files and around 100,000 IOPS on streaming writes. Its performance was considerably lower on 32 KB files, which Pure Storage has since used for its ratings because that size is more reflective of the data its customers actually store: something on the order of 20,000 IOPS on that 80/20 mix of reads and writes. The top-end FlashArray //m70 that came out in June 2015 could handle over 300,000 IOPS on that read/write mix and offered 400 TB of usable capacity.
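
As a back-of-the-envelope check – our own arithmetic, not anything published by Pure Storage – those ratings can be reduced to a rough IOPS-per-usable-terabyte figure. A small Python sketch using only the numbers quoted above:

    # Rough IOPS per usable TB, using only the figures quoted in the text.
    # This is our own back-of-the-envelope arithmetic, not a vendor rating.
    def iops_per_tb(iops, usable_tb):
        return iops / usable_tb

    # FA-320: ~200,000 IOPS (80/20 mix, 4 KB files) against 20 TB to 40 TB usable.
    print(iops_per_tb(200_000, 20))   # 10,000 IOPS per TB at the low-capacity end
    print(iops_per_tb(200_000, 40))   # 5,000 IOPS per TB at the high-capacity end

    # The same box on 32 KB files: ~20,000 IOPS on that mix.
    print(iops_per_tb(20_000, 40))    # 500 IOPS per TB

    # FlashArray //m70: ~300,000 IOPS against 400 TB usable.
    print(iops_per_tb(300_000, 400))  # 750 IOPS per TB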

But this scale was not sufficient for the largest customers out there. Hence the FlashBlades, which we have detailed here and which will offer up to 1.6 PB of capacity per controller using data reduction techniques such as de-duplication and compression. The initial directed availability rollout of the FlashBlades, which is slated to occur sometime during the second half of this year (and has not yet started), is limited to a single controller with a maximum of 15 blades, but at general availability Pure Storage will double that to two enclosures, with 30 GB/sec of aggregate bandwidth and the ability to handle up to 2 million NFS operations per second. (Pure Storage has not revealed IOPS ratings for the box yet.) The company told us in the spring that the fabric being used to lash multiple FlashBlades together would eventually scale to ten nodes, and Kixmoeller said the architecture of the machine would enable scaling to hundreds of controllers and enclosures, all with a single namespace. That could mean something around 500 PB of capacity using the current 52 TB blades across 320 nodes (to pick a number), and if the capacity of the blades is doubled up, we are talking about 1 EB usable for workloads that can be compressed and de-duped. Those are reasonable numbers for four years from now, when the FlashBlades will be replaced by something even more impressive.
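
For those who want to check the math, here is the simple arithmetic behind those figures as a Python sketch. It reads a “node” as a fully populated 15-blade enclosure – an assumption on our part – and takes the roughly 2:1 data reduction implied by the 1.6 PB-per-enclosure figure; the 320-node count is, as noted, just a number picked for illustration.

    # Capacity arithmetic behind the figures above. Assumption: a "node" is a
    # fully populated 15-blade enclosure; the 320-node count is illustrative.
    blades_per_enclosure = 15
    blade_capacity_tb = 52
    raw_tb_per_enclosure = blades_per_enclosure * blade_capacity_tb   # 780 TB raw
    effective_pb_per_enclosure = 1.6   # per the quoted figure, implying ~2:1 reduction

    enclosures = 320
    print(enclosures * effective_pb_per_enclosure)        # 512 PB, i.e. "around 500 PB"

    # Double the per-blade capacity and the same math lands near an exabyte
    # of effective capacity for data that compresses and de-dupes well.
    print(enclosures * effective_pb_per_enclosure * 2)    # 1,024 PB, roughly 1 EB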

HPC In Various Guises Comes To Flash

Pure Storage is seeing the FlashBlades play out in a number of key markets adjacent to its original use cases in the enterprise. One of those areas is high-end scientific computing, which includes massive simulations for electronic design automation (EDA) workloads used to create chips as well as machine learning applications. Interestingly, Kixmoeller says that a number of car companies are using early FlashBlades to design their self-driving cars and to support the massive amounts of telemetry that will come off these machines as they operate. Another HPC-style workload where the FlashBlades are seeing early interest is among bioinformatics practitioners and researchers, who want to reduce the time it takes to sequence human or plant genomes as well as lower the cost of doing that sequencing.

Another area that often serves as an early adopter of advanced storage technologies, and from which Pure Storage has drawn early adopters for its FlashBlades, is the media and entertainment sector. “We didn’t really know, to be honest, how attractive this would be because the one hiccup in media is that the data and files are not reducible by compression and de-duplication since they are already precompressed,” explains Kixmoeller. “Part of our goal with FlashBlade, and why we did so much design around the hardware, is that we knew we had to deliver a system that took cost out not only through our software, but through very purposeful hardware. When we engage with customers in the media space, we are finding that our cost points with FlashBlade are very competitive with disk, even without the data reduction software.”

The FlashBlade early adopters in the media and entertainment sector include movie render farms and special effects studios, and oddly enough, there is also demand among those who run large-scale video surveillance, which involves not just archiving data but doing real-time and batch analysis of the video streams, both of which are sensitive to the performance of the underlying storage and, at the same time, to its cost.

The third big group of new users for the FlashBlades is made up of customers looking to accelerate their data analytics applications. Kixmoeller says that Pure Storage reckoned that a lot of so-called “big data” applications are focused on ingesting large amounts of data and landing it, but then the analytics is a slow, batch-style munching of the data after it has landed. “People want to immediately use the data, landing it and doing randomized I/O on it, and that is exactly what the traditional systems are bad at,” he says. “Large-scale file systems are either optimized for streaming writes to land data quickly, but if you try to query it with randomized I/O it is tough, or they are optimized for small, random I/O for reading but they are not optimized for streaming writes. And so we have tried to deliver something that can handle both of these use cases.”
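
To make the contrast concrete, here is a toy Python sketch of the two access patterns Kixmoeller describes hitting the same dataset: land the data with large sequential appends, then immediately query it with small reads at random offsets. It is purely illustrative and has nothing to do with Pure Storage’s own code.

    # Toy illustration of the mixed workload described above: sequential
    # appends to land the data, then small random reads to query it.
    import os, random, tempfile

    path = os.path.join(tempfile.mkdtemp(), "landed.dat")
    record = b"x" * 4096                      # 4 KB records

    # Phase 1: streaming ingest -- append records sequentially.
    with open(path, "wb") as f:
        for _ in range(10_000):
            f.write(record)

    # Phase 2: ad hoc queries -- 4 KB reads at random offsets, right away.
    size = os.path.getsize(path)
    total = 0
    with open(path, "rb") as f:
        for _ in range(1_000):
            f.seek(random.randrange(0, size - 4096))
            total += len(f.read(4096))

    print("landed %d MB, then read %d KB at random offsets" % (size >> 20, total >> 10))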

At the moment, the FlashBlades only support the NFS protocol that is popularly used on network-attached storage arrays. Support for object storage that is compatible with the S3 protocol of Amazon Web Services will be added when the FlashBlades become generally available. Some early adopters are running FlashBlades as a backend for Hadoop or Spark, using NFS as the interface to the data, but Kixmoeller says Pure Storage is looking at implementing the Hadoop Distributed File System as well as other protocols and file systems used in data analytics directly on the devices. As for GPFS and Lustre, the parallel file systems used with parallel compute clusters to support simulation and modeling workloads and, increasingly, data analytics, Kixmoeller says that Pure Storage wants to obviate the need for such file systems and have applications talk directly to the file system it has implemented. That said, the protocol layer in the FlashBlades is intentionally separated from the underlying native object storage layer, so any protocol can, in theory, be added to the box without having to gut it and, importantly, multiple protocols can run side by side on the machine, accessing the same or different data in the box.
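
As a purely hypothetical illustration of what that side-by-side access could look like once the S3-compatible interface ships, the Python sketch below reads data through an NFS mount and through an S3-style endpoint using the boto3 library. The mount point, endpoint URL, bucket, key, and credentials are all placeholders, and whether a file share and a bucket actually expose the same data would depend on how the array is configured.

    # Hypothetical sketch of side-by-side protocol access: the same payload
    # fetched over NFS and over an S3-compatible object interface. Every
    # name below (mount point, endpoint, bucket, key, credentials) is a
    # placeholder invented for this example.
    import boto3

    # Via NFS: assume the share is mounted at /mnt/flashblade.
    with open("/mnt/flashblade/datasets/sample.csv", "rb") as f:
        nfs_bytes = f.read()

    # Via the S3-compatible object interface.
    s3 = boto3.client(
        "s3",
        endpoint_url="https://flashblade.example.com",   # placeholder endpoint
        aws_access_key_id="ACCESS_KEY",                  # placeholder credentials
        aws_secret_access_key="SECRET_KEY",
    )
    body = s3.get_object(Bucket="datasets", Key="sample.csv")["Body"].read()

    print("same payload over both protocols:", body == nfs_bytes)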

“Just a baseline with NFS seems pretty good to start with,” says Kixmoeller. “I think we are having the exact same experience that we had with FlashArray. If you look at FlashBlade, it does things that the current products can’t do today in terms of performance and scale at a cost point that is very attractive, but it doesn’t have all of the features – like snapshots and replication – in the first generation. Whenever you launch a new product, there is always a certain amount of nervousness about whether or not you need all of the bells and whistles. What we found with FlashArray five years ago is that there was an enormous group of people who did not need all of that stuff and we are very happy to start with a viable product for folks who really value the raw performance and scale. I would say that the people who came into the early adopter program have found the upper boundaries of what NetApp and Isilon can do, and so FlashBlade is providing them a breakthrough in terms of dramatically more compute and simulation power than they had before. They can look beyond some of the feature gaps and as we add more mainstream features, we will see the product evolve toward more mainstream uses. We are going to start simple and see where customers pull us.”

The rage in HPC proper these days is to use burst buffers built from flash on the front end of parallel file systems, de-randomizing the I/O before the data is pushed out to those largely disk-based file systems. It would be interesting to see customers – perhaps those with off-the-shelf applications in the oil and gas industry that do seismic analysis and reservoir modeling at a smaller scale using NFS, not GPFS or Lustre – try a more direct approach.
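
For readers who have not run into the idea, the de-randomization trick is simple enough to show in a few lines of Python: absorb small writes arriving at random offsets into a fast tier, then sort and coalesce them before draining them to the backing store in offset order. This is a conceptual sketch of the technique, not how any particular burst buffer product is implemented.

    # Conceptual sketch of a burst buffer's de-randomization step: writes
    # arrive at random offsets, get absorbed in a fast tier, and are drained
    # to the backing (disk-based) store in offset order so it sees large,
    # mostly sequential writes. Illustrative only.
    import random

    def drain(buffered_writes, backing_file):
        # buffered_writes: list of (offset, data) pairs absorbed by the flash tier
        expected = None
        for offset, data in sorted(buffered_writes):   # coalesce by sorting on offset
            if expected != offset:
                backing_file.seek(offset)              # seek only when there is a gap
            backing_file.write(data)
            expected = offset + len(data)

    # 256 random-order writes of contiguous 4 KB blocks become one sequential pass.
    writes = [(i * 4096, bytes([i % 256]) * 4096) for i in range(256)]
    random.shuffle(writes)                             # simulate random arrival order
    with open("backing.dat", "wb") as backing:
        drain(writes, backing)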

No matter what architecture or vendor large enterprises and modest HPC shops consider, Pure Storage has another trick up its sleeve to try to win the deal: Its Evergreen marketing program.

With the initial version of Evergreen, customers who added a reasonable amount of capacity to existing FlashArray systems would get a free controller upgrade every three years. (The amount of incremental capacity depends on the model.) Now, the Evergreen effort is being extended to the storage shelves in the FlashArrays, and customers can get credit for the capacity they have already installed if they agree to buy four times more capacity when they upgrade. This capacity increase fits the three-to-four-year upgrade cycle that customers tend to have for their storage anyway, and by being generous in this way Pure Storage has a competitive edge on its rivals and can help minimize the diversity of iron out there in its installed base. In a very clever way, Pure Storage has invented a kind of rental model that puts the depreciation of the iron on its customers’ books instead of its own. Very clever indeed.

2 Comments

  1. FlashArray FA-320 performance per TB is 5-10 4K IOPS/TB, which is firmly in HD – not SSD – territory.
    If its IOPS rating was not a typo, I don’t see why anyone would buy it for any reason(s) related to performance.
