Fabrics Open The Way For Storage Class Memory

Dell EMC has long been a vocal proponent of NVM-Express, the up-and-coming protocol that cuts out the CPU chatter with PCI-Express peripherals and that boosts throughput and drops latency for flash and other non-volatile memory.

For the past two years, Dell, like other system makers, has put NVM-Express drives in its servers while ramping up the flash in its high-end storage systems and preparing to bring the protocol to those external storage appliances. It has taken time to get the arrays reworked, for the price of NVM-Express drives to come down, and for the volumes to ramp up.

Two new PowerEdge systems introduced at the Dell Technologies World show this week in Las Vegas, the PowerEdge R940xa and R840, both include direct-attached NVM-Express drives. Also at the show, the company unveiled the massive PowerMax storage array, the successor to its all-flash VMAX portfolio. The two systems in the PowerMax family include among their features an NVM-Express interface and an NVM-Express array enclosure.

The PowerMax portfolio comprises the 8000 and 2000 systems, which are designed to run legacy applications like virtual machines and relational databases as well as modern workloads like mobile apps, genomics, and the Internet of Things (IoT). The 8000 can deliver up to 10 million IOPS and 50 percent better response time than competing systems, according to Dell EMC.

Dell EMC is far from the only tech vendor that is focused on NVM-Express. Others include not only established system makers like IBM and NetApp, but also component manufacturers like Intel and Cavium and smaller storage players like Pure Storage, E8 Storage, and Excelero. For example, IBM established its NVM-Express strategy a year ago and in December demonstrated integrated Power9 systems and FlashSystem 900 arrays using the protocol over InfiniBand, while all-flash storage providers Pure Storage, E8, and Excelero are building out new architectures and implementations for NVM-Express.

A key message from Dell around NVM-Express throughout Dell Technologies World was that while it is important to get the protocol in place in servers and storage arrays, NVM-Express is the gateway to NVM-Express-over-fabrics (the very ugly abbreviation NVMe-oF) and upcoming storage class memory (SCM) technologies, which will drive improvements in performance and latency. In announcing PowerMax, Jeff Clarke, vice chairman of products and operations at Dell, said that “one of the key attributes of the future’s modern datacenter is end-to-end NVM-Express storage. NVM-Express done right is the path to the next media, storage class memory. It’s how you take on increasingly bigger workloads with all of the data that’s coming your way.”

NVMe-oF will enable communication between a host and a storage system over the network, leveraging transports such as Ethernet, Fibre Channel, and InfiniBand. Which SCM technologies will come to the forefront is still unclear, though what the industry does know is that with the rise of modern workloads like AI, it has to take the next step beyond NAND. Dell EMC has designed the PowerMax systems to support NVMe-oF and SCM when the market embraces them.
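
To make the idea concrete, here is a minimal sketch of what attaching NVMe-oF storage looks like from a Linux host using the standard nvme-cli tool, driven from Python for illustration; the target address, port, and subsystem NQN are hypothetical placeholders, not values from Dell EMC’s announcement.

    # Minimal sketch: attaching NVMe-oF storage from a Linux host with the
    # standard nvme-cli tool, here driven from Python via subprocess. The
    # target address, port, and subsystem NQN are hypothetical placeholders.
    import subprocess

    TARGET_ADDR = "192.168.0.50"                    # hypothetical target IP
    TARGET_PORT = "4420"                            # conventional NVMe/RDMA port
    SUBSYS_NQN = "nqn.2018-05.com.example:array1"   # hypothetical subsystem name

    # Step 1: ask the target which NVMe subsystems it exports over RDMA.
    subprocess.run(
        ["nvme", "discover", "-t", "rdma", "-a", TARGET_ADDR, "-s", TARGET_PORT],
        check=True,
    )

    # Step 2: connect to one subsystem; the kernel then surfaces its namespaces
    # as ordinary local block devices (e.g., /dev/nvme1n1) on this host.
    subprocess.run(
        ["nvme", "connect", "-t", "rdma", "-n", SUBSYS_NQN,
         "-a", TARGET_ADDR, "-s", TARGET_PORT],
        check=True,
    )

Once connected, the remote namespaces behave like local NVM-Express drives, which is exactly what makes the fabric transparent to applications.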

In an interview with The Next Platform at the show, Danny Cobb, corporate fellow and vice president of global technology strategy at Dell, noted that NVM-Express is only a protocol aimed at accelerating flash and other types of non-volatile memory, and that getting the full benefit of it will require a system approach that can take advantage of the upcoming SCM products. Cobb said that while switching out a hard drive for a flash drive made software run faster, there are still bottlenecks that slow performance. In similar fashion, if nothing around NVM-Express changes, those bottlenecks remain; the required pieces – such as a contemporary X86 architecture and the translation layer – need to be put in place around it. And the choice of the networking protocol that creates the fabric needs to be flexible.
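
A quick back-of-the-envelope calculation shows why everything around the media matters more as the media gets faster; the latency figures below are illustrative assumptions, not measured numbers from Dell EMC.

    # Back-of-the-envelope sketch of Cobb's point: once the media gets fast,
    # the fixed software overhead around it dominates. All latencies are
    # illustrative assumptions, not measured figures.
    SOFTWARE_STACK_US = 20.0   # assumed host-side overhead (driver, filesystem, interrupts)

    for media, media_us in [("disk", 5000.0), ("NAND flash", 100.0), ("SCM", 10.0)]:
        total_us = SOFTWARE_STACK_US + media_us
        share = 100.0 * SOFTWARE_STACK_US / total_us
        print(f"{media:10s}: media {media_us:7.1f} us, software is {share:4.1f}% of the total")

With these assumed numbers, the software stack is a rounding error against a disk, about a sixth of the total with NAND flash, and two-thirds of the total with SCM – which is why the protocol alone is not enough.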

“We’re tackling system problems,” Cobb said. “We’re not just reswizzling a data path protocol or something like that. We’re dealing with NVMe-over-fabrics, we’re dealing with naming and directory service problems, we’re dealing with security problems, identification, authentication, and encryption problems, failover type issues and those types of things, and because of that, it’s really cool what the Ethernet implementation looks like. It’s even pretty cool what the Fibre Channel implementation looks like, but now you can see you can run it over Omni-Path, you can run it over InfiniBand, you can even run it over plain old TCP. We’re sort of fragmenting what we meant by fabric at the beginning, where we thought PCIe for the in-the-box fabric, Ethernet for the out-of-box fabric and Fibre Channel for the people who won’t switch off of the orange wire. We understand that. They don’t want to change that infrastructure, they just don’t want to be left behind when the new stuff comes along.”

The idea for NVMe-oF emerged at the Intel Developer Forum in 2013, Cobb explained. EMC was running a technology demo of NVM-Express, and nearby Intel was conducting a similar demonstration. Essentially, both were doing the same things in different ways in their stacks, and both came to the same answers in terms of overhead. They found that a more efficient protocol to layer on top of NVM-Express was needed, and an RDMA fabric could be the answer. In 2014, the NVM-Express consortium, which was founded by Dell, EMC, and Intel, among others, accepted the idea, giving rise to NVMe-oF. The 1.0 specification was issued recently and 1.1 is almost ratified, he said. At last year’s Flash Memory Summit, there were numerous NVMe-oF proofs of concept and demonstrations.

“I guarantee that no two of them interoperated with each other,” Cobb said with a laugh. “All of Mellanox’s stuff worked with Mellanox’s stuff, and all Broadcom stuff worked with Broadcom, and if you tried to plug them in all together, that’s just not where the industry was nine or twelve months ago. But we’re getting to that point where we’re going to have the type of interoperability that we need and that customers demand. It’s going to come from multiple vendors and multiple components interoperating with each other and demonstrating that interoperability, robustness and resilience. Until we get there, people will be bringing it up in the lab and looking at it and measuring it and asking their switch vendors about it and their storage vendors about it, but we’re not going to pick up steam without that broad multi-vendor interoperability nailed down.”

The picture around SCM is still coming into focus. There have been four successful high-volume memory technologies, with NAND – which has been around for more than two decades – being the latest. Given the growth of data and modern enterprise workloads, a NAND successor is needed, and the industry is looking at resistive RAM (ReRAM) and magnetic RAM (MRAM). ReRAM is a good candidate to succeed NAND given that it is faster, has similar endurance, and can be manufactured at similar capacities, Cobb said. MRAM is very fast and offers DRAM-like performance characteristics, but he said it is unclear how to manufacture it at the volumes and scale needed to make it cost-efficient. Dell EMC is also looking at memory technology based on carbon nanotubes, and has invested in Nantero, a startup developing what it calls NRAM (non-volatile RAM). Cisco Systems is also an investor in Nantero.

“It’s got at least electro-chemically unlimited endurance and unlimited retention,” Cobb says. “You can understand how nanotubes can scale down and get to 28 nanometers and 14 nanometers and then down to 10 nanometers. No one has done it yet, but at least you can understand the physics of how it gets there. We said, ‘We’re in on all three.’ You’ve got magnetic RAM coming, you’ve got nanotube RAM coming, you’ve got resistive RAM coming. You could already see phase-change memory or 3D XPoint on the horizon from Intel and Micron, so we knew where that was going to land, so we said, ‘In this phase of faster and faster storage meets bigger and bigger memory, storage class memory is going to be disruptive.’ Whichever one wins. The slower ones are going to be more of a high-capacity storage use case and the faster ones are going to be more of a fast memory use case because they don’t have to stall the processor, and people want more and more and more physical memory to run more in-memory workloads, so we view storage class memory as this intersection of the fastest possible storage and the biggest possible memory.”

Broad adoption of SCM is going to take a while. Many companies will have to change their memory and storage stacks because software today doesn’t expect memory to be persistent or storage to have sub-100-microsecond response times.
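
As a minimal sketch of what that change looks like, consider byte-addressable SCM exposed to software as a memory-mapped DAX file on Linux; unlike ordinary DRAM, a store only becomes durable once it is explicitly flushed. The file path below is a hypothetical mount point, not an actual product configuration.

    # Minimal sketch of why software must change for persistent memory: with
    # byte-addressable SCM exposed as a DAX-mapped file, a store is fast but
    # only becomes durable once it is explicitly flushed. The path below is a
    # hypothetical mount point, not an actual product configuration.
    import mmap
    import os

    PMEM_FILE = "/mnt/pmem0/journal"   # hypothetical DAX-backed file
    SIZE = 4096

    fd = os.open(PMEM_FILE, os.O_CREAT | os.O_RDWR, 0o600)
    os.ftruncate(fd, SIZE)             # size the file before mapping it

    with mmap.mmap(fd, SIZE) as pm:
        record = b"committed:42"
        pm[:len(record)] = record      # an ordinary store: fast, but not yet durable
        pm.flush()                     # flush so the write survives power loss
    os.close(fd)

That explicit flush step – and the ordering guarantees around it – is exactly the kind of assumption today’s software stacks were never written to make.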

“In the meantime, it will get used for really big, somewhat slow memories, and some of it will be used for really fast but still pretty expensive storage – so caching and the fastest tier and those types of things – for the initial cases,” Cobb said. “As volume picks up and as multiple fab ecosystems get involved, you will begin to see that accelerate. We are very bullish and there will be more than one technology in that space, but engineers have a lot of work to do.”
