A Reference Architecture for NVMe over Fabrics

Cavium has raised its profile over the past several years as one of the pioneers in developing Arm-based systems-on-a-chip (SoCs) for servers, rolling out multiple generations of its ThunderX chips in hopes of helping Arm’s low-power architecture make gains in a datacenter environment that for years has been dominated by Intel and its x86-based Xeons.

However, like similar chip makers, Cavium didn’t start with Arm server chips, but instead built toward them atop a broad array of products for other areas of the datacenter, including adapters, controllers, switches and MIPS-based processors for networking and storage devices. It also has used partnerships with the likes of Dell EMC and Hewlett Packard Enterprise to expand its reach in an array of areas, from the Internet of Things (IoT) to service providers.

At this week’s OCP Summit 2018, Cavium is bringing together several of its products along with those from partners Microsemi and Marvell (Cavium’s expected soon-to-be parent company) to tackle one of the hottest technologies in the storage space, NVM-Express. Together the three companies are demonstrating a reference architecture for NVM-Express over Fabrics (NVMe-oF) that organizations can leverage to build NVM-Express solutions, from all-flash arrays and just a bunch of flash (JBOF) to fabric-attached bunch of flash (FBOF). The reference architecture takes advantage of both Arm’s 64-bit architecture and x86.

The demonstration comes as a growing number of vendors race to build out their portfolios around the NVM-Express protocol, which is designed to accelerate the performance of flash and other non-volatile memory. The rapid growth in the amount of data being generated, collected, processed and analyzed, and the rise of such technologies as artificial intelligence (AI) and machine learning in managing that data, have driven the demand for faster speeds and lower latency in flash. A broad range of established vendors, including Dell EMC, IBM, NetApp and Intel, and newer companies like Pure Storage, Excelero and E8 Storage are working to ensure they are players in NVM-Express as the market expands. The first specification of the NVM-Express protocol was released more than seven years ago, and adoption is expected to accelerate over the next few years. Partnerships will be important to that adoption, according to Christopher Moezzi, vice president of marketing for Cavium’s Ethernet Server Adapter Group, who added that “NVMe technology is rapidly changing the way cloud and enterprise data centers connect to shared storage, but building NVMe-oF solutions requires industrywide technology integrations.”

That is on display at the OCP Summit. The reference architecture leverages Cavium’s ThunderX2 Arm-based processors, FastLinQ 100 Gigabit Ethernet NICs and XPliant programmable switches, plus Microsemi’s Switchtec PCIe switches and Flashtec NVRAM cards. In addition, Marvell is adding its NVM-Express SSD controllers. On the software side, the reference architecture enables independent hardware vendors (IHVs) to integrate Microsemi’s open-source PCIe peer-to-peer technology and Cavium’s FastLinQ concurrent RDMA over Converged Ethernet (RoCE) and Internet Wide-area RDMA Protocol (iWARP) support for the Storage Performance Development Kit (SPDK), a set of tools and libraries for building high-performance, user-mode storage applications. Through this, users can improve efficiency and performance by offloading the data path from server compute resources using NVM-Express 1.2 controller memory buffer (CMB) technology.
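To make the NVMe-oF target side of such a design concrete, here is a minimal sketch using the standard Linux kernel nvmet configfs interface over RDMA. This is a generic illustration of the plumbing, not the Cavium/Microsemi/SPDK stack shown at the summit; the NQN, IP address and backing device are placeholder values, and the commands require root and RDMA-capable hardware.

```shell
# Load the NVMe-oF target modules (kernel target, RDMA transport)
modprobe nvmet nvmet-rdma

# Create a subsystem (the placeholder NQN is illustrative)
mkdir /sys/kernel/config/nvmet/subsystems/nqn.2018-03.org.example:jbof1
cd /sys/kernel/config/nvmet/subsystems/nqn.2018-03.org.example:jbof1
echo 1 > attr_allow_any_host          # demo only; restrict hosts in production

# Expose a local NVMe namespace through the target
mkdir namespaces/1
echo /dev/nvme0n1 > namespaces/1/device_path   # placeholder backing device
echo 1 > namespaces/1/enable

# Create an RDMA port on the fabric-facing interface
mkdir /sys/kernel/config/nvmet/ports/1
cd /sys/kernel/config/nvmet/ports/1
echo rdma         > addr_trtype
echo ipv4         > addr_adrfam
echo 192.168.1.10 > addr_traddr      # placeholder address
echo 4420         > addr_trsvcid     # standard NVMe-oF port

# Bind the subsystem to the port so hosts can discover it
ln -s /sys/kernel/config/nvmet/subsystems/nqn.2018-03.org.example:jbof1 \
      subsystems/nqn.2018-03.org.example:jbof1
```

An SPDK-based target would replace the kernel nvmet layer with a polled, user-mode target process, which is where the CPU-offload and peer-to-peer DMA benefits described above come in.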

At the show, the reference architecture shows concurrent 100 Gb/s NVMe-oF RDMA over Converged Ethernet (RoCE) and Internet Wide-area RDMA Protocol (iWARP) connectivity from FastLinQ 100GbE Open Compute Project (OCP) NICs on x86 servers to a Celestica Nebula JBOF – which includes Marvell NVM-Express SSD controllers – and Facebook Lightning JBOF driven by Cavium’s ThunderX2 and XPliant platforms. With the reference architecture, storage IHVs can take advantage of PCIe peer-to-peer and SPDK software to offload the server CPU from the datapath, which aims to reduce latency.
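From the host side, the x86 servers in the demonstration would attach to such a JBOF using standard NVMe-oF initiator tooling. A hedged sketch with nvme-cli over RDMA follows; the address and NQN are placeholders matching no particular deployment, and the commands need root plus an RDMA-capable NIC (RoCE or iWARP).

```shell
# Load the host-side RDMA transport for NVMe-oF
modprobe nvme-rdma

# Discover subsystems exported by the remote target
nvme discover -t rdma -a 192.168.1.10 -s 4420

# Connect to a discovered subsystem (placeholder NQN)
nvme connect -t rdma -a 192.168.1.10 -s 4420 \
     -n nqn.2018-03.org.example:jbof1

# The remote namespaces now appear as local block devices
nvme list
```

Whether the RDMA traffic rides RoCE or iWARP is a property of the NIC and network configuration; the concurrent support for both in the FastLinQ adapters is what lets the same host-side commands work across either fabric.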

The reference architecture gives Cavium another place for its ThunderX2 SoCs, which are based in large part on the Vulcan intellectual property gained from Broadcom after Avago Technologies bought Broadcom for $37 billion in 2016. Now Marvell, which has had its own Armada Arm-based server chips, is in the process of buying Cavium for $6 billion, a deal that is expected to close later this year. It got a boost earlier in March when Marvell shareholders approved issuing common shares in the company in connection with the proposed acquisition.

Also at the OCP Summit, Cavium announced it is working with longtime partner HPE to bring the OCP message of open hardware development to enterprises and cloud service providers that want the efficiencies and performance hyperscalers get in their datacenter infrastructures but lack the purchasing power or IT skills of larger companies like Google, Facebook, Amazon and Microsoft. HPE in 2015 partnered with device manufacturer Foxconn to build its Cloudline family of low-cost servers to sell into cloud markets and to push back at the growing presence of ODMs in the server space. Cloudline systems give enterprises and smaller cloud providers cost-efficient, open-design servers, backed by a major OEM, that they can put into their evolving datacenter environments.

Cavium said it is bringing its FastLinQ 41000 Series 10/25GbE NICs into the OCP 2.0 form factor for HPE’s Cloudline systems and also will work with HPE to spread the message of the emerging OCP NIC 3.0 standard among the companies’ customers. The standard is designed to drive greater power efficiency and expand the options for PCIe Gen 4 and SmartNICs, while also making it easier to install and remove the NICs, Cavium said. Putting the company’s FastLinQ Ethernet NICs into HPE’s Cloudline systems will bring universal RDMA network adapters that give customers a choice of RDMA technologies and faster network virtualization by offloading protocol processing for VXLAN, NVGRE, GRE and GENEVE. It also improves server virtualization and, through integration with DPDK and OpenStack, enables the deployment and management of network-functions virtualization (NFV).
