The dynamics of managing and processing information have changed, and are continuing to do so at a breakneck pace as more and more “smart things” are created, connected and begin producing data.

From the fully autonomous car to the intelligent transit turnstile, or the drone that will someday drop groceries at your front door, the scale of the future connected universe cannot be overstated. As Diane Bryant, head of Intel’s Data Center Group, recently noted, the average smartphone creates about 30MB of data traffic in a single day, and a PC creates up to 90MB. Looking toward the IoT, that daily figure jumps to around 40GB for a driverless car and 50TB for a connected plane.
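
To put those quantities side by side, here is a quick back-of-the-envelope sketch using only the figures quoted above (decimal units, where 1GB is one billion bytes):

```python
# Daily data volumes cited above, expressed in bytes (decimal units).
DAILY_BYTES = {
    "smartphone": 30e6,        # ~30 MB per day
    "PC": 90e6,                # ~90 MB per day
    "driverless car": 40e9,    # ~40 GB per day
    "connected plane": 50e12,  # ~50 TB per day
}

baseline = DAILY_BYTES["smartphone"]
for device, daily_bytes in DAILY_BYTES.items():
    ratio = daily_bytes / baseline
    print(f"{device:>16}: {ratio:>12,.0f}x a smartphone's daily traffic")
```

By this rough measure, a single connected plane produces as much data in a day as well over a million smartphones.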

To accommodate this onslaught, the datacenter market is expected to nearly quadruple in value, from $10.34 billion in 2016 to $38.3 billion by 2021, according to the research firm MarketsandMarkets. But merely putting a price on the challenge does not capture the feats of engineering that adjusting to this new data paradigm will require. If it is going to contend with the mounting challenges of capturing, accessing, analysing and storing these massive amounts of data, the semiconductor industry must fundamentally rethink its approach to datacenter-scale memory.

The Connective Role of Memory

Memory, that cornerstone of the modern datacenter, can be seen as the connective tissue for the digital world. Beyond housing ever-increasing amounts of information, it enables systems to meet the bandwidth requirements of summoning large quantities of data almost instantaneously. From booking a dream vacation to scrolling through Instagram or counting our daily steps, memory acts as the nerve synapses that link our digital intentions to the almost reflexive outcomes we expect.

However, two of the most important issues facing systems today are the impact of moving data over long distances to CPUs, and the inherent difficulty of optimizing the performance and power efficiency of data processing.

The traditional structure of a datacenter server rack puts physical distance between the compute resources and the memory called on to fulfil a request. These legacy server configurations contribute to data bottlenecks that the new big data paradigm only exacerbates, with more data having to be moved and stored than ever before. Modern server memory hierarchies have a huge latency and capacity gap between DRAM and the storage subsystem (SSDs and HDDs), and, as more cores are added to CPUs, the challenge of accessing and moving data fast enough to keep computing pipelines fed now spans the entire computing landscape.
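
The size of that gap is easiest to appreciate with some ballpark numbers. The figures in the sketch below are rough, generation-dependent assumptions rather than measurements, but they show the orders of magnitude involved:

```python
# Approximate access latencies for the memory/storage tiers discussed above.
# These are ballpark assumptions for illustration only, not measured values.
APPROX_LATENCY_NS = {
    "DRAM":      100,         # on the order of 100 nanoseconds
    "NVM":       1_000,       # on the order of a microsecond
    "NVMe SSD":  100_000,     # on the order of 100 microseconds
    "HDD":       10_000_000,  # on the order of 10 milliseconds
}

dram = APPROX_LATENCY_NS["DRAM"]
for tier, ns in APPROX_LATENCY_NS.items():
    print(f"{tier:<10} ~{ns:>12,} ns  ({ns / dram:>8,.0f}x DRAM)")
```

Every step down the hierarchy costs one to two orders of magnitude in latency, which is exactly the gap that new memory tiers and new architectures are trying to fill.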

The impact of this latency gap is compounded when data has to travel long distances to the CPU for a simple compute operation, only to be moved back again and written out to the storage system.
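
A simple cost model makes the point. The bandwidth and throughput figures below are assumptions chosen purely to illustrate the shape of the problem, and the model assumes a near-data engine can scan at a rate comparable to the CPU; even so, it shows how quickly the data movement, rather than the computation, comes to dominate:

```python
# Illustrative cost model for scanning a large dataset with a lightweight
# operation (e.g. a filter). All figures are assumptions for illustration.
DATASET_BYTES   = 1e12    # 1 TB of data to scan
LINK_BANDWIDTH  = 10e9    # ~10 GB/s effective path from storage/remote memory to the CPU
SCAN_THROUGHPUT = 50e9    # ~50 GB/s scan rate, assumed the same near the data and on the CPU

move_then_compute = DATASET_BYTES / LINK_BANDWIDTH + DATASET_BYTES / SCAN_THROUGHPUT
compute_in_place  = DATASET_BYTES / SCAN_THROUGHPUT   # the bulk data never crosses the link

print(f"move data to the CPU, then scan: {move_then_compute:6.1f} s")
print(f"scan where the data resides:     {compute_in_place:6.1f} s")
```

Under these assumptions, five sixths of the elapsed time is spent moving bytes rather than operating on them.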

“Because of the changes we are seeing in network traffic—more East-West, server-to-server traffic and less North-South server-to-core traffic—we will probably see more people adopting leaf/spine topologies,” wrote industry analyst Patrick Moorhead in his forecast of the most significant datacenter networking trends of 2016.

“Some of the new networking products (like VMware’s NSX) lend themselves well to this type of decentralized architecture. But don’t expect that to crush the traditional core-centric network topology. Instead this will happen more along the edges and with the deployment of new workloads like Big Data.”

Smart Data Acceleration

In this context, new architectures are already being developed, and may yet be discovered, that will improve the ability to manage increasing amounts of data without simply shifting the bottleneck to a different place within a system. Among them is Rambus’ Smart Data Acceleration (SDA) program.

SDA – a new architecture developed by Rambus – offers high memory densities linked to a flexible computing resource. SDA is built on the idea that it’s more expensive to move data toward processing elements than to move compute power to where the data sits.
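
The distinction is easiest to see as a programming pattern. The sketch below is a minimal illustration of the idea only, not of SDA’s actual interface; a Python list stands in for a dataset resident on a remote memory node, and the point is simply what has to cross the link in each case:

```python
# A list standing in for a large dataset held on a remote memory node.
remote_rows = [{"id": i, "temp": (i * 37) % 100} for i in range(100_000)]

def move_data_to_compute():
    # Pattern 1: pull every row back to the host, then filter locally.
    local_copy = list(remote_rows)                  # all rows cross the link
    return [row for row in local_copy if row["temp"] > 95]

def move_compute_to_data(predicate):
    # Pattern 2: ship a small predicate to the node holding the data and
    # return only the matches; the bulk of the data never moves.
    return [row for row in remote_rows if predicate(row)]

hot_rows = move_compute_to_data(lambda row: row["temp"] > 95)
print(f"{len(hot_rows):,} matching rows returned instead of {len(remote_rows):,} total rows")
```

In the first pattern the cost scales with the size of the dataset; in the second it scales with the size of the result, which for many analytics workloads is dramatically smaller.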

“We think a lot about the memory hierarchy,” said Steven Woo, Rambus’ vice president of Systems and Solutions. “At the top is DRAM and at the bottom is storage, and in between are some gaps that have been filled in by new types of memory. For example, non-volatile memory (NVM) fills a massive gap. But we see that there are still opportunities for other levels in that hierarchy, and we believe SDA fits nicely in a gap that still exists.”

To implement this new processing architecture, Rambus built hardware modules that pair an FPGA with 24 DIMM sockets housing 24 memory modules. With the FPGA’s compute power sitting directly alongside the memory on the card, processing can be carried out locally, right next to the data.

Data held on the SDA platform can be operated on or transformed in place to maximize parallelism and efficiency, without the need to move it, and the platform can be inserted directly into an existing server rack. Alternatively, it could ultimately be made available over a network, where it would serve as a key offload agent in a more disaggregated scenario. The potential for this new architecture to address our insatiable appetite for data is vast.
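
In that disaggregated scenario, the host-side flow might look something like the sketch below. Every name in it (OffloadClient, submit_scan, the address and dataset strings) is hypothetical and invented for illustration; it does not describe Rambus’ actual SDA software interface, only the general shape of handing work to a network-attached, near-memory offload agent:

```python
# Hypothetical client for a network-attached, near-memory offload appliance.
# All identifiers here are invented for illustration; this is a stub, not a real API.
class OffloadClient:
    def __init__(self, address: str):
        self.address = address  # where the appliance sits on the network

    def submit_scan(self, dataset: str, predicate: str) -> dict:
        # In a real system this would serialize the request, send it to the
        # appliance, and wait for the (small) result set to come back.
        return {"dataset": dataset, "predicate": predicate, "rows_returned": 0}

client = OffloadClient("sda-appliance.rack12.internal:7000")
result = client.submit_scan(dataset="clickstream-2016", predicate="country == 'DE'")
print(result["rows_returned"], "rows shipped back to the host")
```

The host describes the work in a few bytes and gets back only the result; the terabytes being scanned stay where they are.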

Taking the Next Step

The semiconductor industry must focus on addressing the emerging issues facing datacenters in an almost pre-emptive manner, before the problems become too large to tackle, and legacy architectures too entrenched to change.

This will be a challenge, to be sure. As Timothy Prickett Morgan recently observed in The Next Platform, “making any substantial change in something as fundamental as the datacenter rack requires a pretty significant payback.”

Rambus has already taken the first step toward this goal by developing SDA in a research program setting. The next step is to deploy this platform in datacenter environments where its capabilities can be proven amid the real-world demands of Big Data.

As Prickett Morgan added, “The time is perhaps ripe to get it right, and not just for the hyperscalers, cloud builders, and HPC shops that need something a little different to get the efficiencies their business model or budgets require, but for the entire IT industry.”
