New Memory Architectures Poised to Reshape the Future of Data Centers

The dynamics of managing and processing information have changed. In a connected world where more data is captured, moved, and analyzed than ever before, data centers are facing ever-increasing demands. Performance and power consumption are top of mind for every server and data center architect, and as the data deluge continues to build, the demands placed on memory and processors will only grow. With these challenges mounting, the semiconductor industry must decide what form the next generation of memory will take.

Memory is the cornerstone of the data center. It enables systems to meet bandwidth requirements and handle huge amounts of data, with a growing emphasis on in-memory computing: the gathering, analysis, and reporting of data in memory, all of which is essential to keep services running without a glitch.

The semiconductor industry needs to focus on tackling emerging issues facing data centers in this age of Big Data. Rethinking memory and system architectures as the industry defines DDR5 will be a step in the right direction. Improving performance, power efficiency, and Total Cost of Ownership (TCO) with new technologies such as hardware acceleration and Near Data Processing will be vital in future data centers.

Until recently, meeting new data-rate challenges and demands has relied on the steady advance of Moore's Law, the observation that the number of transistors on a chip doubles roughly every two years. However, Moore's Law is slowing while the volume of data being processed continues to climb. Although the next generation of memory is likely to help accelerate data movement, it may simply shift the bottleneck elsewhere in the system. In fact, this has been a recurring theme, as compute, memory, storage, and input/output (I/O) have all advanced at different rates.

So how do system architects meet the challenges of improving performance and power efficiency for modern workloads?

To accommodate the next leap in data processing demands, the industry will have to do more than tweak processor and memory designs. The next generation of DDR is needed to keep up with the growing capabilities of future CPUs, but evolutionary advances alone are struggling to address a growing set of challenges. With Moore's Law slowing and power scaling a thing of the past, architects need to rethink system architectures to remove the critical bottlenecks holding back improvements in performance and power efficiency. In particular, systems must minimize data movement and provide application-specific acceleration capabilities.

Managing data center resources intelligently and efficiently

Large data centers typically address scalability by increasing the number of fixed-resource servers, in some cases to hundreds of thousands of servers, with the largest data sets stored and processed across many racks. Used in this manner, the legacy server architecture can lead to low CPU utilization, high data-access latencies, reduced power efficiency, and increased TCO.

Rack Scale Architectures offer data center architects the ability to group processing, memory, and storage resources into pools, allowing them to be scaled independently as needed. This means one rack can contain a mix of compute, memory, storage and I/O resources, while another can contain a completely different mix, resulting in reduced bottlenecks for varying workloads with improved performance, power efficiency, and TCO.
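As a rough illustration of why pooling helps, consider the toy model below. It is a minimal sketch with made-up numbers, not a model of any particular rack-scale product: workloads with mismatched CPU and memory needs are packed either onto servers of a single fixed shape or drawn from independently sized pools, and the pooled case strands far less of whichever resource a workload does not need.

```python
import math

# Toy comparison: fixed-shape servers vs. independently scaled resource pools.
# All numbers are illustrative assumptions, not measurements of real hardware.

workloads = [
    {"name": "analytics", "cpu": 8,  "mem_gb": 512},   # memory-heavy
    {"name": "web",       "cpu": 32, "mem_gb": 64},    # CPU-heavy
    {"name": "cache",     "cpu": 4,  "mem_gb": 768},   # memory-heavy
]

# Hypothetical fixed-resource server shape: every box has the same CPU/memory mix.
SERVER_CPU, SERVER_MEM = 32, 256

def fixed_servers_needed(w):
    """Each workload must be placed on whole servers of the single fixed shape."""
    return max(math.ceil(w["cpu"] / SERVER_CPU), math.ceil(w["mem_gb"] / SERVER_MEM))

fixed_cpu = sum(fixed_servers_needed(w) * SERVER_CPU for w in workloads)
fixed_mem = sum(fixed_servers_needed(w) * SERVER_MEM for w in workloads)

# Pooled (rack-scale) case: compute and memory are provisioned independently.
pooled_cpu = sum(w["cpu"] for w in workloads)
pooled_mem = sum(w["mem_gb"] for w in workloads)

print(f"Fixed servers provision {fixed_cpu} cores / {fixed_mem} GB")
print(f"Pooled resources need   {pooled_cpu} cores / {pooled_mem} GB")
```

With these assumed workloads, the fixed-shape servers end up provisioning roughly four times the CPU and slightly more memory than the pooled case actually requires, which is exactly the kind of stranded capacity that drags down utilization and TCO.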

Managing data access

The memory hierarchy plays an important role in determining how well applications perform, especially for data-intensive applications. Modern server memory hierarchies have a huge latency and capacity gap between DRAM and the storage subsystem (SSDs and HDDs). The impact of this latency gap is exacerbated when data is moved over long distances to the CPU for simple operations to be performed before being moved all the way back to be written to the storage system.
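The cost of that round trip is easy to see with some back-of-envelope numbers. The sketch below uses assumed figures (a 100 GB data set, roughly 10 GB/s of effective bandwidth between storage and CPU, and a query that keeps only 1% of the data) purely to illustrate how transfer time, not computation, dominates when processing happens far from where the data lives.

```python
# Back-of-envelope: moving data to the CPU vs. filtering it near the storage.
# All figures below are assumptions for illustration only.

DATA_GB     = 100    # size of the data set scanned
LINK_GBPS   = 10     # effective bandwidth between storage and CPU (GB/s)
SELECTIVITY = 0.01   # fraction of the data the query actually needs

# Host-side processing: ship the full data set to the CPU, then filter it there.
move_all = DATA_GB / LINK_GBPS

# Near-data processing: filter where the data lives, ship only the results.
move_filtered = (DATA_GB * SELECTIVITY) / LINK_GBPS

print(f"Move everything to the CPU: ~{move_all:.1f} s of transfer")
print(f"Filter near the data first: ~{move_filtered:.1f} s of transfer")
```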

There are several ways to address this issue: block storage devices that are faster than SSDs can fill the gap and improve performance, and Near Data Processing can move processing to the data to minimize data movement and, in turn, the time it takes to process it. Rambus' Smart Data Acceleration (SDA) Research Program has developed a solution that combines high DRAM capacity with an FPGA to provide flexible offload and acceleration capabilities. This solution enables near-data processing, minimizing data movement and access latency, and it scales, allowing more engines to be added as more memory and processing are needed.
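The pattern can be sketched as a simple offload interface. The NearDataEngine class and its offload_filter method below are hypothetical and are not the SDA programming model; they only illustrate the idea of pushing an operation to an engine that sits next to a large pool of memory so that only the reduced result crosses back to the host.

```python
# Hypothetical near-data offload interface (illustrative only; not an actual
# SDA API). The engine holds a large data set close to memory and runs a
# user-supplied predicate there, so only matching records move to the host.

from typing import Callable, Iterable, List


class NearDataEngine:
    """Toy stand-in for an FPGA-based engine co-located with a DRAM pool."""

    def __init__(self, records: Iterable[dict]):
        # In a real system the data would already reside in the engine's memory.
        self._records = list(records)

    def offload_filter(self, predicate: Callable[[dict], bool]) -> List[dict]:
        # The filter runs "near the data"; only the survivors cross to the host.
        return [r for r in self._records if predicate(r)]


# Host-side usage: request the small slice of data we care about instead of
# pulling the entire data set across the memory/storage boundary.
engine = NearDataEngine({"user": i, "active": i % 100 == 0} for i in range(1_000_000))
active_users = engine.offload_filter(lambda r: r["active"])
print(f"{len(active_users)} records returned to the host out of 1,000,000")
```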