Rethinking System Architectures to Improve Performance and Power Efficiency

The semiconductor industry can no longer depend on dramatic performance and power efficiency gains from Moore’s Law and Dennard Scaling. This is precisely why there is increasing emphasis on rethinking system architectures around newer technologies such as FPGAs, which can accelerate computation and fuel future improvements in data centers and High Performance Computing (HPC) systems.

Although smaller process geometries continue to provide more transistors per chip, clock speeds have plateaued due to power and thermal limits, and instructions per clock have remained relatively static as well.

CPU trends

“Process technology improvements successfully fueled advances in performance and power efficiency for many years,” Steven Woo, VP of Systems and Solutions at Rambus, explained. “However, the days of being able to rely on Moore’s Law and Dennard Scaling for dramatic performance and power efficiency improvements are past. Going forward, the industry must focus on rethinking system architectures to drive large improvements in performance and power efficiency.”

In addition, says Woo, the performance and power efficiency bottlenecks in systems have shifted over the years as both architectures and applications have evolved. The relentless progression of Moore’s Law and clock-speed scaling throughout the 1990s and early 2000s improved computation capabilities so effectively that processing bottlenecks have since moved to other parts of the system.

“Traditional system bottlenecks have shifted and new ones are forming in the memory and storage systems, and in networks as well,” he confirmed. “The industry is responding by turning its focus to these new and emerging bottlenecks, while rethinking traditional system architecture to address these new challenges. Power efficiency continues to be a primary concern, as well as flexible acceleration to meet the performance needs of modern workloads.”

Perhaps not surprisingly, minimizing data movement through near-data processing and accelerating computation with FPGAs are two specific areas where the industry has redoubled its efforts to rethink system architecture. Indeed, FPGAs offer customizable hardware acceleration, allowing processing functions to be adapted and modified to match the needs of applications and workloads.

In addition, FPGAs are available on a wide variety of cards that can be deployed for offload and acceleration, using the same types of interfaces as CPUs and attaching to the same types of memories as well. Simply put, leveraging similar interfaces and memories eases integration and makes FPGAs appropriate for a number of environments such as data centers.
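To make the offload pattern concrete, here is a minimal host-side sketch using the pyopencl bindings to dispatch a simple vector-add kernel to an OpenCL device. It is illustrative only, not a Rambus- or vendor-specific flow: the kernel is compiled from source at runtime, whereas FPGA toolchains typically load a precompiled bitstream, and the kernel and buffer names are placeholders.

```python
import numpy as np
import pyopencl as cl

# Host-side input data.
a = np.random.rand(1 << 20).astype(np.float32)
b = np.random.rand(1 << 20).astype(np.float32)

ctx = cl.create_some_context()   # selects an available OpenCL device (CPU, GPU, or FPGA)
queue = cl.CommandQueue(ctx)
mf = cl.mem_flags

# Stage inputs into device-visible buffers and allocate space for the result.
a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
c_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

# A trivial kernel; on an FPGA this logic would be implemented in the fabric.
kernel_src = """
__kernel void vadd(__global const float *a,
                   __global const float *b,
                   __global float *c) {
    int i = get_global_id(0);
    c[i] = a[i] + b[i];
}
"""
prg = cl.Program(ctx, kernel_src).build()

# Launch the kernel on the accelerator, then copy only the result back.
prg.vadd(queue, a.shape, None, a_buf, b_buf, c_buf)
c = np.empty_like(a)
cl.enqueue_copy(queue, c, c_buf)
```

Whatever device sits behind the queue, the structure is the same: data is staged into device buffers, the offloaded function runs on the accelerator, and only the results are copied back to the host.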

“New advances in FPGA technology have driven broader adoption, with Microsoft deploying FPGAs in the Bing search engine and Intel revealing plans to couple FPGAs to their CPUs with the purchase of Altera,” said Woo. “In addition, the recent announcement of the formation of the CCIX consortium, slated to focus on the development of a Cache Coherent Interconnect for Accelerators, and the Coherent Accelerator Processor Interface (CAPI), will enable further system improvements by allowing programmers to choose the most appropriate processors and accelerators to coherently share data.”

According to Woo, the rise of big data analytics and in-memory computing has resulted in ever-larger amounts of data being generated and analyzed. In many systems today, so much data is transferred across networks that data movement itself has become a critical performance bottleneck. Moving that data also consumes a significant amount of power, so much so that it is often more efficient to move the computation to the data instead.
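A minimal sketch of that trade-off is below, assuming a toy in-memory list stands in for a remote storage node (the names and records are hypothetical; a real deployment would cross a network or use a smart memory/storage device). It contrasts copying every record to the host with computing a small aggregate where the data resides and returning only the result.

```python
import random
import sys

# Toy "storage node": in a real system these records would live on a remote
# server or a near-data processing device, not in host memory.
NODE_DATA = [{"id": i, "temperature": random.uniform(10.0, 40.0)}
             for i in range(1_000_000)]

def average_moving_data():
    """Conventional approach: copy every record to the host and compute there.
    The bytes moved grow with the size of the dataset."""
    records = list(NODE_DATA)                        # stands in for a bulk transfer
    bytes_moved = sum(sys.getsizeof(r) for r in records)
    avg = sum(r["temperature"] for r in records) / len(records)
    return avg, bytes_moved

def average_near_data():
    """Near-data approach: run the reduction where the data lives and return
    only a (sum, count) pair. The bytes moved stay constant."""
    total = sum(r["temperature"] for r in NODE_DATA)  # executes "at" the node
    count = len(NODE_DATA)
    bytes_moved = sys.getsizeof(total) + sys.getsizeof(count)
    return total / count, bytes_moved

if __name__ == "__main__":
    avg1, moved1 = average_moving_data()
    avg2, moved2 = average_near_data()
    print(f"host-side compute: avg={avg1:.2f}, ~{moved1:,} bytes moved")
    print(f"near-data compute: avg={avg2:.2f}, ~{moved2:,} bytes moved")
```

Both paths produce the same average; only the volume of data crossing the (simulated) interconnect differs, which is exactly the saving near-data processing targets.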

“Rambus’ Smart Data Acceleration (SDA) Research Program is part of an industry-wide effort to re-examine the architecture of conventional computing platforms by reducing and even eliminating some modern bottlenecks,” Woo concluded. “With an increasing focus being placed on system architectures, it’s a really interesting time to be working on this initiative.”
Interested in learning more about our SDA platform? You can check out our research program page here.