Building a reconfigurable future with FPGAs

Dave Altavilla recently penned an article for Forbes about the critical role FPGAs are playing in helping to build reconfigurable data centers and advance the development of artificial intelligence (AI).

“Field Programmable Gate Arrays are a programmable chip technology that has been around for decades,” he explained. “Historically, FPGAs were used mainly as pre-production vehicles, allowing system designers to get to market quickly with new design ideas needing custom solutions.”

As Altavilla observes, FPGA technologies have advanced to the point that engineers can build much larger, more complex chips with pre-built modules for everything from processor cores to custom algorithm accelerators.

“Couple that with the on-the-fly reconfigurable nature of modern FPGAs and you can imagine how powerful the technology can be in the age of machine learning,” he continued. “With FPGA technology support, not only can the processor reconfigure the software algorithm, tuning it for what it has learned and its changing workload, but it can also reconfigure its own hardware – and quite literally adapt its brain, if you will, over time.”
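
To make the reconfiguration idea concrete, here is a minimal host-side sketch, assuming an FPGA exposed through the OpenCL API (which the major FPGA vendors support). The two bitstream file names and the notion of alternative "personalities" are hypothetical; on typical FPGA OpenCL stacks, creating a program from a new precompiled binary is what triggers reprogramming of the fabric.

```cpp
// Hypothetical sketch: swapping FPGA hardware "personalities" at runtime.
// Assumes two precompiled bitstreams (conv_v1.bin, conv_v2.bin) produced by
// a vendor FPGA OpenCL flow; names are illustrative. Error checks omitted.
#include <CL/cl.h>
#include <cstdio>
#include <cstdlib>
#include <vector>

static std::vector<unsigned char> load_file(const char* path) {
    std::FILE* f = std::fopen(path, "rb");
    if (!f) { std::perror(path); std::exit(1); }
    std::fseek(f, 0, SEEK_END);
    long n = std::ftell(f);
    std::fseek(f, 0, SEEK_SET);
    std::vector<unsigned char> buf(static_cast<std::size_t>(n));
    std::fread(buf.data(), 1, buf.size(), f);
    std::fclose(f);
    return buf;
}

int main() {
    cl_platform_id platform;
    cl_device_id device;
    clGetPlatformIDs(1, &platform, nullptr);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_ACCELERATOR, 1, &device, nullptr);
    cl_context ctx = clCreateContext(nullptr, 1, &device, nullptr, nullptr, nullptr);

    // On FPGA OpenCL stacks, loading a new precompiled binary reprograms the
    // fabric: same chip, different hardware, chosen as the workload changes.
    for (const char* bitstream : {"conv_v1.bin", "conv_v2.bin"}) {
        std::vector<unsigned char> bin = load_file(bitstream);
        const unsigned char* p = bin.data();
        std::size_t len = bin.size();
        cl_program prog = clCreateProgramWithBinary(ctx, 1, &device, &len, &p,
                                                    nullptr, nullptr);
        clBuildProgram(prog, 1, &device, "", nullptr, nullptr);
        // ... create kernels, run the workload, measure, decide what's next ...
        clReleaseProgram(prog);
    }
    clReleaseContext(ctx);
    return 0;
}
```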

Moreover, says Altavilla, fixed-silicon compute engines such as CPUs and even GPU accelerators “only get you so far” in machine learning applications such as speech recognition, translation and real-time network threat analysis.

“The applications are myriad, and throwing more standard compute cycles at the problem eventually reaches a point of diminishing returns. Both Microsoft and Intel, as well as other players in this space, are faced with addressing this new processing paradigm, and FPGAs are a very well-suited solution,” he added.

Commenting on the above, Steven Woo, VP of Systems and Solutions at Rambus, told us that a number of companies are indeed using FPGAs to optimize a range of systems and processes.

“At Hot Chips, for example, Baidu discussed the use of FPGAs to accelerate SQL queries. Meanwhile, DeePhi is looking towards reconfigurable devices such as FPGAs for deep learning,” said Woo. “In fact, DeePhi CEO and co-founder, Song Yao, believes FPGA-based deep learning accelerators already meet ‘most’ requirements, with high on-chip memory bandwidth, acceptable power and performance, as well as support for customized architecture.”

In terms of choosing between CPUs, GPUs and FPGAs, Woo emphasized that accurately comparing the performance of acceleration devices with disparate architectures will remain a significant industry challenge for the foreseeable future. Nonetheless, says Woo, understanding the advantages of a specific architecture is key to addressing this critical issue.

For example, GPUs are perhaps best suited for applications such as visualization, graphics processing, various types of scientific computation and machine learning. The combination of numerous parallel pipelines with high-bandwidth memory makes GPUs the compute engine of choice for these types of applications.
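
As a minimal illustration of the kind of work those parallel pipelines are built for, consider an element-wise operation where every output is independent. The C++17 sketch below (illustrative only) expresses it as a standard parallel algorithm; on a GPU the same pattern is written as a kernel spread across thousands of threads.

```cpp
// Illustrative only: SAXPY (y = a*x + y), the canonical data-parallel
// pattern. Every element is independent, so a GPU can spread the work
// across thousands of pipelines while high-bandwidth memory feeds them.
#include <algorithm>
#include <execution>
#include <vector>

int main() {
    const float a = 2.0f;
    std::vector<float> x(1 << 20, 1.0f);
    std::vector<float> y(1 << 20, 3.0f);

    // par_unseq asks the runtime to parallelize and vectorize; a GPU kernel
    // would assign one element (or a small slice) to each thread instead.
    std::transform(std::execution::par_unseq, x.begin(), x.end(), y.begin(),
                   y.begin(), [a](float xi, float yi) { return a * xi + yi; });
    return 0;
}
```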

For other types of workloads, says Woo, FPGAs may be the most appropriate choice. Indeed, when paired with traditional CPUs, FPGAs can provide application-specific hardware acceleration that is updatable over time. In addition, applications can be partitioned into segments that run most efficiently on the CPU and others that run most efficiently on the FPGA. As Woo points out, flexibility is only one of the advantages of FPGAs, which can also be attached to the same types of memories as CPUs.
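
Here is a sketch of what such CPU/FPGA partitioning can look like, assuming a high-level-synthesis (HLS) style flow in which a restricted C++ function is compiled into FPGA logic. The FIR filter and all names are illustrative, not taken from Woo's examples.

```cpp
// Minimal partitioning sketch, assuming an HLS-style flow that compiles
// restricted C++ into FPGA logic. The FIR filter is the kind of regular,
// deeply pipelinable loop that maps well to fabric; the surrounding
// control-heavy code stays on the CPU.
#include <array>
#include <cstddef>
#include <vector>

constexpr std::size_t TAPS = 8;

// FPGA-side candidate: fixed bounds, no dynamic allocation, streaming
// access. An HLS tool can unroll and pipeline this into a dedicated datapath.
void fir_kernel(const float* in, float* out, std::size_t n,
                const std::array<float, TAPS>& coeff) {
    for (std::size_t i = TAPS - 1; i < n; ++i) {
        float acc = 0.0f;
        for (std::size_t t = 0; t < TAPS; ++t)  // fully unrollable on FPGA
            acc += coeff[t] * in[i - t];
        out[i] = acc;
    }
}

// CPU-side: setup, I/O and branchy decision logic that would waste FPGA
// resources stay in software.
int main() {
    std::vector<float> samples(1024, 1.0f), filtered(1024, 0.0f);
    std::array<float, TAPS> coeff{};
    coeff.fill(1.0f / TAPS);  // simple moving average
    fir_kernel(samples.data(), filtered.data(), samples.size(), coeff);
    return 0;
}
```

The dividing line is the shape of the code: fixed loop bounds and streaming access suit the fabric, while irregular control flow is better left to the processor.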

“It’s actually a very flexible kind of chip. For a specific application or acceleration need, FPGAs are capable of providing optimized performance and improved energy efficiency. Moreover, the ease of developing something quickly to test out new concepts makes FPGAs an ideal platform for innovation,” he explained. “In fact, this is why some design teams start with an FPGA, then turn it into an ASIC to get a hardened version of the logic they put into an FPGA. They start with an FPGA to see if that market grows. That could justify the cost of developing an ASIC.”

In addition to offering versatility, says Woo, reprogrammable and reconfigurable FPGAs can be loaded with a wide range of algorithms without the difficult and costly design process typically associated with ASICs. Meanwhile, the flexible nature of FPGAs allows the silicon to be easily reconfigured to meet the demands of changing applications.

From a broader perspective, says Woo, the industry is going through a major learning cycle as new hardware and computing paradigms emerge.

“In machine learning applications, for example, we’re seeing GPUs being used for neural network training and FPGAs being used for inference. GPUs and FPGAs offer different advantages for various phases of the machine learning process, so both are being deployed for the most appropriate tasks,” he concluded.
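
One concrete reason inference maps well to FPGAs is precision: inference is commonly quantized to narrow integers, and an FPGA can instantiate exactly that datapath, with many copies operating in parallel. The sketch below is a plain C++ stand-in (hypothetical, not drawn from any system mentioned above) for the 8-bit multiply-accumulate at the core of such an accelerator.

```cpp
// Illustrative sketch: an 8-bit quantized dot product, the core operation
// of neural-network inference. On an FPGA, each multiply-accumulate becomes
// a small dedicated circuit, which is one reason inference suits FPGAs
// while training typically stays on GPUs in floating point.
#include <cstdint>
#include <cstdio>
#include <vector>

// 8-bit weights and activations accumulate into 32 bits to avoid overflow.
int32_t dot_q8(const std::vector<int8_t>& w, const std::vector<int8_t>& x) {
    int32_t acc = 0;
    for (std::size_t i = 0; i < w.size(); ++i)
        acc += static_cast<int32_t>(w[i]) * static_cast<int32_t>(x[i]);
    return acc;
}

int main() {
    std::vector<int8_t> w{10, -3, 7, 2}, x{1, 4, -2, 5};
    std::printf("q8 dot product: %d\n", dot_q8(w, x));  // 10 - 12 - 14 + 10 = -6
    return 0;
}
```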