There is little doubt that this is a new era for FPGAs.
While it is not news that FPGAs have been deployed in many different environments, particularly on the storage and networking side, there are fresh use cases emerging in part due to much larger datacenter trends. Energy efficiency, scalability, and the ability to handle vast volumes of streaming data are more important now than ever before. At a time when traditional CPUs are facing a future where Moore’s Law is less certain and other accelerators and custom ASICs are potential solutions with their own sets of expenses and hurdles, FPGAs are getting a serious second look for an ever-growing range of workloads.
FPGAs have always been a multi-market solution, but that market is now more varied in its compute requirements, yet more uniform than ever in its baseline demands for efficiency, speed, scalability, and usability. A great deal of work has gone into improving FPGAs along every one of these axes, not least programmability, long assumed to be the Achilles' heel of reprogrammable devices but less of one now than at any point in the history of FPGAs.
It is for these reasons that the editors of The Next Platform have produced a book covering the recent past, present, and future of FPGAs, which is being made available as a free download for a two-week period before it goes on sale on Amazon and other booksellers. The full book can be downloaded by following this link from January 16-27 only.
UPDATE: The book is now available in print on Amazon and other booksellers.
The ability to make the full version of FPGA Frontiers: New Applications in Reconfigurable Computing, 2017 Edition free is made possible by FPGA maker Xilinx, sponsor of the free week for this title from Next Platform Press.
In his fifteen years as Xilinx's Chief Technology Officer, Ivo Bolsens has watched reprogrammable devices move from being glue logic to the heart of full systems. "This is because it is now possible to incorporate so many rich capabilities into the devices and governing software framework. In terms of the future, we think we have also made it clear that the broader compute world has an opportunity with FPGAs as well given larger trends in the datacenter, most notably performance per watt and overall scalability. Scaling up and down at the same time as having programmability means more in the wake of these trends, and this of course maps well to FPGAs," he explains.
The goal for companies like Xilinx is to deliver high compute density and capability in an FPGA to meet these growing workload requirements. These two areas are the real starting point, especially as we look at emerging needs in both cloud and machine learning. Many applications in both of these spheres have data flow and streaming data processing needs, and this is exactly where FPGAs shine with less power consumption than other accelerator architectures or CPU only approaches. “This is because the data and compute are side by side without heavy, expensive data movement between memories—a feature that is most important for machine learning,” Bolsens notes.
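The dataflow advantage Bolsens describes can be made concrete with a small software model. The sketch below is purely illustrative and assumes nothing from any vendor toolchain: the `Fifo` type and stage functions are hypothetical names. It models an FPGA-style pipeline in which each stage consumes from one FIFO and produces into the next, so data and compute sit side by side instead of round-tripping through a shared memory hierarchy.

```cpp
#include <cstdint>
#include <deque>
#include <vector>

// Illustrative software model of an FPGA-style dataflow pipeline.
// On real hardware the two stages would run concurrently, each clock
// cycle passing one sample through the FIFO between them.
using Fifo = std::deque<int32_t>;

// Stage 1: scale each incoming sample (e.g., a fixed-point gain of 3).
void scale_stage(Fifo& in, Fifo& out) {
    while (!in.empty()) {
        out.push_back(in.front() * 3);
        in.pop_front();
    }
}

// Stage 2: running accumulation over the stream, emitting one partial
// sum per sample rather than waiting for the whole batch.
void accumulate_stage(Fifo& in, std::vector<int32_t>& out) {
    int32_t acc = 0;
    while (!in.empty()) {
        acc += in.front();
        in.pop_front();
        out.push_back(acc);
    }
}

std::vector<int32_t> run_pipeline(const std::vector<int32_t>& samples) {
    Fifo a(samples.begin(), samples.end()), b;
    std::vector<int32_t> result;
    scale_stage(a, b);            // in hardware: concurrent with the next stage
    accumulate_stage(b, result);
    return result;
}
```

Feeding `{1, 2, 3}` through this pipeline yields the running sums `{3, 9, 18}`; the point of the sketch is that no stage ever writes intermediate results back to a distant memory, which is the property that makes streaming workloads a natural fit for FPGAs.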
These same aspects that make FPGAs an attractive fit for emerging workloads map well to other areas where reprogrammable logic already plays a major role. Network processing, security, and deep packet inspection are all important areas for FPGAs. Also on the horizon are even larger trends feeding more work to the FPGA, adding greater levels of intelligence to both the network and storage layers. The opportunities here are huge; bringing compute closer to the storage and data for networking functions is a game-changing capability that CPUs cannot match on performance or efficiency. The emerging trend toward network function virtualization alone represents a major opportunity for FPGAs in tandem with CPUs, and although it is a different use case from machine learning, video transcoding, and other emerging workloads, it shows how and why the FPGA can hum against streaming data in a way other accelerators or CPU-only approaches cannot.
The historical challenge for FPGAs entering new and emerging markets has been the programming environment, but this problem is being solved in major step-changes. "Much of our research and development organization has been focused on the future of making reprogrammable devices programmable, and with OpenCL and critical insights we have had over the years to make these more approachable, we are now moving toward general purpose (in terms of usability) devices. To put this into some perspective, it is useful to understand where Xilinx began with FPGAs and where we are now for both the hardware and software end users of our devices," says Bolsens.
The challenge ten years ago was to bridge the gap between the hardware and software sides of the development house. “We wanted to make sure it was possible to unleash the full potential of the hardware platform without exposing the software people to all of the gritty details of the underlying hardware, beginning with an effort to move from Verilog and HDL to higher level abstractions,” Bolsens recalls. “A decade ago, we actually had to sell this concept inside of Xilinx—this idea that we could build hardware with fewer lines of code and with a high-level synthesis approach that allowed us to see functions and map them into hardware. Ten years ago, this was not a need. Today it is, and we have responded.”
"It has been a decade-long journey; from convincing the diehard hardware people to start using C to build hardware functions with fewer lines of code, to now serving people who are writing C++ code and want all of the hardware details abstracted away."
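The shift Bolsens describes, building hardware functions from C with fewer lines of code, can be sketched as below. This is a hedged illustration only: the function is a made-up example, and the pragmas appear as comments in the style of high-level synthesis directives rather than as output of any real toolchain, so the code remains portable C++.

```cpp
#include <cstdint>

// Sketch of the high-level-synthesis idea: a plain C function that an
// HLS compiler could map directly into a hardware datapath. In an
// actual HLS flow, directives such as
//   #pragma HLS PIPELINE II=1
// would ask the tool to accept one sample per clock cycle; here they
// are shown only as comments.
int32_t fir4(const int32_t sample[4]) {
    // Fixed coefficients: in hardware these become constant multipliers.
    static const int32_t coeff[4] = {1, 2, 2, 1};
    int32_t acc = 0;
    for (int i = 0; i < 4; ++i) {
        // #pragma HLS UNROLL  (illustrative: fully parallel multiply-adds)
        acc += sample[i] * coeff[i];
    }
    return acc;
}
```

A few lines of C like this stand in for what would otherwise be a substantially larger Verilog or VHDL description of the same multiply-accumulate datapath, which is the "fewer lines of code" argument in a nutshell.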
So much has changed in FPGA usability over the last decade that it is quite stunning to stop and take in the big picture. If you are a software developer in a datacenter environment today, writing C++ programs to run in the cloud with FPGA acceleration is now seamless. It is no longer a major hurdle to run on a heterogeneous platform with your code executing on the CPU, GPU, FPGA, or all of these together. You can now get the benefits of the FPGA without needing specialized knowledge of how to program one. That is a long way to have come for these once-disparate hardware and software camps.
Of course, there is still a great deal of work to do. One of the things that is still missing for FPGAs compared to other accelerators—and something companies like Xilinx understand they still need to address for some of the emerging workloads like machine learning and deep learning—is providing the libraries needed. “Today, people do not write software from scratch; they are using many libraries and compared to other platforms, this is where catching up needs to be done. The goal is to leverage our ecosystem and partners, and of course, leverage our in-house software expertise to build around this gap and ensure new application areas can quickly onboard with FPGA acceleration,” Bolsens says.
Even though we talk about programmability and the availability of key libraries, the biggest hurdle for FPGAs is also their most attractive point: they allow an immense amount of freedom. For those skilled in using FPGAs, this flexibility is the benefit, along with performance per watt. However, that tremendous degree of programmability means the user experience can be more complex. As Bolsens explains, "There are many ways to mess up but we are addressing this with additional investments in templates to make developers as efficient and productive as possible in key domains so they can adhere to best practices, avoid mistakes, and get around otherwise longer efforts riding a learning curve."
What sets FPGAs apart is not only the flexibility, energy efficiency, and price-performance profile we have described, but also the fact that FPGA makers understand interconnects better than anyone else in the semiconductor world. "We understand how to build an interconnect infrastructure that is programmable, can connect any function to any other function by programming a device, and with that kind of infrastructure in place, the potential applications abound," Bolsens notes.
We look forward to witnessing where FPGAs find a place in the new world of applications driven by big data, and we see a path to this ourselves. As we will highlight over the course of this book, there are numerous opportunities in emerging application areas, and despite some roadblocks and challenges, there is great hope on the FPGA horizon.
Please take advantage of the free week of FPGA Frontiers: New Applications in Reconfigurable Computing, 2017 Edition sponsored by Xilinx by following this link to register, or use the button below to start downloading the book.
[purchase_link id="6482" text="Add to Cart" style="button" color="blue"]
What I would like to know is how good the average resource efficiency is when using OpenCL or other abstraction languages on FPGAs compared to traditional Verilog, HDL, and hand-optimized gate-level synthesis.
I’ve seen mixed reports ranging from as low as 20% to a maximum of 70%, meaning they are leaving huge margins on the table. If the percentages are that bad, I’m not sure it is really worth the hassle of going to FPGAs at all.
Any chance of releasing this in an eBook format? It’d be a lot more convenient to read on an eReader or a tablet without the fixed PDF formatting.
Hello, when it goes to print, it will be found as a Kindle download from Amazon as well.
I don’t see a Kindle version on Amazon…?
They have still not approved the Kindle version on Amazon due to some charts that were resized and are still pending review. Will post here when it is available.