The Next Platform has been tracking momentum with FPGAs over the last several years with particular emphasis on the role programmable devices will continue to play in application acceleration as well as in computational storage and modern datacenter networks.
In just the last five years there has been significant traction for FPGAs to accelerate a wide range of enterprise and research applications in areas as diverse as genomics, AI inference, large-scale business analytics, in-house EDA, and video transcoding, among others. At the same time there have been shifts in FPGA accessibility, device diversification, and performance/memory/efficiency improvements, along with many new twists along the path to more productive programming.
On January 22, 2020, we will be gathering some of the preeminent leaders in FPGA application acceleration for an extended day of discussions as part of The Next FPGA Platform event. The PowerPoint-free event, held in San Jose, CA, will feature live on-stage technical interviews and topical panels with FPGA technologists on the end user, developer, and creator sides, moderated by The Next Platform team and select guest interviewers.
There are countless questions to explore over the course of the event—and plenty of others for individual attendees to discuss at breaks and during the networking time that follows. Here are just a few. We welcome your thoughts on some you would like to see addressed as well.
- One overarching theme will be focused on what is ahead for FPGAs in the datacenter. While the event will be primarily emphasizing application acceleration, we will discuss the concept of the FPGA-laden datacenter, with devices ranging from compute accelerators to where they fit in the network, storage, and at the edge. What does this mean for how future devices are designed?
- To take that one step further, here’s the big question: when will FPGAs have a representative share of the market, particularly in application acceleration? Remember when Intel bought Altera and made the bold claim that inference would soon be one-third of the cloud datacenter workload? Has that happened? And is scalable inference the key to future FPGA acceleration in the datacenter at scale? Or is the answer more nuanced? Think of it this way: the cost of bringing a server-class chip to market is around a half billion dollars, and it could be close to a billion over the next several years. At what point will ASICs stop making financial sense, opening the door to devices that do inference one moment and another workload the next on the same silicon? Will economics or workloads drive the future of FPGA penetration in the market?
- Present and future (related to the above): we have FPGAs in the client, at the edge, in the datacenter for application acceleration, inside SSDs/storage, and in the network. What is the distribution of these devices now, and with all of these bulleted points in mind for the day, what will it be in five years?
- With that in mind, we want to survey the current state of FPGA-accelerated applications. Where is the most traction and why? What is the overhead in the face of other options (GPUs/CPUs/ASICs) and what are the results? We will weigh the challenges of deployment against the benefits of programmability, cost over time, efficiency, and suitability to broader workloads, pulling in examples from inference, transcoding, genomics, and other areas. And why are FPGAs missing in key areas? For instance, why have we yet to see an FPGA-accelerated supercomputer? What happened in industries like oil and gas? What is the momentum in financial services and how has that shifted over time?
- We are entering into 2020 with a range of access options for FPGAs. From traditional on-prem to cloud-based instances, including the AWS F1 instances and those from Nimbix and smaller providers, what has traction been? What workloads are finding a home on the cloud? What about in-house FPGA development internally among cloud providers (Microsoft a good example here with Brainwave, among other projects), and how does this differ from what is happening with the biggest clouds in China? Are they deploying FPGAs in novel ways comparatively? Is there an appetite for FPGAs on cloud platforms worldwide? Why are cloud providers choosing to provide FPGAs in the first place? Where is the demand and what are the next steps to keep these efforts fed from users, vendors, and programmers?
- What are some of the key technology innovations that will push the capabilities of new devices? Here we will certainly discuss the role of high bandwidth memory as well as how key companies attempt to future-proof the FPGA in terms of packaging technologies on the horizon and, more broadly, in terms of their ability to decrease latency in the datacenter. In other words, we will look at crucial hardware technology trends and what they mean for future devices and the users/programmers who will consume them. Finally on this note, we cannot completely avoid cost discussions. What are the brightest lights in FPGA application acceleration and how do technology creators follow those in 2020 and beyond?
- And no forward-looking FPGA gathering could be complete without extensive discussions about programmability, developer enablement, tools, and frameworks. This is where the conversation risks going in several directions given the many discrete frameworks to operate within from both vendors and the open source community, which we believe will gain traction over the course of the next several years.
If you are reading these thoughts about where we are heading with The Next FPGA Platform and answering the questions to yourself or coming up with new ones, you’re in for a great day with us. This is the first time we’ve collected folks around FPGAs and we’re eager to see what happens conversationally throughout the day. It won’t be boring, to say the least.
Unlike traditional events that feature an unending stream of one-sided PowerPoint presentations, leaving attendees to form their own synthesis out of disparate bits, we will start at high-level points of synthesis and work backwards, letting the interviews break down several questions over the flow of the day. This format keeps marketing to a minimum and keeps the day on track: focused, conversational, and unfolding more like a narrative on the state of FPGA acceleration than a choppy series of vendor slides without meaningful context.
The Next I/O Platform and The Next AI Platform, both of which focused on large-scale compute, storage, and network infrastructure, sold out rather quickly. Make sure you get registered, as space is limited for this one-of-a-kind day that brings depth into the bigger picture and allows those present to help guide the conversation.
The agenda will be posted in the next few weeks. Stay tuned by subscribing to The Next Platform so you can be among the first to see the lineup.
Secure your seat, as this is a limited-seating venue!
*If you have questions about the program or being involved, please email event chair Nicole Hemsoth at nicole at nextplatform dot com. For general questions, email our events pro, Abby Priest, at abby at nextplatform dot com.