On The Spearpoint Of FPGA And The Cloud

Sometimes markets need a particular technology and they are impatient for it, and sometimes technologies get ahead of the immediate needs of customers and their creators have to hang on until the time is right. So it is with FPGAs in the cloud, which is actually the confluence of two different technology waves that are now, finally, getting some traction together.

This is what we learned from Steve Hebert, co-founder and chief executive officer of Nimbix, an HPC cloud startup founded in 2010 that thought FPGAs were going to take off but watched as GPUs became the dominant accelerator – for a while, at least. Nothing is forever in the IT sector because the bottlenecks keep shifting and the technologies keep changing.

“When I started Nimbix, I came out of Altera,” Hebert recalls to The Next Platform. “I was with an FPGA company, and we endeavored to build what we called, at our launch in 2010, the Nimbix Accelerated Compute Cloud. And the reason was that when we looked at the market at the time, Amazon Web Services was the cloud provider, and it was virtual CPUs and memory and it was for web services – I used to joke and call that ‘leftover computing,’ but we saw a market case for the growth in HPC. It used to be that HPC was nichey – some oil and gas companies, some finance companies, the national labs. Our view was that in the future, however far or close that future was, more people were going to need beefy compute to do their work.”

The inevitability of beefy compute, and particularly beefy compute on the cloud, was something that Hebert and Nimbix co-founder Robert Sherrard, who has built and run many different kinds of distributed systems over the decades, believed would happen for two reasons. First, the slowing of Moore’s Law would lead to the demise of general purpose computing and the rise of heterogeneity, and second, the expense of compute would lead to a shared utility model. Both of these viewpoints were informed by the semiconductor industry that Hebert and Sherrard participated in, and as Hebert correctly points out, the major chip makers all used to have their own foundries and assembly plants, and once it started costing $1 billion to build the next fab, companies were happy to sell off their fabs and move to a foundry model – which is akin to a cloud for chip etching. Nimbix made a bet that FPGAs were going to be important to the future of compute, and a similar bet on GPUs – and hence the idea of the Accelerated Compute Cloud.

“No one had an accelerator-based cloud in our early days, but we were going to build this FPGA cloud,” continues Hebert. “We knew that FPGAs are hard, and there are not a lot of Verilog or VHDL coders out there to write algorithms for them. But our thesis was that the tools are going to get smarter and the FPGA silicon is going to keep getting better, and therefore reconfigurability should drive an opportunity for FPGAs that is unique. We also saw that it was going to take longer for the market to really adopt it. GPUs, though, were coming online as accelerators at this time, and so the first implementation of accelerated compute at Nimbix was on GPUs. All of our early customers were running HPC workloads as well as ray tracing, real-time rendering, and other media stuff – not yet deep learning.”

Unlike many clouds, Nimbix started out on bare metal, without a server virtualization hypervisor getting in the way and sapping performance. And frankly, the server hypervisors of a decade ago didn’t know how to carve up and allocate work for FPGAs or GPUs, so there really wasn’t much of a choice other than to create its own bare metal environment. Nimbix has been pushing Xilinx and Altera to build accelerator cards that feel more standard, the way a video card from Nvidia or AMD does, and given this, Hebert does not think it is a coincidence that there is stronger uptake for FPGAs at this point a decade later. It doesn’t hurt that the programming environment for FPGAs is getting better – and holds the promise of getting better still.

The irony, of course, is that once the cloud took off in people’s imaginations, neither GPUs nor FPGAs could easily run traditional HPC simulation and modeling workloads, so Nimbix was asked by customers to build traditional CPU-based clusters for these jobs – and then, as HPC workloads were GPU-enabled and machine learning based on GPUs exploded onto the scene, Nimbix could catch these on the ricochet. Now, as FPGAs are getting more attention and some traction as an accelerator platform – particularly for applications that are highly dependent on throughput computing and low latency – Nimbix is ideally positioned to benefit from this next wave in FPGA computing.

Startups point the way, as they often do. One Nimbix customer, ByteLake, is a software development company based in Poland that has not only created a federated machine learning stack for IoT, but has also created a computational fluid dynamics application – called CFD Suite, naturally enough – that weaves machine learning algorithms into the CFD kernels and accelerates them with Alveo FPGA cards from Xilinx.
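
ByteLake has not published the internals of CFD Suite (as Hebert notes below, he has not yet seen the code), but the general pattern of weaving machine learning into CFD kernels is easy enough to sketch: a trained model predicts a near-converged field, and the conventional iterative solver then polishes that prediction in far fewer steps than it would need from a cold start. What follows is a minimal, hypothetical sketch of that warm-start idea in Python – every name in it is illustrative, and the stand-in "surrogate" is the inference pass that an accelerator card such as an Alveo would be asked to run at high throughput.

```python
import numpy as np

# Manufactured 2D Poisson problem: pick a known field, derive its discrete
# Laplacian as the right-hand side, so we know what the solver should recover.
n = 64
x = np.linspace(0.0, 1.0, n)
X, Y = np.meshgrid(x, x, indexing="ij")
p_true = np.sin(np.pi * X) * np.sin(np.pi * Y)
rhs = np.zeros_like(p_true)
rhs[1:-1, 1:-1] = (p_true[2:, 1:-1] + p_true[:-2, 1:-1] +
                   p_true[1:-1, 2:] + p_true[1:-1, :-2] -
                   4.0 * p_true[1:-1, 1:-1])

def jacobi(rhs, guess, tol=1e-6, max_iters=50_000):
    """Plain Jacobi iteration, a stand-in for the pressure-correction
    solve inside many CFD codes. Returns the field and iteration count."""
    p = guess.copy()
    for it in range(max_iters):
        p_new = p.copy()
        p_new[1:-1, 1:-1] = 0.25 * (p[2:, 1:-1] + p[:-2, 1:-1] +
                                    p[1:-1, 2:] + p[1:-1, :-2] -
                                    rhs[1:-1, 1:-1])
        if np.max(np.abs(p_new - p)) < tol:
            return p_new, it + 1
        p = p_new
    return p, max_iters

def surrogate_guess():
    """Stand-in for a trained network's inference pass. To mimic a good
    learned prediction, return the known answer plus a little noise."""
    noisy = p_true + 0.01 * np.random.default_rng(0).standard_normal(p_true.shape)
    noisy[0, :] = noisy[-1, :] = 0.0  # keep the boundary conditions honest
    noisy[:, 0] = noisy[:, -1] = 0.0
    return noisy

_, cold = jacobi(rhs, np.zeros_like(rhs))
_, warm = jacobi(rhs, surrogate_guess())
print(f"Jacobi iterations from a zero guess: {cold}; from the ML-style guess: {warm}")
```

The payoff in this toy version is the drop in iteration count from the warm start; in a production CFD code, the bet is that cheap, high-throughput inference on an FPGA buys back many expensive solver sweeps.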

“Computational fluid dynamics happens to be one of our largest workloads for CPUs at Nimbix, and we want to know what we could do if we could drive FPGAs into that workload,” says Hebert. “What kind of economic benefit would we see, what kind of speed and throughput would we get, and how does that change the CFD landscape? The jury is out – I haven’t seen the code and I have not seen the performance figures yet. But that’s an example of an area that we are exploring aggressively.”

There are feedback loops all over the place here. As Hebert points out, the commercial CFD market is largely controlled by Siemens and ANSYS, with the open source OpenFOAM tool also on the rise. So ByteLake is taking on some big players, and having FPGA acceleration could be a huge differentiator.

“Any one of these companies can come in with something disruptive, and that forces the change,” Hebert says. “Because one of them can run faster, this can catalyze the big commercial players to start investigating their code bases and see how they could make their applications go faster using FPGA capabilities. So Nimbix is actually at the center of it, because we have relationships with the big software companies and the customers who are running those codes, as well as the technology providers who are building the hardware. We centralize all of this in our cloud, and it is a great way to create and foster and accelerate the ecosystem to build accelerated applications.”

And that was the Nimbix plan all along.

To our way of thinking, what Nimbix is building is a preview of the hybrid and converged nature of future applications and the hardware that supports them. Not only will it be hard to tell the difference between HPC and AI some years hence, but systems could end up using all kinds of compute engines across a workflow – CPUs, GPUs, and FPGAs – leveraging the unique benefits that each brings to the system.
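
To make that concrete, here is a minimal, hypothetical sketch of what such a mixed-engine workflow could look like from the orchestration side: each stage of a job declares which compute engine suits it, and a scheduler dispatches accordingly. The stage functions and engine labels are purely illustrative, not any particular cloud’s API.

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Stage:
    name: str
    engine: str                 # "cpu", "gpu", or "fpga" in this sketch
    run: Callable[[Any], Any]

def preprocess(samples):        # parsing and feature extraction: CPU work
    return [s * 2.0 for s in samples]

def simulate(samples):          # dense numerical kernels: a GPU's sweet spot
    return sum(samples) / len(samples)

def score(value):               # low-latency streaming inference: the FPGA pitch
    return value > 0.0

pipeline = [
    Stage("preprocess", "cpu", preprocess),
    Stage("simulate", "gpu", simulate),
    Stage("score", "fpga", score),
]

payload: Any = [1.0, -2.0, 4.5]
for stage in pipeline:
    # A real scheduler would route each stage to the right device queue;
    # here we just log the routing decision and run the stage in-process.
    print(f"dispatching stage {stage.name!r} to the {stage.engine.upper()}")
    payload = stage.run(payload)
print("final result:", payload)
```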

Hebert will be joining us on January 22 at the Glass House in San Jose to participate in The Next FPGA Platform event to talk about this and more. You can register for the event at this link, and we hope to see you there.
