RISC-V Inching Closer to Reality at Scale

Back in the early 1990s, the common view was that there was little money to be made in open source. As the wave of Linux distributions rolled forth, however, that view was quickly disproven, setting off the decades-long chain of companies that have secured their footing, funding, and futures on the back of open software.

That same trend is moving into hardware, creating what might be the next run for companies touting open source designs. From projects like Open Compute (which we will be reporting on from the OCP Summit this week) and beyond, there appears to be demand for freely accessible hardware that is not locked behind licenses. With this in mind, it is finally time to start watching one very early stage effort toward open and freely accessible instruction set architectures, most notably RISC-V. While for now its best opportunity is rooted in the Internet of Things and mobile devices, there are some noteworthy projects coming down the pike that are starting to show what a server-class approach to open chip architectures might look like.

“While instruction set architectures (ISAs) may be proprietary for historical or business reasons, there is no good technical reason for the lack of free, open ISAs,” says a report authored by UC Berkeley RISC pioneers David Patterson and Krste Asanovic. “Companies with successful ISAs…have patents on quirks of their ISAs, which prevent others from using them without licenses. Negotiations take 6-24 months and they can cost $1M to $10M, which rules out academia and others with small volumes.” They note that even those who secure an ARM license are simply paying for the right to use the designs; the ability to create new ARM-based cores is limited to a very small number of very large companies.

Of course, whether or not there can be commercial momentum for something like this remains to be seen. As noted above, open source business booms aren’t always expected; they arrive when demand for something truly different meets something actually feasible. RISC-V in its current incarnation has garnered several backers for its foundation, including Google, LG, BAE Systems, and others. While Kurt Keville, a RISC-V advocate from MIT who spoke to a group last week at the HPC Advisory Council Stanford meeting, said he is not sure what Google is doing with RISC-V, the trend toward architectures that emphasize openness is gaining momentum in warehouse-scale datacenters, and, as always, the architecture retains its strength in academia.

All of these efforts could pay off for extreme scale HPC in the future, Keville says. “There is a path to exascale that goes through RISC-V,” he notes, pointing to the power consumption bottlenecks at the heart of worries about next-generation supercomputers. “Locality is so important for energy efficient chips. If you’re moving data, you’re taking a penalty. The RISC-V promise is smaller binaries, less registers to fill, and less data movement.” He also explains that with X86, users are dealing with 3,000 instructions, ARM v8 has 1,000, but RISC-V has only 177. That simplicity has been useful for training undergraduates quickly, he says, but offers a great deal more in the future.

Of course, to bring anything meaningful to an ecosystem driven by X86 and, increasingly, ARM and OpenPower, work needs to be done to create server-class chips that can withstand workloads beyond the traditional SoC areas where RISC-V is already finding footing.

There are some interesting projects set to push RISC-V forward, including GRVI Phalanx from Gray Research (led by FPGA pioneer Jan Gray), which presents a massively parallel RISC-V FPGA “accelerator-accelerator.” Interestingly, if there is any geographic region that is putting its eggs in the RISC-V basket, it’s India, which just pushed another $45 million toward a project to fund a 64-bit RISC-V processor in the Shakti processor family. Another notable RISC-V processor project, out of the ASPIRE Lab at UC Berkeley, is the Berkeley Out-of-Order Machine (BOOM), which is led by the authors of the report cited above. Given that this is where the RISC momentum started, it’s no surprise the teams there are developing in several directions, with at least four individual processor projects.

Among the other research projects working toward this end is the PULPino project at ETH Zurich, which aims to create a parallel, low-power chip design aimed first at IoT but with future applications to larger-scale server workloads. Another project, Pydgin for RISC-V, a Cornell University effort with Google collaboration, aims to create a fast and productive instruction set simulator. MIT and other universities also have pending projects aimed beyond small-scale computing, but if this activity shows anything, it is that there may be a viable alternative in the market over the next several years, at least for ultra-low power systems.

Even though the market is not there yet, critical pieces are still missing on the performance and capability side of RISC-V for future HPC workloads, and full OS support has not yet arrived, we have to keep an eye on all the underdogs, and to some extent root for their success. This market needs more variation on the chip architecture front, and with the major vendors backing only a few key architectures, and with licenses that allow the creation of new cores locked down to only a few companies, something like this may be the Linux story of the chip world. It may seem like a stretch now, but that is what everyone thought of a certain open source operating system that came about just as frustrations with limited options and flexibility were hitting a peak. While the market may not be at that point yet, it is coming. We expect that shift will start with the hyperscalers and move through the industry, and we will be keeping an eye on what Google and others are doing with RISC-V over the course of the year.
