
Strong Showing for First Experimental RISC-V Supercomputer

A European team of university students has cobbled together the first RISC-V supercomputer capable of showing balanced power consumption and performance.

More importantly, it demonstrates a potential path forward for RISC-V in high performance computing and, by extension, another chance for Europe to shed its total reliance on American chip technologies beyond Arm-based architectures.

The “Monte Cimone” cluster will not be crunching massive weather simulations or the like anytime soon since it’s just an experimental machine. That said, it does show that performance sacrifices for lower power envelopes aren’t necessarily as dramatic as many believe.

The six-node cluster, built by folks at Università di Bologna and CINECA, the largest supercomputing center in Italy, was part of a broader student cluster competition to showcase various elements of HPC performance beyond just floating-point capability. The cluster-building team, called NotOnlyFLOPs, wanted to establish the power-performance profile of RISC-V when using SiFive’s Freedom U740 system-on-chip.

That 2020-era SoC has five 64-bit RISC-V CPU cores – four U7 application cores and an S7 system management core – 2MB of L2 cache, gigabit Ethernet, and various peripheral and hardware controllers. It can run up to around 1.4GHz.

Here’s a look at the components as well as feeds and speeds of Monte Cimone:

[Image: An overhead view of each node, showing the two SiFive Freedom SoC boards]

The Freedom SoC motherboards are essentially HiFive Unmatched boards from SiFive. Two of the six compute nodes are outfitted with an InfiniBand host channel adapter (HCA), as is standard in most supercomputers. The goal was to deploy 56Gb/s InfiniBand and use RDMA to eke out what I/O performance was possible.

This is ambitious for a young architecture and it wasn’t without a few hiccups. “PCIe Gen 3 lanes are currently supported by the vendor,” the cluster team wrote.

“The first experimental results show that the kernel is able to recognize the device driver and mount the kernel module to manage the Mellanox OFED stack. We are not able to use all the RDMA capabilities of the HCA due to yet-to-be-pinpointed incompatibilities of the software stack and the kernel driver. Nevertheless we successfully run an IB ping test between two boards and between a board and an HPC server showing that full Infiniband support could be feasible. This is currently a feature under development.”
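As a rough illustration of what that kind of driver sanity check involves, the sketch below (not the Monte Cimone team's code, and only a guess at their workflow) uses the standard libibverbs API to ask whether an HCA shows up through the verbs stack and whether its first port has come up. It assumes libibverbs from the OFED stack is installed and compiles with gcc check_hca.c -o check_hca -libverbs.

/* Minimal sketch: enumerate RDMA devices and report the state of port 1.
 * If the HCA or its kernel driver is missing, no devices will be listed. */
#include <stdio.h>
#include <infiniband/verbs.h>

int main(void) {
    int num_devices = 0;
    struct ibv_device **devices = ibv_get_device_list(&num_devices);
    if (devices == NULL || num_devices == 0) {
        fprintf(stderr, "no RDMA devices found: HCA or kernel driver not present\n");
        return 1;
    }

    for (int i = 0; i < num_devices; i++) {
        struct ibv_context *ctx = ibv_open_device(devices[i]);
        if (ctx == NULL)
            continue;

        struct ibv_port_attr port;
        if (ibv_query_port(ctx, 1, &port) == 0) {
            printf("%s: port 1 is %s\n",
                   ibv_get_device_name(devices[i]),
                   port.state == IBV_PORT_ACTIVE ? "active" : "not active");
        }
        ibv_close_device(ctx);
    }

    ibv_free_device_list(devices);
    return 0;
}

Higher-level tools such as ibping from the standard InfiniBand diagnostics serve the same purpose; the point is simply that the device has to be visible at this layer before any RDMA performance work can start.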

The HPC software stack proved easier than one might expect. “We ported on Monte Cimone all the essential services needed for running HPC workloads in a production environment, namely NFS, LDAP and the SLURM job scheduler. Porting all the necessary software packages to RISC-V was relatively straightforward, and we can hence claim that there is no obstacle in exposing Monte Cimone as a computing resource in an HPC facility,” the team noted.

While it’s a noteworthy architectural addition to the supercomputing ranks, a RISC-V cluster like this is unlikely to make it onto the Top 500 list of the world’s fastest systems. Its design spec is as a low-power workhorse, not a floating point monster.

As the development team notes in their detailed description of the system, “Monte Cimone does not aim to achieve strong floating-point performance, but it was built with the purpose of ‘priming the pipe’ and exploring the challenges of integrating a multi-node RISC-V cluster capable of providing an HPC production stack including interconnect, storage and power monitoring infrastructure on RISC-V hardware.”

E4 Computer Engineering served as the integrator and partner on the “Monte Cimone” cluster. This will pave the way for further testing of the RISC-V platform itself, along with its ability to play well with other architectures, an important element since we are not likely to see an exascale-class RISC-V system for at least the next few years.

According to E4, “Cimone enables developers to test and validate scientific and engineering workloads in a rich software stack, including development tools, libraries for message-passing programming, BLAS, FFT, drivers for HS networks and I/O devices. The objective is to achieve a future-ready position capable of addressing and leveraging the features of the RISC-V ISA for scientific and engineering applications and workloads in an operational environment.”
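To make that concrete, the following is a hedged sketch of the kind of MPI smoke test one might run on a machine like this, confirming that ranks launch across nodes and that a small message can cross the interconnect. It is not code from E4 or the Monte Cimone team, and it assumes an MPI implementation (such as OpenMPI or MPICH) and a C toolchain have been built for the RISC-V boards.

/* Minimal MPI smoke test: print where each rank runs, then bounce a token
 * between ranks 0 and 1 to exercise the interconnect. Build with mpicc. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    char host[MPI_MAX_PROCESSOR_NAME];
    int len;
    MPI_Get_processor_name(host, &len);
    printf("rank %d of %d running on %s\n", rank, size, host);

    if (size >= 2 && rank < 2) {
        int token = 42;
        if (rank == 0) {
            MPI_Send(&token, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(&token, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("rank 0 received the token back: %d\n", token);
        } else {
            MPI_Recv(&token, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(&token, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
        }
    }

    MPI_Finalize();
    return 0;
}

A job like this would typically be submitted through the SLURM scheduler the team already ported, which is exactly the sort of end-to-end exercise that shows the stack works as a whole.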

Dr. Daniele Cesarini, HPC specialist at CINECA, said: “As a supercomputing center, we are very interested in the RISC-V technology to support the scientific community. We are excited to contribute to the RISC-V ecosystem supporting the installation and tuning of widely-used scientific codes and mathematical libraries to push forward the development of high-performance RISC-V CPUs. We believe that Monte CIMONE will be the harbinger of the next generation of supercomputers based on RISC-V technology and we will continue to work in synergy with E4 Computer Engineering and the Università di Bologna to prove that RISC-V is ready to stay on the shoulder of the HPC giants.”

There is plenty of RISC-V momentum in Europe in terms of both funding and projects, although the fruits of those labors could take years to materialize. Now even Intel is eyeing RISC-V for the future of supercomputing. It’s all a RISC-Y (you saw that coming) bet, but with few native architectural options in Europe, at least picking an early winner is easy.
