A European team of university students has cobbled together the first RISC-V supercomputer capable of balancing power consumption and performance.
More importantly, it demonstrates a potential path forward for RISC-V in high performance computing and, by extension, another shot for Europe to shed total reliance on American chip technologies beyond Arm-driven architectures.
The “Monte Cimone” cluster will not be crunching massive weather simulations or the like anytime soon since it’s just an experimental machine. That said, it does show that performance sacrifices for lower power envelopes aren’t necessarily as dramatic as many believe.
The six-node cluster, built by folks at Università di Bologna and CINECA, the largest supercomputing center in Italy, was part of a broader student cluster competition to showcase various elements of HPC performance beyond just floating-point capability. The cluster-building team, called NotOnlyFLOPs, wanted to establish the power-performance profile of RISC-V when using SiFive’s Freedom U740 system-on-chip.
That 2020-era SoC has five 64-bit RISC-V CPU cores – four U7 application cores and an S7 system management core – 2MB of L2 cache, gigabit Ethernet, and various peripheral and hardware controllers. It can run up to around 1.4GHz.
Here’s a look at the components as well as feeds and speeds of Monte Cimone:
- Six dual-board servers with a form factor of 4.44 cm (1U) high, 42.5 cm wide, and 40 cm deep. Each board follows the industry-standard Mini-ITX form factor (170 mm × 170 mm);
- Each board features one SiFive Freedom U740 SoC and 16GB of 64-bit DDR memory operating at 1866 MT/s, plus a PCIe Gen 3 x8 bus operating at 7.8 GB/s, one gigabit Ethernet port, and USB 3.2 Gen 1 interfaces (a quick sanity check of these bandwidth figures follows the list);
- Each node has an M.2 M-key expansion slot occupied by a 1TB NVMe 2280 SSD holding the operating system. A microSD card is inserted in each board and used for UEFI booting;
- Two 250 W power supplies are integrated inside each node to support the hardware and future PCIe accelerators and expansion boards.
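Those bandwidth figures are easy to sanity-check. Below is a minimal back-of-the-envelope sketch in Python; the DDR and PCIe numbers come straight from the spec list above, while the 128b/130b PCIe Gen 3 line encoding is a standard detail we are assuming rather than one the team states.

```python
# Back-of-the-envelope check of the per-board bandwidth figures quoted above.

# DDR memory: 64-bit bus at 1866 MT/s
bus_width_bytes = 64 / 8              # 8 bytes moved per transfer
transfers_per_second = 1866e6         # 1866 MT/s
mem_bw = bus_width_bytes * transfers_per_second
print(f"Peak DDR bandwidth: {mem_bw / 1e9:.1f} GB/s")             # ~14.9 GB/s

# PCIe Gen 3 x8: 8 GT/s per lane with 128b/130b line encoding
# (the encoding overhead is our assumption, standard for Gen 3)
lanes = 8
transfers_per_lane = 8e9              # 8 GT/s
encoding_efficiency = 128 / 130
pcie_bw = lanes * transfers_per_lane * encoding_efficiency / 8    # bits -> bytes
print(f"Peak PCIe Gen 3 x8 bandwidth: {pcie_bw / 1e9:.2f} GB/s")  # ~7.88 GB/s
```

The PCIe result lands at 7.88 GB/s, which matches the 7.8 GB/s figure in the list once rounded down.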
An overhead view of each node, showing the two SiFive Freedom SoC boards
The Freedom SoC motherboards are essentially HiFive Unmatched boards from SiFive. Two of the six compute nodes are outfitted with an InfiniBand host channel adapter (HCA), the interconnect most supercomputers use. The goal was to deploy 56Gb/s InfiniBand to allow RDMA to eke out what I/O performance was possible.
This is ambitious for a young architecture and it wasn’t without a few hiccups. “PCIe Gen 3 lanes are currently supported by the vendor,” the cluster team wrote.
“The first experimental results show that the kernel is able to recognize the device driver and mount the kernel module to manage the Mellanox OFED stack. We are not able to use all the RDMA capabilities of the HCA due to yet-to-be-pinpointed incompatibilities of the software stack and the kernel driver. Nevertheless, we successfully ran an IB ping test between two boards and between a board and an HPC server, showing that full InfiniBand support could be feasible. This is currently a feature under development.”
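The team doesn't publish its test procedure, but the first half of that claim, the kernel recognizing the device, is straightforward to verify on any Linux node running the OFED stack: registered RDMA devices appear under /sys/class/infiniband. Here's a minimal sketch of such a check; it is our illustration, not the team's code, and it confirms device registration only, not working RDMA.

```python
# Minimal sketch: confirm the kernel has registered an InfiniBand HCA.
# This only checks device registration, not RDMA functionality, which is
# where the team reported their remaining software-stack issues.
from pathlib import Path

IB_SYSFS = Path("/sys/class/infiniband")

def list_hcas():
    """Return the RDMA devices the kernel has registered, with port state."""
    if not IB_SYSFS.is_dir():
        return []  # no RDMA devices (or OFED/kernel modules not loaded)
    devices = []
    for dev in sorted(IB_SYSFS.iterdir()):
        states = {}
        ports = dev / "ports"
        if ports.is_dir():
            for port in sorted(ports.iterdir()):
                state_file = port / "state"
                if state_file.is_file():
                    # the file reads e.g. "4: ACTIVE" once the link is up
                    states[port.name] = state_file.read_text().strip()
        devices.append((dev.name, states))
    return devices

if __name__ == "__main__":
    for name, states in list_hcas() or [("(no HCA found)", {})]:
        print(name, states)
```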
The HPC software stack proved easier than one might expect. “We ported on Monte Cimone all the essential services needed for running HPC workloads in a production environment, namely NFS, LDAP and the SLURM job scheduler. Porting all the necessary software packages to RISC-V was relatively straightforward, and we can hence claim that there is no obstacle in exposing Monte Cimone as a computing resource in an HPC facility,” the team noted.
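To make that claim concrete, a login-node smoke test for the three services the team names might look like the sketch below. The sinfo and getent commands are standard SLURM and glibc tools; the /home mount point and the test account name are placeholders of ours, since Monte Cimone's actual configuration isn't public.

```python
# Sketch: smoke-test the three services ported to Monte Cimone
# (SLURM, NFS, LDAP) from a node. Paths and account names are placeholders.
import shutil
import subprocess

def slurm_up():
    """SLURM: 'sinfo' queries partitions/nodes; success means the controller answers."""
    if shutil.which("sinfo") is None:
        return False
    return subprocess.run(["sinfo", "-h"], capture_output=True).returncode == 0

def nfs_mounted(mount_point="/home"):          # placeholder mount point
    """NFS: look for an nfs-type filesystem at the expected mount point."""
    with open("/proc/mounts") as mounts:
        return any(
            fields[1] == mount_point and fields[2].startswith("nfs")
            for fields in (line.split() for line in mounts)
        )

def ldap_resolves(account="testuser"):         # placeholder account
    """LDAP: an NSS lookup via getent exercises the directory service."""
    return subprocess.run(
        ["getent", "passwd", account], capture_output=True
    ).returncode == 0

if __name__ == "__main__":
    print("SLURM:", slurm_up())
    print("NFS  :", nfs_mounted())
    print("LDAP :", ldap_resolves())
```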
While it’s a noteworthy architectural addition to the supercomputing ranks, a RISC-V cluster like this is unlikely to make it onto the Top 500 list of the world’s fastest systems. It was designed as a low-power workhorse, not a floating-point monster.
As the development team notes in their detailed description of the system, “Monte Cimone does not aim to achieve strong floating-point performance, but it was built with the purpose of ‘priming the pipe’ and exploring the challenges of integrating a multi-node RISC-V cluster capable of providing an HPC production stack including interconnect, storage and power monitoring infrastructure on RISC-V hardware.”
E4 Computer Engineering served as the integrator and partner on the “Monte Cimone” cluster. This will pave the way for further testing of the RISC-V platform itself, along with its ability to play well with other architectures, an important consideration since we are unlikely to see an exascale-class RISC-V system for at least the next few years.
According to E4, “Cimone enables developers to test and validate scientific and engineering workloads in a rich software stack, including development tools, libraries for message-passing programming, BLAS, FFT, drivers for HS networks and I/O devices. The objective is to achieve a future-ready position capable of addressing and leveraging the features of the RISC-V ISA for scientific and engineering applications and workloads in an operational environment.”
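None of E4's validation codes are public, but the classic first exercise for the message-passing stack it mentions is an MPI hello-world plus one collective. A minimal version in Python is below; it assumes an MPI implementation and the mpi4py bindings build on the riscv64 software stack, which the article doesn't confirm.

```python
# Canonical MPI hello-world: each rank reports its identity and host.
# Launch with e.g.:  mpirun -np 8 python3 hello_mpi.py
# Assumes an MPI library plus mpi4py are built for riscv64, which the
# article does not confirm.
import socket
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()          # this process's id within the job
size = comm.Get_size()          # total number of ranks

print(f"rank {rank} of {size} on {socket.gethostname()}")

# A one-line collective to prove the interconnect path works:
total = comm.allreduce(rank, op=MPI.SUM)   # sum of 0..size-1 on every rank
if rank == 0:
    print(f"allreduce sum = {total} (expected {size * (size - 1) // 2})")
```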
Dr Daniele Cesarini, HPC specialist at CINECA, said: “As a supercomputing center, we are very interested in the RISC-V technology to support the scientific community. We are excited to contribute to the RISC-V ecosystem supporting the installation and tuning of widely-used scientific codes and mathematical libraries to push forward the development of high-performance RISC-V CPUs. We believe that Monte Cimone will be the harbinger of the next generation of supercomputers based on RISC-V technology and we will continue to work in synergy with E4 Computer Engineering and the Università di Bologna to prove that RISC-V is ready to stay on the shoulder of the HPC giants.”
There is plenty of RISC-V activity in Europe on the funding and project front, although the fruits of those labors could take years to appear. Now even Intel is eyeing RISC-V for the future of supercomputing. It’s all a RISC-Y (you saw that coming) bet, but with few native architectural options in Europe, at least picking an early winner is easy.
Picking a Europe-native winner is easy indeed. Arm won years ago, and it is Europe-native. More options are always better, of course.
“a shot for Europe to shed total reliance on American chip technologies” OK, but positions 1, 3 and 8 in the top ten most powerful computers in the world already use AMD chips, and position 2 uses chips based around the UK’s Arm design, originally standing for “Acorn RISC Machine”. See https://www.top500.org/lists/top500/2022/06/
Had the Nvidia deal to buy ARM not fallen through, it’s possible the Grace CPU design might eventually have been licensed in a similar way to the rest of the ARM portfolio. The present alternative of taking ARM public may change the focus to monetisation of existing technology.
The unknown changes either possibility brings to the ARM ecosystem make RISC-V a lower-risk alternative for HPC.
And why is it so critical to avoid buying anything from the USA? Where is this hatred coming from? Trying to wean countries off reliance on Communist China because of its ambitions to take over the Pacific, or off Russia because of its desire to subjugate all the countries grabbed by the Communists in the last century, has some merit, but the US?
And while I am at it, European shows are always saying “US Imperialism”. What the heck is that? What countries have we taken over to expand our “empire”? There was one very small window in the 1890s, when newspapers manipulated the public into a war with Spain to take its possessions. But that was a very short lapse in judgment.
All the other wars were to liberate others, or prevent the communist takeover of the planet. Certainly some of those were misjudgments, but never intended to subjugate.
How about you?
You benefit from NATO, but underneath you have no loyalties, friends of convenience…like sociopaths.
If the US was an Empire, it would have taken all of North and South America, easy, and never given back Japan to its people. The Philippines also would still be US. We also would have claimed all the waters of the Pacific. Our Navy is 10x as powerful as any other. We use the Navy to make the oceans free for every country to use it.
You’re welcome.
Calling it a “supercomputer” is pure hyperbole – it’s basically a small cluster using a board similar to a Raspberry Pi 3. This might get ~60 GFLOPs overall, but a modern phone does about 180 GFLOPs at a fraction of the power (just the 4 little cores will do about 60). So do we all have supercomputers in our pockets now?!?
The real European supercomputer chip, SiPearl’s Rhea, uses 72 Neoverse V1 cores at ~2500 GFLOPs per chip (unaccelerated).
SiFive Freedom SoC boards? Anyone selling these?