Why Did China Keep Its Exascale Supercomputers Quiet?
There are no greater bragging rights in supercomputing than those that come with a top ten listing on the twice-yearly ranking of the world’s most powerful systems – the Top500. …
Let’s just cut right to the chase scene. The latest Top500 ranking of supercomputers, announced today at the SC21 supercomputing conference being held in St. Louis, lacked the excitement of an actual 1 exaflops sustained performance machine running the High Performance Linpack benchmark at 64-bit precision. …
As we head toward the annual Supercomputing Conference season we wanted to take a moment for a level-set on exascale. …
While we are big fans of laissez-faire capitalism like that of the United States and sometimes Europe — right up to the point where monopolies naturally form and therefore competition essentially stops, and thus monopolists need to be regulated in some fashion to promote the common good as well as their own profits — we also see the benefits that accrue from a command economy like the one China has built over the past four decades. …
While it might not happen anytime soon, traditional supercomputing could be in for a sea change with wider acceptance of lower-precision calculations. …
Promo If you want to get the benefits of accelerated computing and high bandwidth memory, as well as catch the rising wave of Arm-based compute, you don’t have to wait, you don’t have to buy CPU-GPU systems, and you don’t have to adopt a complex, hybrid programming model. …
We made a joke – sort of – many years ago when we started this publication that the future compute engines would look more like a GPU card than they did a server as we knew it back then. …
Japan is home to one of only a few designated AI supercomputers open to public and private research partnerships via its ABCI (AI Bridging Cloud Infrastructure) system, which is set to reach nearly an exaflop of single-precision performance for ML workloads following a recent upgrade. …
It is always good to have options when it comes to optimizing systems because not all software behaves the same way and not all institutions have the same budgets to try to run their simulations and models on HPC clusters. …
There was an outside chance that China might pull a surprise on the HPC community and launch the first true exascale system – meaning capable of more than 1 exaflops of peak theoretical 64-bit floating point performance if you want to be generous, and 1 exaflops sustained on the High Performance Linpack (HPL) benchmark if you don’t – but that didn’t happen. …
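The peak-versus-sustained distinction above is simple arithmetic: peak theoretical FP64 performance is just the product of core count, clock speed, and per-core flops per cycle, while the sustained figure is what HPL actually measures, usually quoted as a fraction of peak. A minimal sketch, using entirely hypothetical machine parameters (none of these numbers describe any real system):

```python
def peak_fp64_flops(nodes, sockets_per_node, cores_per_socket,
                    clock_ghz, fp64_flops_per_cycle):
    """Peak theoretical FP64 flops: total cores x clock x flops/cycle."""
    total_cores = nodes * sockets_per_node * cores_per_socket
    return total_cores * clock_ghz * 1e9 * fp64_flops_per_cycle

# Hypothetical exascale-class machine: 100,000 nodes, 2 sockets per node,
# 64 cores per socket, 2.5 GHz, 32 FP64 flops per core per cycle.
peak = peak_fp64_flops(100_000, 2, 64, 2.5, 32)
print(f"Peak:      {peak / 1e18:.3f} exaflops")   # 1.024 exaflops

# HPL sustained performance is always lower; efficiency varies by machine,
# so 70% here is purely illustrative.
hpl_efficiency = 0.70
print(f"Sustained: {peak * hpl_efficiency / 1e18:.3f} exaflops")
```

This is why a machine can clear 1 exaflops "peak" on paper yet fall short of the 1 exaflops sustained HPL result that the stricter definition demands.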
All Content Copyright The Next Platform