Singapore To Boost Its Supercomputing Capacity Ten-Fold

When it comes to supercomputing, Singapore is certainly not the first country that comes to mind. In the Asia/Pacific region to which it belongs, Singapore is overshadowed by its more populous neighbors, especially China and Japan, both of which operate some of the largest systems on the planet. But the island state is about to raise its HPC profile.

At the Supercomputing Asia conference held in Singapore earlier this month, finance minister Heng Swee Keat announced that the government’s National Research Foundation has earmarked S$200 million (about $148 million US) to upgrade the nation’s supercomputing infrastructure. The money will be drawn from a S$19 billion fund allocated for the Research, Innovation, and Enterprise 2020 plan, which is described as “Singapore’s national strategy to develop a knowledge-based innovation-driven economy and society.”

Like most modern economies, Singapore’s is underpinned by technological advances that sustain growth and keep it competitive on the world stage. Currently, the country depends upon high-value exports in areas such as information technology products, financial services, pharmaceuticals, and consumer electronics to generate revenue. And although it has a population of less than 6 million people, Singapore claims the third highest per capita gross domestic product in the world.

But when it comes to per capita supercomputing, it doesn’t stand out. Singapore’s largest known system today is ASPIRE 1, which is installed at the National Supercomputing Centre (NSCC) and which is rated at 1 petaflops. Built by Fujitsu in 2016, its 1,288 nodes are powered primarily by “Haswell” Xeon E5-2690 v3 processors from Intel, with 128 nodes accelerated with Nvidia Tesla K40 GPUs. DataDirect Networks supplied 14 PB of storage, split between Lustre and GPFS. Mellanox Technologies also got in on the action, supplying 100 Gb/sec EDR InfiniBand as the interconnect.

As the only major supercomputing center in Singapore, NSCC serves four major research institutions around the country: the Agency for Science, Technology and Research (A*STAR) campuses at Fusionopolis and Biopolis, the National University of Singapore (NUS), and the Nanyang Technological University, Singapore (NTU).

ASPIRE 1 is heavily utilized and has been upgraded over time to meet growing demand across a wide array of applications. Among others, these include molecular dynamics, materials science, life sciences, genomic studies, large-scale structural analysis and urban planning, shipbuilding, and climate modeling – the last three being critical to the social and economic well-being of Singapore.

NSCC has a user base of about 4,000 people, spread over more than 400 projects. Center officials expect those numbers to grow substantially as local demand for supercomputing increases in both the public and private sectors. They predict demand will be particularly acute for data-intensive applications in areas such as AI, genomic analysis, and precision medicine.

Those additional needs appear to be much of what’s behind the S$200 million investment that Heng announced this month. The money will go towards the purchase and operation of a new supercomputer, as well as upgrades to supercomputing facilities, primarily at NSCC. That will include what’s described as “a major hardware refresh,” along with the necessary system software.

NSCC officials told us that the new system will probably be a heterogeneous supercomputer that delivers between 15 petaflops and 20 petaflops of performance, along with accompanying storage. Given the new focus on data-demanding applications and the fact that NSCC users have already been exposed to GPUs in ASPIRE 1, it’s reasonable to assume that the new system will be equipped with a lot more GPUs. If it ends up being another Fujitsu machine, you don’t have to look any further than Japan’s AI Bridging Cloud Infrastructure (ABCI) to get some idea of what an NSCC supercomputer skewed heavily toward AI and analytics would look like.

The ABCI supercomputer, which tops out at 19.9 petaflops (on the Linpack benchmark), is heavily laden with Nvidia’s Tesla V100 GPUs, the compute engine of choice for machine learning these days. The Singaporeans are likely to order a more balanced machine, given that the system has to satisfy all of NSCC’s traditional HPC users as well. And, of course, they could always go with a different vendor than Fujitsu.

Some portion of the $200 million will go towards the development of an “ultra-green” datacenter, in addition to a new high-performance network to connect that center to NSCC’s academic and commercial partners. Also in the plan is an upgrade of network links to existing users plus increased connectivity to other supercomputing centers in Asia, Europe and the US.

No timeline has been offered for when the new system and other upgrades will happen.
