HPC to Deep Learning from an Asian Perspective

Big data, data science, machine learning, and now deep learning are all the rage, carrying plenty of hype, for better and, in some ways, for worse. Advances in AI such as language understanding, self-driving cars, automated claims processing, legal text processing, and even automated medical diagnostics are already here or will arrive soon.

In Asia, several countries have made significant advances and investments in AI, leveraging their historical work in HPC.

China now owns the top two positions in the Top500 with Sunway TaihuLight and Tianhe-2, and while Tianhe-2 and the earlier Tianhe machines were designed for traditional HPC-style workloads, TaihuLight is expected to run deep learning frameworks very efficiently. In addition, Baidu probably has one of the largest AI teams in this part of the world, and it would not be surprising to learn that such large Internet companies are working closely with the TaihuLight and Tianhe teams to develop their own AI supercomputers.

Japan is no stranger to AI and robotics, and has long led the way in consumer-style AI systems. Remember the fuzzy logic washing machine? Japan’s car industry is probably among the country’s largest investors in AI technology today, with multiple self-driving projects under way both in Japan and globally.

RIKEN is deploying the country’s largest “deep learning system,” built from 24 NVIDIA DGX-1 systems and 32 Fujitsu servers, this year. Tokyo Tech and the National Institute of Advanced Industrial Science and Technology (AIST) have also announced their joint “Open Innovation Laboratory” (OIL), which will house the innovative TSUBAME3.0 AI supercomputer this year and a massive upcoming AI supercomputer named “ABCI” in 2018.

South Korea announced a whopping US $863M investment in AI in 2016 after AlphaGo’s defeat of grandmaster Lee Sedol, on top of investments made since early 2013 in the Exobrain and Deep View projects. It will establish a new high-profile public/private research center with participation from several Korean conglomerates, including Samsung, LG, telecom giants KT and SK Telecom, Hyundai Motor, and Internet portal Naver.

Closer to home, Singapore has recently announced AI@SG, a modest US $110M (SGD $150M) national effort over five years to build its capabilities in artificial intelligence. Funded by the National Research Foundation of Singapore and hosted by the National University of Singapore, it is a multi-agency effort comprising government ministries, institutes of higher learning, and industry, formed to tackle specific industry problems in Singapore. Besides a grand challenge problem (to be identified by the end of the year), a major focus is partnering with local industry to drive the adoption of AI technology and significantly improve productivity and competitiveness.

In particular, an effort called SG100, for 100 industry “experiments” over five years, will work closely with industry partners to solve their problems using AI and HPC, drawing on the combined efforts of government agencies, institutes of higher learning, and research centers. In typical Singapore style, three big bets for AI have been identified: Finance, Healthcare, and Smart City projects. The compute backbone of AI@SG is expected to ride on new AI-focused HPC systems and also leverage the various HPC systems already in Singapore, including those at the newly established National Supercomputing Centre.

AI being powered by HPC-style clusters is no accident. Deep learning has always been the kind of workload HPC folks run; it simply was not fashionable to associate with AI back then. Now we can all come out of the closet.
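To make the overlap concrete, here is a minimal sketch of data-parallel training, the pattern that lets deep learning ride on HPC clusters: every node computes gradients on its own shard of the data, and an MPI allreduce averages them, exactly the collective communication HPC codes have used for decades. The example assumes Python with mpi4py and NumPy (neither mentioned in the article) and stands in a toy linear model for a real network.

```python
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

# Each rank holds its own shard of the training data (synthetic here).
rng = np.random.default_rng(seed=rank)
X = rng.standard_normal((1000, 10))
true_w = np.arange(10, dtype=np.float64)
y = X @ true_w + 0.1 * rng.standard_normal(1000)

w = np.zeros(10)   # model weights, replicated on every rank
lr = 0.01          # learning rate

for step in range(200):
    # Local gradient of the mean-squared error on this rank's shard.
    grad = (2.0 / len(X)) * (X.T @ (X @ w - y))

    # Sum gradients across all ranks, then average: the classic HPC
    # allreduce collective, the heart of data-parallel training.
    global_grad = np.empty_like(grad)
    comm.Allreduce(grad, global_grad, op=MPI.SUM)

    w -= lr * (global_grad / size)

if rank == 0:
    print("learned weights:", np.round(w, 2))
```

Run it with something like mpiexec -n 4 python train.py; frameworks such as Horovod apply the same allreduce idea to full neural networks at cluster scale.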

About the Author

Laurence Liew is the CEO and Founder of REAL Analytics Pte Ltd, a Singapore-based analytics firm providing advisory services to organizations embarking on the path of data science. Prior to founding REAL, Laurence led Revolution Analytics’ Asia business and R&D until its acquisition by Microsoft in 2015. Laurence and his team were core contributors to the San Diego Supercomputer Center’s open source Rocks cluster distribution for HPC systems from 2002 to 2006, and contributed the integration of SGE, PVFS, Lustre, and the Intel and PGI compilers into Rocks.
