When Will AI Replace Traditional Supercomputing Simulations?

The science fiction of a generation ago predicted a future in which humans were replaced by the reasoning might of a supercomputer. But in an unexpected twist, it appears it is the supercomputer's main output, scientific simulations, that could be replaced by an even higher order of intelligence.

While we will always need supercomputing hardware, the vast field of scientific computing, or high performance computing, could also be in the crosshairs for disruptive change, altering the future prospects for scientific code developers while opening new doors to more energy-efficient, finer-grained scientific discovery. From code that can write itself based on what it has learned about the problem and its solution, to systems that can predict complex phenomena outright, we are in for an interesting decade ahead.

Advancements in deep neural network frameworks like TensorFlow, MXNet, and others have opened the door for a new generation of developers to bolt intelligence capabilities onto existing workflows or datasets, and to build entirely new approaches to some of the most difficult problems. Although many use cases for now tend to focus on image recognition, there is little doubt that the next few years will add to the already-long list of major deep learning-driven discoveries in medicine, energy, finance, and beyond.

At the GPU Technology Conference (GTC) last week, a show that in years past has emphasized GPU computing for traditional supercomputing, the emphasis was on AI and, as a side element, how deep learning can lend itself to HPC. In the keynote, a massive galaxy simulation showed advances made possible by Nvidia's latest Volta GPU, but also shed light on how neural networks can learn from data to predict everything from ray tracing elements to an AI "artist's" next stroke. All of this led us to question what role traditional simulation will play, at least for HPC applications that are driven by observations (weather and climate, as one example), if the results they produce can be learned and reproduced as predictions.

Nvidia's GM of the Accelerated Computing group, Ian Buck, tells The Next Platform that just as deep learning can be implemented to fill in the gaps in incomplete or fuzzy images, it might do the same for a select set of applications in scientific computing. Buck referred us to research at the University of Florida where AI is being used to recognize protein structures and predict how they will collapse (if the neural network sees an arc shape, for instance, it can infer that it will collapse into an alpha helix).
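As a rough illustration of that idea, and emphatically not the Florida group's actual code or data, the sketch below trains a small convolutional network in TensorFlow to assign a structural image to one of a few hypothetical fold classes. The class count, image size, and synthetic training data are all assumptions made purely for demonstration; the point is that once trained, the network answers with a cheap inference rather than an explicit calculation.

```python
# A minimal, illustrative sketch (hypothetical classes and synthetic data,
# not the University of Florida workflow) of classifying structure images.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

NUM_CLASSES = 3   # hypothetical fold classes, e.g. helix / sheet / coil
IMG_SIZE = 64     # assumed input resolution

model = tf.keras.Sequential([
    layers.Input(shape=(IMG_SIZE, IMG_SIZE, 1)),
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Stand-in data: in practice these would be labelled structure images.
x = np.random.rand(256, IMG_SIZE, IMG_SIZE, 1).astype("float32")
y = np.random.randint(0, NUM_CLASSES, size=(256,))
model.fit(x, y, epochs=2, batch_size=32, verbose=0)

# Inference replaces an explicit calculation with a cheap prediction.
print(model.predict(x[:1]).argmax(axis=1))
```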

While this is only part of the larger workflow, the computational savings of inferring a result rather than simulating it can add up. Given that genomics is one workload expected to benefit from the jump to exascale-class computing, shaving time off a calculation and making it more efficient can have a real impact. After all, we are looking at systems that will eventually have to fit into a 20-30 megawatt power profile, and slimming down on the hardware side alone will not get us to that target.

While deep learning can be implemented in both supervised and unsupervised ways across various scientific areas, the question is whether neural networks can replace scientific simulations altogether, and if so, in what arenas.

“In the future, many years down the road, we might see this taking over more and more of HPC, but for the rest of the decade, it will be a combination of simulation and neural networks,” Buck says. “We have to base everything on some reality or simulation and while AI can help fill in the gaps or take a project further, I don’t think it will replace simulation wholesale anytime soon.”

One area Buck says the scientific simulation community should watch, however, is generative adversarial networks, which can tackle problems that demand exact accuracy. At GTC, Nvidia CEO Jen-Hsun Huang highlighted an example of these in action by showing how one network could, just by being shown a Picasso painting (unsupervised learning), “paint” its own replica while a dueling network ran alongside, distinguishing the real Picasso from the fake. These adversarial networks, Buck explains, could prove valuable in finance/trading and other areas to push accuracy and end results, putting traditional simulations to the test. These approaches could also replace simulations in image-based fraud and forgery investigation and other areas.
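For a sense of how such a dueling pair works, here is a minimal, illustrative sketch of an adversarial setup in TensorFlow. Rather than Picasso paintings it uses a one-dimensional Gaussian as the "real" data, and the layer sizes, learning rates, and latent dimension are arbitrary assumptions for demonstration, not anything Nvidia showed on stage.

```python
# Toy adversarial setup: a generator learns to mimic "real" samples while a
# discriminator learns to tell real from fake. The "real" data here is just
# a 1D Gaussian, purely for illustration.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

LATENT_DIM = 8  # assumed size of the generator's noise input

generator = tf.keras.Sequential([
    layers.Input(shape=(LATENT_DIM,)),
    layers.Dense(16, activation="relu"),
    layers.Dense(1),                        # emits a fake sample
])
discriminator = tf.keras.Sequential([
    layers.Input(shape=(1,)),
    layers.Dense(16, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # probability the sample is real
])

bce = tf.keras.losses.BinaryCrossentropy()
g_opt = tf.keras.optimizers.Adam(1e-3)
d_opt = tf.keras.optimizers.Adam(1e-3)

def train_step(real, batch=64):
    noise = tf.random.normal((batch, LATENT_DIM))
    with tf.GradientTape() as d_tape, tf.GradientTape() as g_tape:
        fake = generator(noise, training=True)
        real_pred = discriminator(real, training=True)
        fake_pred = discriminator(fake, training=True)
        # Discriminator: label real samples 1, generated samples 0.
        d_loss = (bce(tf.ones_like(real_pred), real_pred)
                  + bce(tf.zeros_like(fake_pred), fake_pred))
        # Generator: fool the discriminator into labelling fakes as real.
        g_loss = bce(tf.ones_like(fake_pred), fake_pred)
    d_opt.apply_gradients(
        zip(d_tape.gradient(d_loss, discriminator.trainable_variables),
            discriminator.trainable_variables))
    g_opt.apply_gradients(
        zip(g_tape.gradient(g_loss, generator.trainable_variables),
            generator.trainable_variables))

for _ in range(1000):
    real_batch = tf.constant(
        np.random.normal(4.0, 1.0, size=(64, 1)), dtype=tf.float32)
    train_step(real_batch)

# After training, generated samples should cluster around the "real" mean of 4.
print(float(tf.reduce_mean(generator(tf.random.normal((256, LATENT_DIM))))))
```

The competition is the point: the generator is pushed toward samples the discriminator can no longer separate from the real thing, which is the property Buck suggests could matter in accuracy-bound domains.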

“Computer vision is one domain in computer science that has been pretty wiped out already,” Buck points out. “If you have enough pictures of a dog, AI will start to recognize the breed. It no longer requires complex facial detection on dogs, it’s all unsupervised learning. AI in fields like this is an obvious tool for real breakthroughs that can replace traditional computing.” While simulations will take far longer to eliminate, recall that just five years ago, computer vision was one of the most hyped areas for developers seeking to build career paths.

Some areas cannot, at least given the current state of neural networks, rely on mere predictions; they require simulating complex chemical and physical interactions. While it is true that we could feed millions of time-stepped images of a combustion simulation into an unsupervised network, the nature and purpose of the simulation is to understand variable, complex interactions under changing conditions.

“If you have a lot of data to learn on and can measure the outcomes and represent that as data, neural networks can be great predictors,” says Buck. “But I struggle with the idea that simulation won’t exist. Computer science is important in that it provides that computational microscope; the ability to watch combustion at nanosecond levels and study it like a god. That computational microscope isn’t going away; we still have to follow the laws of physics, but we can keep creating all that data from simulations and turn it over to an AI that can do at least some of it for us. It is with observable and computed data that we can build this forward.”
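A toy sketch of the workflow Buck describes, offered as an illustration rather than any production HPC code: a stand-in "simulation" function generates input/outcome pairs, and a small TensorFlow network is trained on them to act as a fast surrogate predictor for new inputs. The toy solver, network shape, and parameter ranges are all invented for the example.

```python
# Illustrative surrogate-model sketch: run a (toy) simulation to generate
# training data, then let a neural network learn to predict the outcome
# directly from the inputs instead of re-running the solver.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

def toy_simulation(params):
    """Stand-in for an expensive solver: maps input parameters to an outcome."""
    x, y = params[:, 0], params[:, 1]
    return (np.sin(3 * x) * np.exp(-y) + 0.5 * y ** 2).reshape(-1, 1)

# "Observable and computed data": parameter sweeps run through the simulator.
rng = np.random.default_rng(0)
params = rng.uniform(-1.0, 1.0, size=(5000, 2)).astype("float32")
outcomes = toy_simulation(params).astype("float32")

surrogate = tf.keras.Sequential([
    layers.Input(shape=(2,)),
    layers.Dense(64, activation="tanh"),
    layers.Dense(64, activation="tanh"),
    layers.Dense(1),
])
surrogate.compile(optimizer="adam", loss="mse")
surrogate.fit(params, outcomes, epochs=20, batch_size=64, verbose=0)

# New inputs are now answered by inference instead of another solver run;
# print the surrogate's prediction next to the "true" simulated value.
test = rng.uniform(-1.0, 1.0, size=(5, 2)).astype("float32")
print(np.hstack([surrogate.predict(test, verbose=0), toy_simulation(test)]))
```

The open question for HPC is whether a surrogate trained on far larger and far more expensive simulation output can answer new queries accurately enough to stand in for additional solver runs, rather than merely interpolate between the ones already performed.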

The emphasis on deep learning development in HPC today is on finding ways to integrate learning elements into large, existing workflows. As we described earlier this year, centers with forthcoming supercomputers that are primed with the right GPU balance to handle training at scale alongside traditional HPC are still focused on finding the right application fit and figuring out how to scale deep learning across an unprecedented number of nodes. The efficiency targets for next-generation supercomputing will only add to the urgency of finding where neural networks might speed time to result, and potentially enrich the results themselves.

As with every conversation about the impact of AI, there tends to be an equal balance of "good for society/end users/consumers" and "bad for the people who used to do that work for a living." And while the argument is rarely cut and dried on either side, for now let's focus on the positive: how large-scale, traditional supercomputing, with its megawatt requirements and inevitable code or system scalability limits, can find a new, productive, efficient, and perhaps more scientifically fine-tuned life through AI.
