Supercomputing Gets Neural Network Boost in Quantum Chemistry

Just two years ago, supercomputing was thrust into a larger spotlight because of the surge of interest in deep learning. As we talked about here, the hardware similarities, particularly GPU-accelerated machines for training, along with key HPC development approaches such as MPI for scaling across massive numbers of nodes, brought new attention to the world of scientific and technical computing.

What wasn’t clear then was how traditional supercomputing could benefit from all the framework developments in deep learning. After all, HPC sites had many of the same hardware environments and plenty of problems that could benefit from prediction; what they lacked were models that could be mapped to traditional HPC codes. In that short amount of time, mostly within the last year, there has been a big push in many traditional HPC areas to do just that: to find ways to make supercomputing simulations more streamlined by training on datasets to predict properties, filter through noise, and make broad connections that would otherwise take power-hungry simulations long stretches to chew through.

Also just a few years ago, the real traction in deep learning was focused on image, video, and speech recognition and analysis, often for consumer-facing services. However, as we have described in detail, there is a new wave of applications for neural networks that could upend the way we think about scientific and technical computing—those traditional realms of supercomputing.

One of the emerging areas cited in the above review of scientific computing work being altered by deep learning is molecular and materials science. While the work here is still in the early stages, Google Brain researchers are among those making strides in applying deep learning to more complex materials science and molecular interaction problems in quantum chemistry. The goal is to build machine learning models for chemical prediction that can learn their own features, saving a great deal of computational time and cost over traditional simulations.

The issue here is not just about increasing the efficiency or performance of quantum chemistry simulations. The computational resources freed up by applying learning methods can allow for larger and more fine-grained analysis of molecular structures. Moreover, traditional quantum chemistry codes are having trouble keeping up with the vast data volumes generated by high-throughput experiments, and condensing much of this work via training makes sense from a problem scalability standpoint.

As the Google Brain team that built out a machine learning alternative to traditional simulation-based quantum chemistry explains, “the time is ripe to apply more powerful and flexible machine learning methods to these problems, assuming we can find models with suitable inductive biases.” They note that the “symmetries of atomic systems suggest neural networks that operate on graph structured data and are invariant to graph isomorphism might also be appropriate for molecules.” Once such models are identified, a new playing field can open for some of the largest-scale and most pressing problems in materials, chemistry, and drug discovery.
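
As a rough illustration of that inductive bias (a minimal sketch, not the Google Brain team’s code), the snippet below represents a toy molecule as per-atom features plus an adjacency matrix and aggregates bonded neighbors with a sum. Because summation ignores ordering, relabeling the atoms leaves the graph-level output unchanged. The feature values, bonds, and function name are hypothetical.

```python
import numpy as np

def graph_readout(node_features, adjacency):
    """One round of sum-aggregation over bonded neighbors, then a global sum.
    Summation ignores atom ordering, so relabeling atoms leaves the output unchanged."""
    messages = adjacency @ node_features           # sum features over each atom's neighbors
    return (node_features + messages).sum(axis=0)  # graph-level readout

# Toy 3-atom molecule with 2 hypothetical features per atom.
features = np.array([[1.0, 0.5],
                     [0.2, 0.8],
                     [0.9, 0.1]])
adjacency = np.array([[0, 1, 1],
                      [1, 0, 0],
                      [1, 0, 0]], dtype=float)

# Relabel the atoms with a permutation; the graph-level output is identical.
perm = np.array([2, 0, 1])
P = np.eye(3)[perm]
assert np.allclose(graph_readout(features, adjacency),
                   graph_readout(P @ features, P @ adjacency @ P.T))
```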

At the heart of this work is what the Google Brain team calls Message Passing Neural Networks, which take traditional approaches to quantum chemistry, refit them into neural networks, and show rather impressive efficiency, performance, and complexity gains over established supercomputing simulation results. Finding models that could be mapped to a supervised learning problem took time and effort, but with this framework the Google Brain researchers were able to make more efficient use of datasets fed from simulation and loop those back in for better results, or for more comprehensive simulations where much of the legwork had already been done.
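
To give a sense of the pattern (again, a sketch under stated assumptions rather than the team’s implementation), message passing alternates a message phase, where each atom sums information from its bonded neighbors, with an update phase for each atom’s hidden state, followed by a readout that produces a molecule-level prediction. The linear weights, dimensions, and `mpnn_predict` name below are placeholder assumptions, untrained and purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def mpnn_predict(node_features, adjacency, steps=3, hidden=8):
    """Sketch of the message-passing pattern: repeated message + update phases, then a readout.
    The linear weights here are random placeholders standing in for trained parameters."""
    n, d = node_features.shape
    W_in = rng.normal(size=(d, hidden))
    W_msg = rng.normal(size=(hidden, hidden))
    W_upd = rng.normal(size=(2 * hidden, hidden))
    w_out = rng.normal(size=hidden)

    h = np.tanh(node_features @ W_in)       # initial hidden state per atom
    for _ in range(steps):
        messages = adjacency @ (h @ W_msg)  # message phase: sum over bonded neighbors
        h = np.tanh(np.concatenate([h, messages], axis=1) @ W_upd)  # update phase
    return float(np.sum(h, axis=0) @ w_out) # readout: one predicted property per molecule

# Toy usage on a 3-atom molecule (hypothetical features and bonds).
features = np.array([[1.0, 0.5], [0.2, 0.8], [0.9, 0.1]])
adjacency = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], dtype=float)
print(mpnn_predict(features, adjacency))
```

In a trained model, the message, update, and readout functions would be learned from a dataset of molecules labeled with simulated or measured properties, which is where the savings over running a full simulation per molecule come from.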

Quantum chemistry lends itself well to the argument for further development of neural networks in scientific computing. Just as in areas like weather modeling and astronomy, some of the world’s most power-hungry and expensive machines spend costly cycles on filtering, classification, or noise-removal work that can be “automated” by learning algorithms, either in advance or post-simulation.

Such developments would not mean that exascale-class supercomputers would no longer be necessary, but further additions of neural network components in traditional application areas would mean more efficient use of those resources—freeing them up to do more complex, high-resolution, or large-scale simulations.


1 Comment

  1. Note that you need floating point precision for most applications here, since fixed point won’t fare well for scientific computing.

    On a side note, it seems like some of the accuracy bounds might be relaxed with the use of neural approximation. For example, the 64-bit heuristic for molecular dynamics (although technically D.E. Shaw showed you could do it with 48) might be relaxed somewhat. However, it’s pretty obvious that floating point would still be necessary even in that case.
