Combining AI With HPC To Find Better Battery Designs

The melding of low- and high-precision mathematics to accelerate the pace of scientific discovery has been a topic of discussion for some time now. During her keynote at ISSCC last year, AMD chief executive officer Lisa Su mused that the combination could vastly reduce the energy required to scale compute in a meaningful way without resorting to nuclear-powered superclusters.

The implication here is that solving exponentially larger problems requires a smarter application of compute resources, not necessarily more of them.

HPC largely centers on high-precision simulation. But before you can get to work, you first need to identify the ideal candidate or hypothesis to test. So rather than deploying HPC resources to test hypothesis after hypothesis until you find the perfect recipe, you instead use speedy, low-precision AI models to sort through all the possibilities for the most promising candidates.

This is exactly the formula that Microsoft’s Azure Quantum Elements (AQE) team and the Department of Energy’s Pacific Northwest National Laboratory (PNNL) have employed under a multi-year collaboration.

As part of a proof of concept detailed in a recent blog post, PNNL and the AQE team put Microsoft’s expansive compute resources to work on a combination of AI/ML and HPC workloads designed to speed the development of an experimental battery. And this time, Microsoft isn’t just building a better uninterruptible power supply (UPS) using someone else’s cells.

One quick note: despite the team being called Azure Quantum Elements, quantum computing wasn’t a factor in this particular experiment. The general consensus is that accurate modeling on quantum systems is still a few years off, but materials science is one of the areas expected to benefit from the technology.

In a matter of weeks, the team went from a problem statement to holding an actual battery based on a newly identified chemical compound. Perhaps the most surprising element of this project was the amount of time required for computation, which totaled just 80 hours. One can imagine the compute time could be further reduced with more resources.

Reduce, Simulate, Repeat

The process by which the AQE and PNNL teams achieved this feat involved multiple applications of machine learning and simulation.

The first step involved training AI models to evaluate different materials for desirable qualities and suggest combinations that show promise. Running this algorithm narrowed the field to 32 million candidates – still too many to be practical, but this was only the first round.

This dataset was then further sorted and reduced to identify compounds based on their stability, reactivity, and potential to conduct electricity. Repeated runs helped to narrow the scope down to 500,000 and eventually 800 chemical compounds.
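
Microsoft hasn’t published the code behind this pipeline, but conceptually the screening stage amounts to a funnel: score every candidate with a fast surrogate model, keep only those that clear a set of thresholds, then tighten the thresholds and repeat. The minimal sketch below assumes a hypothetical `predict_properties` function standing in for the trained models, and the property names, formulas, and threshold values are purely illustrative.

```python
# Conceptual sketch of the ML screening funnel; not Microsoft's actual code.
# `predict_properties` stands in for the trained surrogate models, which score
# a candidate far faster than a physics-based simulation could.
from dataclasses import dataclass

@dataclass
class Candidate:
    formula: str
    stability: float      # e.g. predicted energy above hull, eV/atom (lower is better)
    reactivity: float     # predicted reactivity score (lower is better)
    conductivity: float   # predicted ionic conductivity, S/cm (higher is better)

def predict_properties(formula: str) -> Candidate:
    """Dummy scoring function; a real pipeline would call trained ML models here."""
    h = abs(hash(formula))
    return Candidate(
        formula=formula,
        stability=(h % 100) / 1000.0,         # fake value in [0, 0.099]
        reactivity=(h % 7) / 10.0,            # fake value in [0, 0.6]
        conductivity=10.0 ** -((h % 5) + 2),  # fake value from 1e-6 to 1e-2
    )

def screen(formulas, max_stability=0.05, max_reactivity=0.4, min_conductivity=1e-4):
    """Keep only candidates whose predicted properties clear every threshold."""
    return [c for c in (predict_properties(f) for f in formulas)
            if c.stability <= max_stability
            and c.reactivity <= max_reactivity
            and c.conductivity >= min_conductivity]

# Repeated passes with progressively tighter thresholds mirror the reported
# reduction from roughly 32 million candidates to 500,000 and eventually 800.
survivors = screen(["Na3PS4", "Li3PS4", "Na2LiPS4"])  # illustrative formulas only
print(len(survivors), "candidates survive this pass")
```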

From here, researchers began applying heavier-duty HPC methods. The first of these used density functional theory to calculate the energy of each material in various states. The next employed a combination of HPC and AI to simulate the molecular dynamics of each material and then analyze the movement of the atoms inside it. This got researchers down to 150 viable candidates.
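
Neither Microsoft nor PNNL has said which simulation codes were used, but the shape of this stage (a static energy calculation followed by molecular dynamics) can be sketched with the open source ASE toolkit. In the sketch below the toy EMT potential stands in for a real DFT calculator, and bulk copper stands in for the actual electrolyte candidates, since EMT doesn’t cover sodium or lithium.

```python
# Sketch of the "energy, then dynamics" workflow using ASE (pip install ase).
# EMT is a cheap toy potential standing in for a real DFT calculator, and
# bulk Cu stands in for the actual candidate electrolyte materials.
from ase.build import bulk
from ase.calculators.emt import EMT
from ase.md.velocitydistribution import MaxwellBoltzmannDistribution
from ase.md.langevin import Langevin
from ase import units

atoms = bulk("Cu", "fcc", a=3.6).repeat((3, 3, 3))
atoms.calc = EMT()

# Step 1: a static energy evaluation, analogous to the DFT stage.
print("potential energy:", atoms.get_potential_energy(), "eV")

# Step 2: a short molecular dynamics run, analogous to the AI-accelerated MD
# stage, where the trajectory reveals how the atoms move.
MaxwellBoltzmannDistribution(atoms, temperature_K=300)
dyn = Langevin(atoms, timestep=1.0 * units.fs, temperature_K=300, friction=0.02)

def report():
    epot = atoms.get_potential_energy()
    ekin = atoms.get_kinetic_energy()
    print(f"E_pot={epot:.3f} eV  E_kin={ekin:.3f} eV")

dyn.attach(report, interval=50)
dyn.run(200)  # a real screening run would be far longer
```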

Finally, a third round of simulations was conducted on the dataset to determine which candidates would be the most practical in terms of cost, time, and availability. From this, PNNL and the AQE team arrived at a shortlist of 18 previously unknown compounds.

Perhaps the most interesting element of all of this is the distribution of compute. Conventional wisdom would suggest that double-precision HPC workloads would require the bulk of the time allotted, but that wasn’t the case.

According to Microsoft, 90 percent of the compute resources were directed to machine learning tasks designed to narrow the field of possibilities. Only 10 percent of the 80 hours of compute time, roughly eight hours, was spent on HPC workloads.

Putting It To The Test

At the end of the day, simulations are simulations. They’re the best approximation of how a chemical or process should behave, but we won’t know for sure until it’s put to the test. There’s a reason that, in addition to building multi-million-dollar supercomputers to simulate the efficacy of the US nuclear stockpile, the Department of Energy also conducts subcritical tests deep underground to see just how close those simulations are to reality.

So, having identified the most likely candidate, a solid-state electrolyte consisting of 70 percent sodium and 30 percent lithium, PNNL went to work turning it into a battery. This process involved synthesizing the compound, grinding, pressing, and then heating the material to between 450 degrees and 650 degrees Celsius. However, compared to the compute stage, this process was relatively quick, requiring about ten hours to complete.

Traditionally, the most energy-dense batteries – those found in electric vehicles, laptops, smartphones, and power tools – use lithium. Unfortunately, the stuff isn’t exactly the most sustainable or easy to get. Sodium, on the other hand, is comparatively abundant and therefore cheap. The problem is that sodium alone isn’t nearly as energy dense as lithium. By combining the two, it appears Microsoft and PNNL have managed to find a happy middle ground between material supply and energy density.

While PNNL is thrilled to have identified a new battery chemistry, Brian Abrahamson, chief digital officer at PNNL, emphasized in a Microsoft blog post that the real achievement has less to do with the battery chemistry itself than with how quickly the teams were able to find it.

Past battery research at PNNL, such as the development of a vanadium redox flow battery, took years – a fact that only serves to underscore the significance of this project.

Taking Simulation To The Next Level

Whether the battery will ultimately prove viable remains to be seen. As Microsoft and PNNL note, testing will require the production of hundreds of prototype batteries, which will take considerably longer.

However, PNNL and Microsoft are already thinking of ways to speed this process up. Vijay Murugesan, who leads PNNL’s materials sciences group, suggests that developing a digital twin for chemistry and materials science could reduce testing time.

Nvidia has been particularly outspoken about the concept of digital twins in recent years. The company has pitched them as a way for enterprises, particularly those dealing with logistics and manufacturing, to test production changes digitally before committing to them in real life.

In the case of PNNL, Murugesan suggests a researcher could use a digital twin to input details like anode, electrolyte, voltage, and other factors in order to predict the outcome. It’s not hard to see how something like this could speed up the development and productization of new chemistries.
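
To make that concrete, here is one hypothetical shape such an interface could take. None of these class or field names come from PNNL or Microsoft; they simply illustrate the idea of feeding a proposed cell design into a predictive model and getting an estimated outcome back.

```python
# Hypothetical interface for the kind of battery "digital twin" Murugesan
# describes; not an actual PNNL or Microsoft API, just an illustration.
from dataclasses import dataclass

@dataclass
class CellDesign:
    anode: str              # e.g. "hard carbon"
    electrolyte: str        # e.g. the Na/Li solid-state compound from this study
    cathode: str
    voltage_v: float        # target operating voltage
    temperature_c: float    # operating temperature

@dataclass
class PredictedOutcome:
    capacity_mah_per_g: float
    cycle_life: int
    notes: str

def predict(design: CellDesign) -> PredictedOutcome:
    """Placeholder for the twin's underlying physics and ML models."""
    # A real twin would combine physics-based simulation with ML surrogates
    # trained on experimental data; here we just return a dummy result.
    return PredictedOutcome(capacity_mah_per_g=0.0, cycle_life=0,
                            notes="replace with real models")

outcome = predict(CellDesign(anode="hard carbon",
                             electrolyte="Na/Li solid-state candidate",
                             cathode="layered oxide",
                             voltage_v=3.0,
                             temperature_c=25.0))
print(outcome)
```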

Speaking of which, while Microsoft’s ongoing collaboration with PNNL has thus far focused in large part on battery tech, the software giant is keen to point out that the machine learning algorithms were developed for chemistry in general and can be applied to any number of problems in materials science.

If nothing else, the project serves to illustrate the possibilities when we start applying compute resources – whether it’s low precision, high precision, or some mix of the two – to the entire problem.
