Research Spotlight: Cognitive Simulation

Cognitive simulation team photo (members identified under Team Acknowledgments below)

“Cognitive simulation uses machine learning to improve predictive numerical models.”

– Brian Spears

A central challenge for LLNL’s national security missions is to improve and advance predictive simulations by challenging them with precision experiments. This integration of simulated and experimental data is increasingly driven by large-scale data analytics. Modern research that addresses core challenges—such as maintenance of the U.S. nuclear stockpile, nuclear nonproliferation, pharmaceutical design and cancer research, engineering design optimization, and the resilience and security of critical infrastructure—relies increasingly on high-performance computing (HPC) simulations and experiments that produce data of unprecedented complexity.

Contemporary modeling efforts must incorporate and balance high-dimensional model parameter spaces, multi-scale and multi-fidelity analysis, the need for uncertainty quantification (UQ), and rich experimental data. Researchers must compare simulation output with experimental data to adapt and improve predictions. However, the scale and complexity of both experimental and simulated data have grown beyond what humans can manage unaided.

LLNL is advancing a new class of simulations—called cognitive simulations—that use machine learning (ML) to help scientists navigate this data-rich environment and improve simulation models. As LLNL computational physicist Brian Spears explains, “We strive to advance our predictive capability by challenging our simulations with experiments. But the more complex the data, the more difficult it is to take full advantage of it. New techniques will help improve our predictions.”

High-Dimensional Models

Spears leads a Laboratory Directed Research and Development (LDRD) project that integrates ML with simulations to more effectively and accurately compare large-scale simulations and experiments. The interdisciplinary team brings together experts in ML architectures, deep learning, data harvesting, workflow tools, intelligent data sampling, laser-driven fusion, and more.

The backbone of the project’s cognitive simulation approach is leveraging deep neural networks (DNNs) to map inputs to outputs by revealing structure in large data sets. “Engineering and exploiting the latent space are among our key strategies,” says Spears. Latent spaces lie at the heart of ML processing, enabling researchers to evaluate important—and even hidden—features of compressed data.
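
As a hedged illustration of this idea, the Python sketch below (not the team's actual model; PyTorch, the layer sizes, and the synthetic data are all assumptions made for the example) trains a small autoencoder whose bottleneck layer plays the role of a latent space, compressing high-dimensional simulation outputs into a few coordinates that can be inspected or compared across runs.

    # Minimal latent-space sketch (illustrative shapes and synthetic data):
    # an autoencoder compresses 64-dimensional "simulation outputs" into a
    # 4-dimensional latent code that can be analyzed or compared.
    import torch
    import torch.nn as nn

    class Autoencoder(nn.Module):
        def __init__(self, n_features: int = 64, n_latent: int = 4):
            super().__init__()
            # Encoder: high-dimensional output -> compact latent code
            self.encoder = nn.Sequential(
                nn.Linear(n_features, 32), nn.ReLU(),
                nn.Linear(32, n_latent),
            )
            # Decoder: latent code -> reconstruction of the original output
            self.decoder = nn.Sequential(
                nn.Linear(n_latent, 32), nn.ReLU(),
                nn.Linear(32, n_features),
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.decoder(self.encoder(x))

    model = Autoencoder()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    outputs = torch.randn(1024, 64)      # stand-in for ensemble outputs
    for _ in range(50):                  # toy reconstruction training
        loss = nn.functional.mse_loss(model(outputs), outputs)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    with torch.no_grad():
        latent = model.encoder(outputs)  # latent coordinates to examine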

The team is integrating ML into simulation in four ways:

(1) “In the loop”: algorithm or resolution switching, in which approximate regression models replace complex physics calculations (a brief sketch follows this list).
(2) “On the loop”: prediction and correction of mesh tangling and step-wise simulation execution.
(3) “Steering the loop”: learning that proposes the next simulation needed to reach an optimal data set and stops uninformative ensemble members from continuing.
(4) “After the loop”: improving ML models by incorporating experimental data.
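
To make the first of these modes concrete, here is a minimal, hypothetical sketch: expensive_physics() stands in for a costly physics kernel, and an off-the-shelf regression model (scikit-learn's random forest, chosen purely for illustration) is trained on a batch of full-physics evaluations and then answers queries in its place inside the simulation loop.

    # "In the loop" surrogate sketch (all names and data are hypothetical):
    # a regression model trained on full-physics samples stands in for the
    # expensive calculation during subsequent simulation steps.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    def expensive_physics(params: np.ndarray) -> float:
        """Stand-in for a costly physics kernel."""
        return float(np.sin(3.0 * params[0]) * np.exp(-params[1] ** 2))

    # Train the surrogate on a modest number of full-physics evaluations.
    rng = np.random.default_rng(seed=0)
    X_train = rng.uniform(-1.0, 1.0, size=(500, 2))
    y_train = np.array([expensive_physics(x) for x in X_train])
    surrogate = RandomForestRegressor(n_estimators=100, random_state=0)
    surrogate.fit(X_train, y_train)

    # Inside the simulation loop, the cheap surrogate replaces the expensive call.
    def physics_step(params: np.ndarray) -> float:
        return float(surrogate.predict(params.reshape(1, -1))[0])

    print(physics_step(np.array([0.2, -0.4])))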

LLNL’s cognitive simulation strategy will improve simulation accuracy, efficiency, and robustness. With more accurate UQ and better alignment of simulations with experimental data, the technique will also enable exploration of more complex design spaces. This new high-dimensional ML modeling capability will provide researchers with effective predictions, reliable UQ estimates, and, ultimately, new scientific understanding.

A New Way of Thinking

This strategy goes beyond developing ML models to proposing a new way of thinking about the relationship between simulation and experimental data. For example, the simulation workflow can be enhanced with complementary techniques such as intelligent parameter sampling, post-processing conducted during simulation instead of afterwards, and transfer learning to calibrate the model.
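
As a minimal sketch of the last of those techniques, the example below pretrains a small network on plentiful synthetic "simulation" data and then calibrates it on a handful of equally synthetic "experimental" points by freezing the early layers and retraining only the final one. The architecture, data, and learning rates are illustrative assumptions, not the project's actual setup.

    # Transfer-learning calibration sketch (architecture, data, and learning
    # rates are illustrative): pretrain on simulation data, then retrain only
    # the final layer on sparse experimental measurements.
    import torch
    import torch.nn as nn

    model = nn.Sequential(
        nn.Linear(8, 64), nn.ReLU(),
        nn.Linear(64, 64), nn.ReLU(),
        nn.Linear(64, 1),
    )

    # Stage 1: pretrain on abundant (synthetic) simulation data.
    sim_x, sim_y = torch.randn(5000, 8), torch.randn(5000, 1)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(100):
        loss = nn.functional.mse_loss(model(sim_x), sim_y)
        opt.zero_grad()
        loss.backward()
        opt.step()

    # Stage 2: calibrate on sparse (synthetic) experimental data.
    for p in model[:-1].parameters():    # freeze all but the last layer
        p.requires_grad = False
    exp_x, exp_y = torch.randn(32, 8), torch.randn(32, 1)
    opt = torch.optim.Adam(model[-1].parameters(), lr=1e-4)
    for _ in range(200):
        loss = nn.functional.mse_loss(model(exp_x), exp_y)
        opt.zero_grad()
        loss.backward()
        opt.step()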

“In addition to developing a cognitive simulation strategy, we must also invest in innovative software tools and advancements in heterogeneous computer architectures,” notes Spears. For instance, the open-source Livermore Big Artificial Neural Network Toolkit (LBANN) provides a DNN training framework for massively parallel computing environments, like those at LLNL’s HPC center. LLNL is also engaging hardware vendors to develop future computational platforms that excel at both high-precision scientific computing and ML training and inference.
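
LBANN's own interface is not reproduced here. As a generic, hedged illustration of the data-parallel training pattern such frameworks are built around, the sketch below uses PyTorch's DistributedDataParallel: each process trains a replica of the model on its own shard of data, and gradients are averaged across processes during the backward pass. Every detail is illustrative and not specific to LBANN or LLNL systems.

    # Generic data-parallel training sketch using PyTorch's
    # DistributedDataParallel (not LBANN's API); launch with, e.g.:
    #   torchrun --nproc_per_node=4 this_script.py
    import torch
    import torch.distributed as dist
    import torch.nn as nn
    from torch.nn.parallel import DistributedDataParallel as DDP

    def main():
        dist.init_process_group(backend="gloo")   # CPU-friendly backend
        rank = dist.get_rank()

        model = DDP(nn.Linear(16, 1))             # replicated model
        opt = torch.optim.SGD(model.parameters(), lr=1e-2)

        torch.manual_seed(rank)                   # each rank gets its own shard
        x, y = torch.randn(256, 16), torch.randn(256, 1)
        for _ in range(20):
            loss = nn.functional.mse_loss(model(x), y)
            opt.zero_grad()
            loss.backward()                       # gradients all-reduced across ranks
            opt.step()

        dist.destroy_process_group()

    if __name__ == "__main__":
        main()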

The LDRD team has identified several mission-critical projects that can benefit from strategic use of learning-based predictive techniques. From inertial confinement fusion and high-explosive performance predictions to predictive biology and drug design, LLNL researchers are forging a new path in both ML technique development and ML applications for science.

Team Acknowledgments

Pictured, left to right: Back row: Peter Robinson, Vic Castillo, Jessica Semler, Sam Jacobs, Kelli Humbird, Yamen Mubarka, and Rushil Anirudh. Front row: Michael Kruse, Brian Spears, John Field, Brian Van Essen, David Hysom, Jae-Seung Yeom, Luc Peterson, Peer-Timo Bremer, and Joe Koning.

Not pictured: Gemma Anderson, Ben Bay, Francisco Beltran, David Domyancic, Jim Gaffney, Robert Hatarik, Richard Klein, Bogdan Kustowski, Steve Langer, Dave Munro, and Jayaraman Thiagarajan.