Assured and robust…or bust

June 30, 2022- 
The consequences of a machine learning (ML) error that presents irrelevant advertisements to a group of social media users may seem relatively minor. However, the opacity of ML models, combined with the fact that ML systems are nascent and imperfect, makes trusting their accuracy difficult in mission-critical situations, such as recognizing life-or-death risks to military personnel or advancing materials...

CASC team wins best paper at visualization symposium

May 25, 2022- 
A research team from LLNL’s Center for Applied Scientific Computing won Best Paper at the 15th IEEE Pacific Visualization Symposium (PacificVis), which was held virtually on April 11–14. Computer scientists Harsh Bhatia, Peer-Timo Bremer, and Peter Lindstrom collaborated with University of Utah colleagues Duong Hoang, Nate Morrical, and Valerio Pascucci on “AMM: Adaptive Multilinear Meshes.”...

Unprecedented multiscale model of protein behavior linked to cancer-causing mutations

Jan. 10, 2022- 
LLNL researchers and a multi-institutional team have developed a highly detailed, machine learning–backed multiscale model revealing the importance of lipids to the signaling dynamics of RAS, a family of proteins whose mutations are linked to numerous cancers. Published by the Proceedings of the National Academy of Sciences, the paper details the methodology behind the Multiscale Machine...

LLNL establishes AI Innovation Incubator to advance artificial intelligence for applied science

Dec. 20, 2021- 
LLNL has established the AI Innovation Incubator (AI3), a collaborative hub aimed at uniting experts in artificial intelligence (AI) from LLNL, industry and academia to advance AI for large-scale scientific and commercial applications. LLNL has entered into new memoranda of understanding with Google, IBM and NVIDIA, with plans to use the incubator to facilitate discussions and form future...

Lab researchers explore ‘learn-by-calibrating’ approach to deep learning to accurately emulate scientific processes

Feb. 10, 2021- 
An LLNL team has developed a “Learn-by-Calibrating” method for creating powerful scientific emulators that could be used as proxies for far more computationally intensive simulators. Researchers found the approach results in high-quality predictive models that are closer to real-world data and better calibrated than previous state-of-the-art methods. The LbC approach is based on interval...
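The blurb above notes that the Learn-by-Calibrating (LbC) approach is interval-based. As a generic illustration of the underlying idea, not the LbC algorithm itself, the sketch below fits the 5th and 95th percentiles of some data with the standard pinball (quantile) loss, producing a 90% prediction interval whose empirical coverage can be checked. All names and the brute-force fit are illustrative assumptions.

```python
import numpy as np

def pinball_loss(y_true, y_pred, quantile):
    """Pinball (quantile) loss: minimizing it makes y_pred estimate the
    requested quantile of y_true, a building block for prediction intervals."""
    diff = y_true - y_pred
    return np.mean(np.maximum(quantile * diff, (quantile - 1) * diff))

# Toy data standing in for simulator outputs (illustrative only).
rng = np.random.default_rng(0)
y = rng.normal(loc=2.0, scale=1.0, size=5000)

# Brute-force search for the 5th and 95th percentile estimates,
# giving a nominal 90% prediction interval.
grid = np.linspace(-2.0, 6.0, 801)
lower = grid[np.argmin([pinball_loss(y, g, 0.05) for g in grid])]
upper = grid[np.argmin([pinball_loss(y, g, 0.95) for g in grid])]

# Empirical coverage: fraction of points inside the interval.
# For a well-calibrated interval this should be close to 0.90.
coverage = np.mean((y >= lower) & (y <= upper))
```

A learned emulator would predict such intervals per input rather than globally, but the calibration check (empirical coverage vs. nominal level) works the same way.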

Lab team studies calibrated AI and deep learning models to more reliably diagnose and treat disease

May 29, 2020- 
A team led by LLNL computer scientist Jay Thiagarajan has developed a new approach for improving the reliability of artificial intelligence and deep learning-based models used for critical applications, such as health care. Thiagarajan recently applied the method to study chest X-ray images of patients diagnosed with COVID-19, the disease caused by the novel SARS-CoV-2 coronavirus. Read more at LLNL...
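A common way to quantify the calibration of a classifier like those discussed above is the expected calibration error (ECE): the gap between a model's stated confidence and its observed accuracy, averaged over confidence bins. The sketch below is a minimal, generic ECE computation, not the team's method.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE: bin predictions by confidence, then average the gap between
    mean confidence and observed accuracy, weighted by bin population."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += mask.mean() * gap  # weight by fraction of samples in bin
    return ece

# A perfectly calibrated example: 95% confidence, 95% observed accuracy.
conf = np.full(100, 0.95)
hits = np.array([1] * 95 + [0] * 5)
ece = expected_calibration_error(conf, hits)
```

A large ECE signals over- or under-confidence, which is exactly the failure mode that matters in diagnostic settings.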

Interpretable AI in healthcare (PODCAST)

May 17, 2020- 
LLNL's Jay Thiagarajan joins the Data Skeptic podcast to discuss his recent paper "Calibrating Healthcare AI: Towards Reliable and Interpretable Deep Predictive Models." The episode runs 35:50. Listen at Data Skeptic.

The incorporation of machine learning into scientific simulations at LLNL (VIDEO)

May 5, 2020- 
In this video from the Stanford HPC Conference, Katie Lewis presents "The Incorporation of Machine Learning into Scientific Simulations at Lawrence Livermore National Laboratory." Read more and watch the video at insideHPC.

How machine learning could change science

April 29, 2019- 
Artificial intelligence tools are revolutionizing scientific research and changing the needs of high-performance computing. LLNL has been exploiting the relationship between simulation and experiments to build predictive codes using machine learning and data analytics techniques. Read more at Data Center Dynamics.