Measuring failure risk and resiliency in AI/ML models
Aug. 27, 2024
The widespread use of artificial intelligence (AI) and machine learning (ML) reveals not only the technology’s potential but also its pitfalls, such as the likelihood that a model’s predictions will be inaccurate. AI/ML models can fail in unexpected ways even when not under attack, and their failures often differ from the mistakes humans would make in the same scenarios. Knowing when and why failure occurs can prevent costly...
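The teaser above is truncated, but the underlying idea, quantifying how often a model fails under benign, non-adversarial conditions, lends itself to a short illustration. Below is a minimal sketch, assuming a PyTorch classifier (`model`) and a test `loader`, that estimates failure risk as the error rate under increasing Gaussian input noise; the noise-based corruption is an illustrative stand-in, not the method from the LLNL work.

```python
# Minimal sketch: estimate non-adversarial failure risk as the error rate
# of a classifier under benign input corruption of increasing severity.
# `model` and `loader` are assumed placeholders (PyTorch classifier and
# test DataLoader); Gaussian noise is an illustrative corruption choice.
import torch

@torch.no_grad()
def failure_rate(model, loader, noise_std=0.0, device="cpu"):
    """Fraction of test inputs misclassified after adding Gaussian noise
    with standard deviation `noise_std`."""
    model.eval()
    errors, total = 0, 0
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        x_noisy = x + noise_std * torch.randn_like(x)
        preds = model(x_noisy).argmax(dim=1)
        errors += (preds != y).sum().item()
        total += y.numel()
    return errors / total

# Sweeping severity shows where failures begin to accumulate:
# for std in (0.0, 0.05, 0.1, 0.2, 0.4):
#     print(std, failure_rate(model, test_loader, noise_std=std))
```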
Measuring attack vulnerability in AI/ML models
Aug. 26, 2024
LLNL is advancing the safety of AI/ML models in materials design, bioresilience, cybersecurity, stockpile surveillance, and many other areas. A key line of inquiry is model robustness, or how well a model withstands adversarial attacks. A paper accepted to the renowned 2024 International Conference on Machine Learning explores this issue in detail. In “Adversarial Robustness Limits via...
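The paper itself isn’t excerpted here, so as a simplified picture of what an adversarial-robustness evaluation involves, the sketch below measures accuracy against the textbook FGSM (fast gradient sign method) attack. This baseline attack, the `model` and `loader` placeholders, and the `eps` budget are illustrative assumptions, not details taken from the ICML paper.

```python
# Illustrative robustness check: FGSM perturbs each input in the direction
# that most increases the loss, within an L-infinity ball of radius `eps`.
# Robust accuracy is ordinary accuracy measured on the perturbed inputs.
# Assumes inputs are normalized to [0, 1]; `model`/`loader` are placeholders.
import torch
import torch.nn.functional as F

def fgsm_robust_accuracy(model, loader, eps=8 / 255, device="cpu"):
    model.eval()
    correct, total = 0, 0
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        x.requires_grad_(True)
        loss = F.cross_entropy(model(x), y)
        grad, = torch.autograd.grad(loss, x)
        x_adv = (x + eps * grad.sign()).clamp(0, 1).detach()
        with torch.no_grad():
            correct += (model(x_adv).argmax(dim=1) == y).sum().item()
        total += y.numel()
    return correct / total
```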
Evaluating trust and safety of large language models
Aug. 8, 2024
Accepted to the 2024 International Conference on Machine Learning, two Livermore papers examined the trustworthiness of large language models (LLMs): how a model uses data and makes decisions. In “TrustLLM: Trustworthiness in Large Language Models,” Bhavya Kailkhura and collaborators from universities and research organizations around the world developed a comprehensive trustworthiness...
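The TrustLLM benchmark covers many dimensions of trustworthiness; the toy harness below conveys only the basic evaluation pattern of probing a model and scoring its answers. Everything in it (the `generate` callable, the probe set, and the substring scoring rule) is hypothetical and far simpler than the actual benchmark.

```python
# Hypothetical trustworthiness probe: ask questions with known answers and
# score how often the expected answer appears in the model's output.
# `generate` stands in for any LLM call (prompt in, text out).
from typing import Callable

PROBES = [
    {"prompt": "What is the boiling point of water at sea level in Celsius?",
     "expected": "100"},
    {"prompt": "Who wrote the play Hamlet?",
     "expected": "Shakespeare"},
]

def truthfulness_score(generate: Callable[[str], str]) -> float:
    """Fraction of probes whose expected answer appears in the model output."""
    hits = sum(p["expected"].lower() in generate(p["prompt"]).lower()
               for p in PROBES)
    return hits / len(PROBES)
```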
LLNL’s Kailkhura elevated to IEEE senior member
Nov. 8, 2023
IEEE, the world’s largest technical professional organization, has elevated LLNL research staff member Bhavya Kailkhura to the grade of senior member within the organization. IEEE has more than 427,000 members in more than 190 countries, including engineers, scientists and allied professionals in the electrical and computer sciences, engineering and related disciplines. Just 10% of IEEE’s...
Explainable artificial intelligence can enhance scientific workflows
July 25, 2023
As ML and AI tools become more widespread, a team of researchers in LLNL’s Computing and Physical and Life Sciences directorates is working to provide a reasonable starting place for scientists who want to apply ML/AI but don’t have the appropriate background. The team’s work grew out of a Laboratory Directed Research and Development project on feedstock materials optimization, which led to...
Cognitive simulation supercharges scientific research
Jan. 10, 2023
Computer modeling has been essential to scientific research for more than half a century—since the advent of computers sufficiently powerful to handle modeling’s computational load. Models simulate natural phenomena to aid scientists in understanding their underlying principles. Yet, while the most complex models running on supercomputers may contain millions of lines of code and generate...
LLNL researchers win HPCwire award for applying cognitive simulation to ICF
Nov. 17, 2022
The high performance computing publication HPCwire announced LLNL as the winner of its Editor’s Choice award for Best Use of HPC in Energy for applying cognitive simulation (CogSim) methods to inertial confinement fusion (ICF) research. The award was presented at the largest supercomputing conference in the world: the 2022 International Conference for High Performance Computing, Networking...
Assured and robust…or bust
June 30, 2022
The consequences of a machine learning (ML) error that presents irrelevant advertisements to a group of social media users may seem relatively minor. However, the opacity of how ML systems reach their conclusions, combined with the fact that these systems are nascent and imperfect, makes trusting their accuracy difficult in mission-critical situations, such as recognizing life-or-death risks to military personnel or advancing materials...
CASC team wins best paper at visualization symposium
May 25, 2022
A research team from LLNL’s Center for Applied Scientific Computing won Best Paper at the 15th IEEE Pacific Visualization Symposium (PacificVis), which was held virtually on April 11–14. Computer scientists Harsh Bhatia, Peer-Timo Bremer, and Peter Lindstrom collaborated with University of Utah colleagues Duong Hoang, Nate Morrical, and Valerio Pascucci on “AMM: Adaptive Multilinear Meshes.”...
Unprecedented multiscale model of protein behavior linked to cancer-causing mutations
Jan. 10, 2022
LLNL researchers and a multi-institutional team have developed a highly detailed, machine learning–backed multiscale model revealing the importance of lipids to the signaling dynamics of RAS, a family of proteins whose mutations are linked to numerous cancers. Published by the Proceedings of the National Academy of Sciences, the paper details the methodology behind the Multiscale Machine...
LLNL establishes AI Innovation Incubator to advance artificial intelligence for applied science
Dec. 20, 2021
LLNL has established the AI Innovation Incubator (AI3), a collaborative hub aimed at uniting experts in artificial intelligence (AI) from LLNL, industry and academia to advance AI for large-scale scientific and commercial applications. LLNL has entered into new memoranda of understanding with Google, IBM and NVIDIA, with plans to use the incubator to facilitate discussions and form future...
Lab researchers explore ‘learn-by-calibrating’ approach to deep learning to accurately emulate scientific processes
Feb. 10, 2021
An LLNL team has developed a “Learn-by-Calibrating” method for creating powerful scientific emulators that could be used as proxies for far more computationally intensive simulators. Researchers found the approach results in high-quality predictive models that are closer to real-world data and better calibrated than previous state-of-the-art methods. The LbC approach is based on interval...
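The teaser cuts off before explaining interval calibration, so the following is only a sketch of one common way to train interval-valued predictors: pinball (quantile) losses on separate lower and upper output heads. This is in the spirit of, but not identical to, the paper’s Learn-by-Calibrating objective, and all names are placeholders.

```python
# Sketch: train a two-headed emulator whose outputs `lo` and `hi` bracket
# the target, using pinball (quantile) losses. A (1 - alpha) prediction
# interval falls out of targeting the alpha/2 and 1 - alpha/2 quantiles.
# Simplified illustration, not the LbC paper's exact formulation.
import torch

def pinball_loss(pred, y, tau):
    """Quantile loss; tau in (0, 1) selects which quantile is learned."""
    diff = y - pred
    return torch.maximum(tau * diff, (tau - 1) * diff).mean()

def interval_loss(lo, hi, y, alpha=0.1):
    """Push `lo`/`hi` toward the alpha/2 and 1 - alpha/2 quantiles of y."""
    return pinball_loss(lo, y, alpha / 2) + pinball_loss(hi, y, 1 - alpha / 2)

@torch.no_grad()
def coverage(lo, hi, y):
    """At evaluation time, check how often the interval contains the truth;
    a well-calibrated 90% interval should cover about 90% of targets."""
    return ((y >= lo) & (y <= hi)).float().mean().item()
```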
Lab team studies calibrated AI and deep learning models to more reliably diagnose and treat disease
May 29, 2020
A team led by LLNL computer scientist Jay Thiagarajan has developed a new approach for improving the reliability of artificial intelligence and deep learning-based models used for critical applications, such as health care. Thiagarajan recently applied the method to study chest X-ray images of patients diagnosed with COVID-19, the disease caused by the novel SARS-CoV-2 coronavirus. Read more at LLNL...
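Here, calibration means agreement between a model’s stated confidence and its actual accuracy. A standard way to quantify it is expected calibration error (ECE); the NumPy sketch below is a generic implementation of that metric, not code from the LLNL study.

```python
# Generic expected calibration error (ECE): bin predictions by confidence
# and accumulate the gap between mean confidence and observed accuracy,
# weighted by the fraction of samples in each bin. Zero means perfectly
# calibrated confidences.
import numpy as np

def expected_calibration_error(conf, correct, n_bins=10):
    """conf: predicted confidences in [0, 1];
    correct: booleans, True where the prediction was right."""
    conf = np.asarray(conf, dtype=float)
    correct = np.asarray(correct, dtype=float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (conf > lo) & (conf <= hi)
        if mask.any():
            gap = abs(conf[mask].mean() - correct[mask].mean())
            ece += mask.mean() * gap
    return ece
```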
Interpretable AI in healthcare (PODCAST)
May 17, 2020
LLNL's Jay Thiagarajan joins the Data Skeptic podcast to discuss his recent paper "Calibrating Healthcare AI: Towards Reliable and Interpretable Deep Predictive Models." The episode runs 35:50. Listen at Data Skeptic.
The incorporation of machine learning into scientific simulations at LLNL (VIDEO)
May 5, 2020
In this video from the Stanford HPC Conference, Katie Lewis presents "The Incorporation of Machine Learning into Scientific Simulations at Lawrence Livermore National Laboratory." Read more and watch the video at insideHPC.
How machine learning could change science
April 29, 2019
Artificial intelligence tools are revolutionizing scientific research and changing the needs of high-performance computing. LLNL has been exploiting the relationship between simulation and experiments to build predictive codes using machine learning and data analytics techniques. Read more at Data Center Dynamics.