June 19, 2024

DOE and LLNL Take the Stage at Inaugural AI Expo
Held May 7–8 in Washington, DC, the Special Competitive Studies Project (SCSP) AI Expo showcased groundbreaking initiatives in AI and emerging technologies. Kim Budil and other Lab speakers presented on the center stage and at the Department of Energy (DOE) exhibition booth. Notable attendees included DOE Deputy Secretary David Turk, DOE Under Secretary for Science and Innovation Geraldine Richmond, NNSA administrator Jill Hruby, DOE director of the Office of Critical and Emerging Technologies Helena Fu, US Senate majority leader Chuck Schumer, and White House Office of Science and Technology Policy director Arati Prabhakar.
LLNL is rapidly expanding research investments to build transformative AI-driven solutions to critical national security challenges. While developing these novel scientific AI tools, the Lab is also conducting deliberate research to ensure that solutions are both safe and trustworthy for LLNL’s high-consequence missions. Budil and LLNL’s Brian Spears took center stage to discuss the ways LLNL leverages AI tools to improve stockpile science, fusion targets, disease therapies, and more. In addition, Turk announced DOE’s new Frontiers in Artificial Intelligence for Science, Security and Technology (FASST) initiative, which will build foundational AI capabilities tailored for national security needs. At the DOE booth, Lab staff demonstrated the Sidekick tabletop self-driving lab with NVIDIA as well as a digital twin/virtual reality setup, which was connected in real time to employees at the Advanced Manufacturing Laboratory. Read more about the event at LLNL News.

Computational and Data Science Outreach to UT Austin’s Rising Stars
The Rising Stars in Computational and Data Sciences, a workshop for graduate students and postdocs interested in academic and research careers, took place at the Oden Institute at the University of Texas at Austin on April 30–May 1. The workshop aimed to increase the participation of underrepresented gender identities in computational and data sciences. Originally launched at MIT in 2012, the event has since been hosted in various fields worldwide, and the fifth Rising Stars workshop in Computational and Data Sciences was organized in collaboration with the Los Alamos, Lawrence Livermore, and Sandia national laboratories. LLNL’s delegates included Jamie Bramwell, Marisol Gamboa, Cindy Gonzales, Jeffrey Hittinger, Judy Hill, and Katie Lewis.

GUIDE Program: Redesigning Antibodies Against Viral Pandemics
In a groundbreaking development for addressing future viral pandemics, a multi-institutional team involving LLNL researchers has successfully combined an artificial intelligence (AI)–backed platform with supercomputing to redesign and restore the effectiveness of antibodies whose ability to fight viruses has been compromised by viral evolution. The team’s research is published in Nature and showcases a novel antibody design platform comprising experimental data, structural biology, bioinformatic modeling, and molecular simulations—driven by a machine learning (ML) algorithm.
With funding from the Department of Defense’s Joint Program Executive Office for Chemical, Biological, Radiological and Nuclear Defense’s Generative Unconstrained Intelligent Drug Engineering (GUIDE) program, the interagency team used the platform to computationally optimize an existing SARS-CoV-2 antibody to restore its effectiveness to emerging SARS-CoV-2 Omicron subvariants, while ensuring continued efficacy against the then-dominant Delta variant. Their computational approach has the potential to significantly accelerate the drug development process and improve pandemic preparedness.
The GUIDE program was developed to address the urgent need for a rapid and agile approach for responding to biological threats, including the relentless mutation of the SARS-CoV-2 virus. SARS-CoV-2 evolution has led to the emergence of subvariants that have eluded existing clinical antibody therapeutics. GUIDE researchers said the achievement could potentially lower drug-development costs, reduce developability risks, and accelerate the timeline to clinical use when compared to a novel drug-product screen with comparable breadth and efficacy. This acceleration continues to be relevant as SARS-CoV-2 variants continue to emerge, researchers said.

LLNL Contributes to NSF AI Engineering Vision
Last fall, the National Science Foundation (NSF) invited experts from academia, industry, national labs, and other government agencies to participate in an AI-focused “visioning event” for the Engineering Research Visioning Alliance (ERVA) initiative. The event articulated the role of AI in 14 grand challenges across the spectrum of design, manufacturing, foundation models, dataset curation, training programs, and other areas. DSI director and materials engineer Brian Giera was among the participants who produced the executive summary and full report now available on the ERVA website.
According to the report, “The U.S. engineering enterprise is positioned to lead in the research and education necessary for the creation and development of AI Engineering, thereby enhancing U.S. leadership in AI and engineering technologies. Engineering researchers must assist with defining future AI systems through the evolution of existing and new systems, even as they employ existing AI systems to help drive the future of engineering.” Giera points out, “Many of ERVA’s AI Engineering goals, such as developing trustworthy AI systems and mitigating rare event consequences, align with those of the NNSA, so it’s important that we at Livermore contribute to dovetailing AI-related science and engineering conversations at the national level.”

Statistical Framework Synchronizes Medical Study Data
The risks and benefits of heart surgery, chemotherapy, vaccination, and other medical treatments can change based on the time of day they are administered. These variations arise in part due to changes in gene expression levels throughout the 24-hour day-night cycle, with around 50% of genes displaying oscillatory behavior. To evaluate new therapies, investigators study how a gene’s oscillatory behavior changes under different experimental conditions. Yet a problem can still arise when measuring this behavior relative to patients’ internal clocks.
LLNL researcher Tavish McDonald and colleagues from MTG Research Consulting and the University of British Columbia set out to improve statistical analysis of gene expression data for these types of clinical studies. The team analyzed data from several circadian transcriptomic studies and developed a statistical framework that accounts for individual differences in a gene’s oscillatory behavior. “Many statistical analyses in this field assume that every study participant has the same internal timing system. This is rarely true,” says co-author Michael Gorczyca. The team’s statistical method could remove time and cost barriers in determining dim-light melatonin onset measurements, which would encourage more researchers to investigate this space and identify personalized treatment therapies for patients.
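A standard workhorse for this kind of analysis is cosinor regression, which fits a 24-hour sinusoid to expression measurements. The sketch below is a generic, simplified illustration (not the team's published framework): the function name and simulated data are hypothetical, and it recovers a gene's mesor, amplitude, and peak time for a subject whose internal clock is shifted three hours from wall-clock time, the kind of individual offset the framework accounts for.

```python
import numpy as np

def fit_cosinor(t, y, period=24.0):
    """Fit mesor (M), amplitude (A), and phase of a circadian rhythm by
    rewriting y ~ M + A*cos(omega*t - phi) as a linear model in
    cos(omega*t) and sin(omega*t) and solving with least squares."""
    omega = 2 * np.pi / period
    X = np.column_stack([np.ones_like(t), np.cos(omega * t), np.sin(omega * t)])
    (M, b1, b2), *_ = np.linalg.lstsq(X, y, rcond=None)
    return M, np.hypot(b1, b2), np.arctan2(b2, b1)

# Simulated gene for a subject whose internal clock runs 3 hours behind
# wall-clock time -- the kind of individual offset that, if ignored,
# biases group-level rhythm estimates.
rng = np.random.default_rng(0)
t = np.linspace(0, 48, 97)                     # sampling times (hours)
y = 5.0 + 2.0 * np.cos(2 * np.pi * (t - 3.0) / 24) + rng.normal(0, 0.1, t.size)
M, A, phi = fit_cosinor(t, y)
peak_time = (phi / (2 * np.pi / 24)) % 24      # recovers the 3-hour shift
```

Fitting each subject separately and comparing the recovered phases is one simple way to quantify how much internal clocks disagree before pooling data across participants.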

Machine Learning Meets X-Ray Absorption Spectroscopy
LLNL scientists have developed a new approach that can rapidly predict the structure and chemical composition of heterogeneous materials. In a new study in the ACS journal Chemistry of Materials, Wonseok Jeong and Tuan Anh Pham combined ML with X-ray absorption near-edge structure (XANES) spectroscopy to elucidate the chemical speciation of amorphous carbon nitrides. The research offers profound new insights into the local atomic structure of these systems and, in a broader context, represents a critical step toward an automated framework for rapid characterization of heterogeneous materials with intricate structures. By coupling ML potentials with high-fidelity atomistic simulations, the researchers establish correlations between local atomic structures and spectroscopic signatures. These correlations serve as the basis for interpreting experimental XANES data, allowing crucial chemical information to be extracted from complex spectra.
The study’s findings represent a significant advancement in the field of materials science, offering a robust framework for elucidating the atomic speciation of disordered systems. Moreover, the versatility of the approach means it can be readily adapted to investigate other materials classes and experimental characterization probes, paving the way for real-time interpretation of spectroscopic measurements.
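One common way structure–spectrum correlations are put to work is linear-combination fitting, in which a measured spectrum is decomposed into reference spectra of known local motifs. The sketch below is a simplified, hypothetical stand-in for the paper's ML-driven workflow: Gaussian peaks play the role of simulation-derived reference spectra, and least squares recovers the speciation fractions of a noisy mixture.

```python
import numpy as np

# Hypothetical reference spectra for three local bonding motifs, modeled here
# as Gaussian peaks on an energy grid. In the actual workflow these references
# would come from ML-potential-driven atomistic simulations, not Gaussians.
E = np.linspace(280, 300, 200)                 # photon energy grid (eV)

def peak(center, width=1.0):
    return np.exp(-((E - center) ** 2) / (2 * width ** 2))

references = np.column_stack([peak(285.0), peak(288.5), peak(292.0)])

# A "measured" spectrum: a noisy mixture of the three motifs
true_fractions = np.array([0.6, 0.3, 0.1])
rng = np.random.default_rng(1)
measured = references @ true_fractions + rng.normal(0, 0.01, E.size)

# Linear-combination fit: recover the speciation fractions by least squares
coeffs, *_ = np.linalg.lstsq(references, measured, rcond=None)
fractions = np.clip(coeffs, 0, None)
fractions /= fractions.sum()                   # normalize to unit total
```

The ML contribution in the study is upstream of this step: it supplies trustworthy reference structures and spectra for disordered systems where clean experimental standards do not exist.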

Manufacturing Optimized Designs for High Explosives
When materials are subjected to extreme environments, they face the risk of mixing together. This mixing may result in hydrodynamic instabilities, yielding undesirable side effects. Such instabilities present a grand challenge across multiple disciplines, especially in astrophysics, combustion, and shaped charges—devices used to focus the energy of a detonating explosive, creating high-velocity jets capable of penetrating deep into metal, concrete, or other target materials. To address the challenges in controlling these instabilities, LLNL researchers are coupling computing capabilities and manufacturing methods to rapidly develop and experimentally validate modifications to a shaped charge. This work, published in the Journal of Applied Physics, is part of Project DarkStar, which aims to control material deformation by investigating the scientific problems of complex hydrodynamics, shockwave physics, and energetic materials.
Applying modern technologies to von Neumann’s computational theories, the team employed AI and ML to explore new, computationally optimized designs. The use of additive manufacturing made it possible for researchers to rapidly realize even the most radical AI-designed components that would otherwise be considered “impossible” to create using traditional manufacturing methods. Project DarkStar illuminates the potential of AI/ML to support a wide range of national security missions.

Machine Learning Optimizes High-Power Laser Experiments
Commercial fusion energy plants and advanced compact radiation sources may rely on high-intensity, high-repetition-rate lasers capable of firing multiple times per second, but human operators could become the limiting factor in reacting to changes at these shot rates. Applying advanced computing to this problem, an international team of scientists from LLNL, the Fraunhofer Institute for Laser Technology (ILT), and the Extreme Light Infrastructure (ELI ERIC) collaborated on an experiment to optimize a high-intensity, high-repetition-rate laser using ML.
“Our goal was to demonstrate robust diagnosis of laser-accelerated ions and electrons from solid targets at a high intensity and repetition rate,” said LLNL’s Matthew Hill, the lead researcher. “Supported by rapid feedback from a machine-learning optimization algorithm to the laser front end, it was possible to maximize the total ion yield of the system.” The researchers trained a closed-loop ML code developed by LLNL’s Cognitive Simulation team on laser-target interaction data to optimize the laser pulse shape, allowing it to make adjustments as the experiment ran. Data generated during the experiment was fed back into the ML-based optimizer, allowing it to tweak the pulse shape on the fly.
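A closed-loop optimizer of this kind alternates between firing a shot, measuring the yield, and proposing a new pulse shape. The sketch below is a deliberately minimal stand-in (simple hill climbing on a mock yield function, not the Cognitive Simulation team's ML code, and all parameter names are hypothetical) that shows the feedback structure: each "shot" evaluates a candidate pulse-shape vector, and the measured yield decides whether the candidate is kept.

```python
import numpy as np

def ion_yield(pulse_params):
    """Mock stand-in for a laser shot: returns a 'measured' ion yield for a
    4-parameter pulse-shape vector. In the real experiment this number comes
    from the diagnostics, not from a formula."""
    sweet_spot = np.array([0.7, 0.2, -0.4, 0.5])   # hypothetical optimum
    return np.exp(-np.sum((pulse_params - sweet_spot) ** 2))

rng = np.random.default_rng(42)
best = rng.uniform(-1, 1, 4)        # initial pulse-shape guess
best_yield = ion_yield(best)

# Closed loop: propose a perturbed pulse shape, "fire" a shot, and keep the
# candidate only if the measured yield improves.
for _ in range(500):
    candidate = best + rng.normal(0, 0.1, 4)
    measured = ion_yield(candidate)
    if measured > best_yield:
        best, best_yield = candidate, measured
```

Production systems typically replace the hill-climbing step with a sample-efficient strategy such as Bayesian optimization, since every real shot costs time and hardware wear.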

FAA Awards Approval for Drone Swarm Testing
LLNL’s Autonomous Sensors team has received the Federal Aviation Administration’s (FAA’s) first and—to date—only certificate of authorization allowing autonomous drone swarming exercises on the Lab main campus. These flights will test swarm controls and sensor payloads used in a variety of national security applications. Autonomous drone swarms differ from the preprogrammed entertainment displays you might see at a baseball stadium because autonomous drones are designed to operate independently and make decisions in real time. “The Lab has been exploring how to apply cutting-edge artificial intelligence and machine learning to its autonomous sensors, but we couldn’t actually field-test those tools,” said Brian Wihl, systems engineer at the Lab and project lead for this initiative. “Receiving this approval enables us to take the next step in our research. We’ll be able to apply swarming technology across several national security mission spaces to see how the swarms learn and respond in real time.”

Video: AI for a Safe and Secure Future
A new video produced for the Special Competitive Studies Project AI Expo in May illustrates how LLNL, alongside the DOE’s 17 national labs, is harnessing the transformative potential of AI for a safer, more secure future. In 2022, LLNL made history by achieving fusion ignition, marking a pivotal moment for national security and clean energy. AI continues to unlock new insights into fusion, and through the combination of cutting-edge computer modeling, experimental data, and AI algorithms, LLNL and DOE are pushing the boundaries of scientific exploration like never before—from improving stockpile science to developing new fusion targets to discovering new drugs and materials. Through emerging AI-powered research, LLNL and the DOE/National Nuclear Security Administration are making the impossible possible and redefining the future of science and technology. Discover how the national labs, in collaboration with industry and academia, are forging a path toward a secure, AI-driven landscape.

Video: Data-Driven Finite Element Exterior Calculus
Sponsored by the Livermore-led MFEM (Modular Finite Element Methods) project, the FEM@LLNL Seminar Series focuses on finite element research and applications. On April 2, Nat Trask of the University of Pennsylvania presented “A Data-Driven Finite Element Exterior Calculus.” A video of his talk is available on the Lab’s YouTube channel.
Despite the recent flurry of work employing ML to develop surrogate models that accelerate scientific computation, the “black-box” underpinnings of current techniques fail to provide the verification and validation guarantees offered by modern finite element methods. This talk presented a data-driven finite element exterior calculus for developing reduced-order models of multiphysics systems when the governing equations are either unknown or require closure. The framework employs deep learning architectures typically used for logistic classification to construct a trainable partition of unity. Trask demonstrated how models may be obtained with the same robustness guarantees as traditional mixed finite element discretizations, with deep connections to contemporary techniques in graph neural networks. For digital-twin applications, where surrogates are intended to support real-time data assimilation and optimal control, his team further developed the framework to support Bayesian optimization of unknown physics on the underlying adjacency matrices of the chain complex.
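The "trainable partition of unity" idea is concrete: a softmax over affine functions of the coordinates, exactly the form of a logistic classifier, yields nonnegative functions that sum to one everywhere, which is the defining property of a partition of unity, with the affine parameters left trainable. A minimal 1D sketch of the generic construction (not Trask's implementation; the parameter values are arbitrary examples):

```python
import numpy as np

def softmax_partition(x, W, b):
    """Partition of unity from a classifier: softmax over affine functions
    of x. Every output column is nonnegative and the columns sum to one at
    each point, so they qualify as partition-of-unity functions; W and b
    would be the trainable parameters."""
    logits = x[:, None] * W + b                    # (npoints, nfunctions)
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    e = np.exp(logits)
    return e / e.sum(axis=1, keepdims=True)

x = np.linspace(0.0, 1.0, 101)
W = np.array([-20.0, 0.0, 20.0])    # example (untrained) slopes
b = np.array([5.0, 0.0, -15.0])     # example (untrained) offsets
phi = softmax_partition(x, W, b)    # three overlapping functions covering [0, 1]
```

Training moves the transition regions between the functions, which is how such architectures learn where to place the effective "elements" of the reduced-order model.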

Meet an LLNL Data Scientist
James Diffenderfer is an ML researcher in LLNL’s Center for Applied Scientific Computing. He currently contributes to two ML-related projects: ZOO, or zeroth-order optimization for scientific ML, and AsyncML, or asynchronous circuit design for dynamic ML adaptation. Much of his data science career has focused on adapting ML to real-world settings, ensuring that models function when they’re compressed or exposed to changes in data. Diffenderfer received an extensive education in mathematics, culminating in a PhD in Applied Mathematics and an MS in Computer Science from the University of Florida in 2020. He began his work at LLNL as an intern during his graduate studies. Now, as a researcher, he’s co-authored two recently accepted papers (at NeurIPS and ICLR) and presented at the 2023 Monterey Data Conference. Diffenderfer attributes much of his growth and success as a researcher to the mentorship he’s received at LLNL. He now prioritizes giving back, and he’s been mentoring summer interns since 2021. “I feel a sense of responsibility and privilege to serve as a mentor to student interns, and I hope that they can learn and grow from the experience as I did,” he says.
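Zeroth-order optimization, the subject of the ZOO project, estimates gradients from function evaluations alone, which matters when the objective is a black box with no analytic gradient. A generic two-point estimator is sketched below (an illustration of the technique, not project code; the function names are made up):

```python
import numpy as np

def zo_gradient(f, x, mu=1e-3, n_samples=20, rng=None):
    """Two-point zeroth-order gradient estimate: average directional finite
    differences of f along random Gaussian directions. Requires only
    function evaluations, never an analytic gradient."""
    rng = rng if rng is not None else np.random.default_rng()
    grad = np.zeros_like(x)
    for _ in range(n_samples):
        u = rng.standard_normal(x.size)
        grad += (f(x + mu * u) - f(x - mu * u)) / (2 * mu) * u
    return grad / n_samples

# Minimize a simple quadratic by gradient descent on the estimate
f = lambda z: np.sum((z - 1.0) ** 2)
x = np.zeros(4)
rng = np.random.default_rng(0)
for _ in range(200):
    x = x - 0.05 * zo_gradient(f, x, rng=rng)
```

The trade-off is cost: each estimate spends 2 * n_samples function evaluations, so the sample count balances gradient noise against evaluation budget.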