May 2, 2025

LLNL employees dive into AI’s transformative potential at aiEDGE for Innovation Day
More than 3,200 LLNL employees participated in the inaugural aiEDGE for Innovation Day on March 26, a hybrid event designed to dive into AI topics as a Lab community. Hosted with OpenAI and Anthropic, the event showcased demos and breakout sessions on real-world AI applications aimed at boosting productivity, streamlining operations, and advancing scientific research. LLNL Director Kim Budil urged employees to "think big" and adopt an explorer’s mindset, highlighting AI’s role in reshaping national security missions. Keynote speakers emphasized ethical AI use, data privacy, and the transformative potential of AI tools like OpenAI’s GPT-4o. The day reinforced LLNL’s commitment to leveraging AI for innovation and scientific leadership. Read the full article.

Sidekick gives researchers an inside look at high-tech facilities
Self-driving laboratories (SDLs) are revolutionizing experimental science by integrating robotics, AI, and remote data collection to autonomously perform high-repetition-rate (HRR) experiments. These setups are critical for advancing high-energy-density (HED) research and achieving breakthroughs in inertial fusion energy (IFE), a potential large-scale power source. Current facilities like LLNL’s National Ignition Facility operate as single-shot systems, requiring extensive preparation for each experiment. To overcome the challenge of analyzing terabytes of data per second and optimizing experimental parameters in real time, researchers developed Sidekick, a tabletop experimental setup that emulates the variability of real-world HED facilities. Using hardware-in-the-loop techniques and EPICS software, Sidekick enables robust testing of AI-driven machine learning models. Demonstrated at Supercomputing 2024, Sidekick showed that AI-driven closed-loop pulse shaping can fine-tune laser parameters far faster than human operators. This innovation paves the way for autonomous HRR labs, accelerating progress toward IFE and groundbreaking scientific discoveries. Read more about Sidekick.
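The closed-loop idea can be sketched in miniature. This is not Sidekick's actual control software (which uses EPICS and hardware-in-the-loop), just a toy illustration of the propose-measure-update loop it tests: a hypothetical "facility" applies a requested pulse amplitude with shot-to-shot drift, and a simple optimizer tunes the amplitude toward a target diagnostic reading. All names and numbers are illustrative.

```python
import random

TARGET = 3.7  # desired diagnostic reading (arbitrary units)

def take_shot(amplitude, rng):
    """Fire one 'shot': the response depends linearly on amplitude
    plus random shot-to-shot drift, unknown to the optimizer."""
    drift = rng.gauss(0.0, 0.05)
    return 1.8 * amplitude + drift

def closed_loop_tune(shots=200, seed=0):
    """Stochastic hill climbing: propose a nearby amplitude, measure
    one shot, and keep the proposal only if the error shrinks."""
    rng = random.Random(seed)
    best_amp = 1.0
    best_err = abs(take_shot(best_amp, rng) - TARGET)
    for _ in range(shots):
        candidate = best_amp + rng.gauss(0.0, 0.1)
        err = abs(take_shot(candidate, rng) - TARGET)
        if err < best_err:
            best_amp, best_err = candidate, err
    return best_amp, best_err

amp, err = closed_loop_tune()
```

A real HRR loop would replace the hill climber with a learned model and run against live diagnostics, but the structure, generate a setting, fire, measure, update, is the same.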

A language model that thinks before speaking
Together with collaborators from the University of Tübingen and the University of Maryland, LLNL researchers have developed Huginn, a prototype large language model (LLM) designed to enhance scientific reasoning. Unlike traditional LLMs that often rely on verbalized intermediate steps, Huginn incorporates a recurrent element in its neural network architecture, allowing it to perform extensive calculations in latent space before generating natural language outputs. This approach enables the model to assess and reassess its conclusions, reducing risks such as hallucination errors and omissions of critical information. By prioritizing introspection over immediate responses, Huginn aims to provide more reliable and physics-grounded insights, particularly in complex scientific domains like protein–protein interactions. This innovation represents a significant step toward developing AI tools capable of capturing intricate, non-verbalized scientific patterns, thereby advancing the application of AI in scientific research. Read more about Huginn and join the conversation on LinkedIn.
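The core architectural idea, reusing a recurrent block in latent space so that "thinking" depth is decoupled from parameter count, can be sketched as follows. This is a minimal toy, not the Huginn architecture itself; all dimensions and weights are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16                              # latent width (assumed)
W_in = rng.normal(0, 0.1, (d, d))   # input embedding
W_rec = rng.normal(0, 0.1, (d, d))  # shared recurrent block
W_out = rng.normal(0, 0.1, (d, d))  # decoder to output space

def forward(x, steps):
    """Embed the input, refine the latent state `steps` times with the
    same recurrent weights, then decode. More steps means more latent
    computation with no additional parameters."""
    e = W_in @ x
    h = np.zeros(d)
    for _ in range(steps):
        h = np.tanh(W_rec @ h + e)  # latent update conditioned on input
    return W_out @ h

x = rng.normal(size=d)
shallow = forward(x, steps=1)
deep = forward(x, steps=32)  # extra introspection, same parameter count
```

The point of the recurrence is that the model can spend a variable amount of computation reassessing its latent state before committing to natural-language output.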

LLNL scientists use AI to optimize antibodies against mutations and accelerate pandemic preparedness
LLNL researchers, in collaboration with multiple institutions, have developed an AI-driven platform to redesign antibodies compromised by viral mutations, enhancing pandemic preparedness. Utilizing supercomputing resources, the team optimized an existing SARS-CoV-2 antibody to restore its effectiveness against emerging Omicron subvariants while maintaining efficacy against the Delta variant. This approach, part of the Department of Defense’s GUIDE program, integrates experimental data, structural biology, bioinformatics, and molecular simulations, enabling rapid identification of key amino acid substitutions to counteract viral escape. The platform evaluated 376 antibody candidates from a theoretical space of over 10^17 possibilities, significantly accelerating the drug development process. Subsequent laboratory testing confirmed the restored potency of selected antibodies. This methodology not only expedites therapeutic updates in response to evolving viruses but also allows for preemptive optimization of antibodies, potentially reducing development costs and risks associated with novel drug discovery. Learn more about this AI application.

Distributed stochastic optimization of a neural representation network for time-space tomography reconstruction
This recently published research article introduces a cutting-edge method for dynamic 4D X-ray computed tomography (4DCT). Conventional CT methods struggle with reconstructing rapidly changing objects due to assumptions of static scenes, resulting in artifacts and inaccuracies. The authors propose a Distributed Implicit Neural Representation (DINR) network, which uses a novel distributed stochastic optimization algorithm to reconstruct high-resolution 4D images of deforming objects. DINR leverages neural networks to represent the object's properties as continuous functions of time-space coordinates, enabling superior reconstructions with reduced memory and computational requirements. Tested on both experimental and simulated datasets, DINR achieves high fidelity, resolving complex dynamics like crack propagation and deformation in real-world scenarios. This approach is scalable across multiple GPUs, making it revolutionary for imaging fast-changing scenes in scientific and industrial applications. Learn more about this method.
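The idea of representing the object as a continuous function of time-space coordinates can be illustrated with a toy coordinate network. This is not the DINR architecture; in the paper the network weights are optimized (distributed across GPUs) so that simulated projections match the measured sinogram, whereas here the weights are simply random and the sizes are made up.

```python
import numpy as np

rng = np.random.default_rng(1)
W1 = rng.normal(0, 0.5, (64, 4))   # hidden layer over (x, y, z, t)
b1 = np.zeros(64)
W2 = rng.normal(0, 0.5, (1, 64))   # scalar property output

def density(coords):
    """coords: (N, 4) array of (x, y, z, t) points; returns (N,) values
    of an object property (e.g., attenuation) at those coordinates."""
    h = np.maximum(W1 @ coords.T + b1[:, None], 0.0)  # ReLU hidden layer
    return (W2 @ h).ravel()

# Query the same spatial point at two times: the object is a continuous
# function of time-space, so no fixed 4D voxel grid is ever stored.
pts = np.array([[0.1, 0.2, 0.3, 0.0],
                [0.1, 0.2, 0.3, 1.0]])
vals = density(pts)
```

Because the representation is queried pointwise, memory scales with network size rather than with the number of voxels times time steps, which is what enables the reduced memory and compute the article describes.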

Asynchronous trajectory balance for scalable LLM training
This paper introduces Trajectory Balance with Asynchrony (TBA), a scalable reinforcement learning framework designed to enhance LLM post-training by decoupling data generation and policy updates. TBA leverages off-policy data stored in a central replay buffer and optimizes the policy using the trajectory balance objective, enabling faster training, improved exploration, and better performance in sparse reward settings. Experimental results demonstrate significant speedups (up to 50x) and performance improvements over existing methods in tasks like mathematical reasoning, preference fine-tuning, and automated red-teaming. Read the full research paper.
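For readers unfamiliar with the trajectory balance objective, a one-trajectory sketch follows. The backward policy is taken as deterministic (probability 1), as is common for autoregressive generation; the numbers are illustrative, and in TBA the trajectories would be drawn off-policy from the central replay buffer.

```python
import math

def tb_loss(log_z, logprobs, reward):
    """Trajectory balance for one trajectory with deterministic P_B:
    (log Z + sum_t log P_F(token_t) - log R(trajectory))^2."""
    log_pf = sum(logprobs)  # forward-policy log-likelihood of the sequence
    return (log_z + log_pf - math.log(reward)) ** 2

# A replay-buffer entry: per-token log-probs under the current policy
# plus the (possibly sparse) reward the trajectory earned.
loss = tb_loss(log_z=2.0, logprobs=[-0.5, -1.2, -0.3], reward=0.25)
```

Minimizing this drives the policy toward sampling trajectories with probability proportional to their reward, and because the loss is well defined for off-policy data, the generation workers and the learner can run asynchronously, which is where the reported speedups come from.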

LLNL presence strong at computer science principal investigators meeting
LLNL researchers participated in the Computer Science Principal Investigators Meeting held in Frisco, TX, in late March. Sponsored by the U.S. Department of Energy (DOE) Office of Science’s Advanced Scientific Computing Research (ASCR) program, the meeting brought together leading scientists to engage in discussions and share progress in AI for science, quantum computing, and other emerging areas.
LLNL researchers presented posters, gave talks, and engaged in discussions to help shape the future of scientific computing in support of the DOE mission. Presentations included Early Career Award recipient Shusen Liu on Bridging the Human-AI Gap with Visualization; Harshitha Menon on Advancing LLMs for HPC Software Development; Bhavya Kailkhura on Developing Trustworthy Scientific Foundation Models; Johannes Doerfert on Compiler and Runtimes for HPC Architectures; and Anders Petersson on Quantum Computing.
LLNL’s ASCR leads Timo Bremer, Kathryn Mohror, and Kristin Beck were also in attendance, facilitating discussions and collaboration across ASCR’s research portfolio.

Meet an LLNL data scientist: Min Priest
Min Priest is a computing scientist in the Computing Directorate’s Center for Applied Scientific Computing (CASC) and the 2025 lead of the Data Science Summer Institute (DSSI) internship program. Priest’s time at Livermore began with a CASC graduate student internship in 2018; they later became a postdoctoral researcher and are now a staff scientist. Their work lies mostly at the intersection of theoretical computer science, which they studied for their PhD, and high-performance computing, particularly centered on reducing communication in distributed memory algorithms using randomization. They have worked in all areas of this process, from designing algorithms and proving theorems to engineering optimized software, and they have also implemented scalable statistics for a variety of Laboratory projects. “I enjoy this sort of research for several reasons,” Priest said. “One, I find the math I focus on—randomized dimensionality reduction, specifically—to be beautiful and compelling. But I also chase the feeling of taking a blackboard idea and making it run on a huge computer; it still feels like magic every time something works.”
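For a flavor of the randomized dimensionality reduction Priest mentions, here is a toy Johnson-Lindenstrauss-style random projection, a standard example of the technique rather than code from Priest's projects: a scaled Gaussian matrix shrinks the ambient dimension while approximately preserving pairwise distances. All sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)
n, d, k = 50, 1000, 300                    # points, original dim, reduced dim
X = rng.normal(size=(n, d))                # data in high dimension
P = rng.normal(size=(d, k)) / np.sqrt(k)   # scaled Gaussian projection
Y = X @ P                                  # same points in k dimensions

# Compare one pairwise distance before and after projection; with high
# probability the ratio concentrates near 1 for k large enough.
orig = np.linalg.norm(X[0] - X[1])
proj = np.linalg.norm(Y[0] - Y[1])
ratio = proj / orig
```

In distributed-memory settings, sketches like this can stand in for much larger objects, which is one way randomization reduces communication.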
Throughout their several years of Livermore research, Priest has served as a mentor or co-mentor to more than 20 students and postdocs. They look forward to continuing this mentorship by serving as the lead for DSSI in its first fully in-person program since the pandemic, where they will provide students with a dynamic and connected experience and bring together data science research from around the Laboratory. “One- or few-on-one mentorship is how the next generation of scientists is formed,” Priest said. “We need to make personal connections with the people who came before us to gain practical knowledge of how to do science. It’s also rewarding as a mentor, as we get to learn a lot from the mentees.”