DOE honors seven early-career Lab scientists
Sept. 19, 2024 -
Seven LLNL scientists are recipients of the DOE's Office of Science Early Career Research Program (ECRP) award. Among them is Shusen Liu, a computer scientist in the Machine Intelligence Group in the Center for Applied Scientific Computing. His work focuses on understanding and interpreting the inner mechanisms of neural networks and integrating human domain knowledge with machine...
Measuring failure risk and resiliency in AI/ML models
Aug. 27, 2024 -
The widespread use of artificial intelligence (AI) and machine learning (ML) reveals not only the technology’s potential but also its pitfalls, such as how likely these models are to produce inaccurate results. AI/ML models can fail in unexpected ways even when not under attack, and their failures often look very different from the mistakes humans make. Knowing when and why failure occurs can prevent costly...
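One simple way to picture this kind of risk measurement is the sketch below, a generic illustration (using a scikit-learn toy model and simulated input drift, not the Lab's actual evaluation framework) of how accuracy can degrade when a model encounters data it was never trained on:

```python
# Minimal sketch: quantify failure risk as the accuracy gap between clean test
# data and the same data under a simple perturbation that mimics input drift.
# (Illustrative only; the dataset, model, and noise level are assumptions.)
import numpy as np
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

rng = np.random.default_rng(0)
X_drifted = X_test + rng.normal(scale=2.0, size=X_test.shape)  # simulated drift

clean_acc = accuracy_score(y_test, model.predict(X_test))
drift_acc = accuracy_score(y_test, model.predict(X_drifted))
print(f"clean accuracy:   {clean_acc:.3f}")
print(f"drifted accuracy: {drift_acc:.3f}  (the gap is one crude failure-risk signal)")
```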
Measuring attack vulnerability in AI/ML models
Aug. 26, 2024 -
LLNL is advancing the safety of AI/ML models in materials design, bioresilience, cyber security, stockpile surveillance, and many other areas. A key line of inquiry is model robustness, or how well a model withstands adversarial attacks. A paper accepted to the renowned 2024 International Conference on Machine Learning explores this issue in detail. In “Adversarial Robustness Limits via...
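For readers unfamiliar with the term, adversarial robustness is typically measured by comparing accuracy on clean inputs with accuracy on inputs that have been deliberately, slightly perturbed. The sketch below is a generic FGSM-style check in PyTorch, not the method from the ICML paper; the trained classifier `model`, inputs `x`, and labels `y` are assumed placeholders:

```python
# Generic FGSM robustness check (illustrative; not the paper's approach).
import torch
import torch.nn.functional as F

def fgsm_accuracy(model, x, y, epsilon=0.03):
    """Return (clean, adversarial) accuracy under an L-infinity FGSM attack."""
    model.eval()
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Nudge each input feature by epsilon in the direction that raises the loss.
    x_adv = (x + epsilon * x.grad.sign()).clamp(0.0, 1.0).detach()
    with torch.no_grad():
        clean_acc = (model(x).argmax(dim=1) == y).float().mean().item()
        adv_acc = (model(x_adv).argmax(dim=1) == y).float().mean().item()
    return clean_acc, adv_acc
```

A large drop from clean to adversarial accuracy signals a model that is easy to fool; robustness work aims to shrink that gap.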
LLNL, DOD, NNSA dedicate Rapid Response Laboratory and supercomputing system to accelerate biodefense
Aug. 15, 2024 -
LLNL recently welcomed officials from the Department of Defense (DOD) and National Nuclear Security Administration (NNSA) to dedicate a new supercomputing system and Rapid Response Laboratory (RRL). DOD is working with NNSA to significantly increase the computing capability available to the national biodefense programs. The collaboration has enabled expanding systems of the same architecture...
International workshop focuses on AI for critical infrastructure
Aug. 12, 2024 -
On August 4, LLNL researchers Felipe Leno da Silva and Ruben Glatt hosted the AI for Critical Infrastructure workshop at the 33rd International Joint Conference on Artificial Intelligence (IJCAI) in Jeju, South Korea. Professors Wencong Su (University of Michigan – Dearborn) and Yi Wang (University of Hong Kong) joined them in organizing the workshop focused on exploring AI opportunities and...
Evaluating trust and safety of large language models
Aug. 8, 2024 -
Accepted to the 2024 International Conference on Machine Learning, two Livermore papers examined the trustworthiness of large language models (LLMs): how a model uses data and makes decisions. In “TrustLLM: Trustworthiness in Large Language Models,” Bhavya Kailkhura and collaborators from universities and research organizations around the world developed a comprehensive trustworthiness...
Probing carbon capture, atom-by-atom
July 31, 2024 -
A team of scientists at LLNL has developed a machine-learning model to gain an atomic-level understanding of CO2 capture in amine-based sorbents. This innovative approach promises to enhance the efficiency of direct air capture (DAC) technologies, which are crucial for reducing the excessive amounts of CO2 already present in the atmosphere. The low cost of these sorbents has enabled several...
ISCP projects make machine learning advantages tangible
July 17, 2024 -
Data science tools are not only rapidly taking hold across disciplines; they are also constantly evolving. The applications, services, and techniques one cohort of scientists and engineers learns may be out of date by the time the next cohort arrives, especially as machine learning (ML) and artificial intelligence (AI) tools become commonplace.
To keep employees abreast of the latest tools, two data...
Department of Energy announces FASST initiative
July 16, 2024 -
On July 16, the Department of Energy formally announced the proposed Frontiers in Artificial Intelligence for Science, Security and Technology (FASST) initiative via the web page www.energy.gov/fasst (with accompanying video and fact sheet). As stated on the web page, the speed and scale of the AI landscape are significant motivators for investing in strategic AI capabilities: “Without FASST...
AI, fusion, and national security with Brian Spears (VIDEO)
July 13, 2024 -
This episode of the Eye on AI podcast delves into the cutting-edge world of AI and high-performance computing with Brian Spears, director of LLNL's AI Innovation Incubator. The episode is presented here as a video with the following description: "Brian shares his experience in driving AI into national security science and managing the nation’s nuclear stockpile. With a PhD in mechanical...
Signal and image science community comes together for annual workshop
June 26, 2024 -
Nearly 150 members of the signal and image science community recently came together to discuss the latest advances in the field and connect with colleagues, friends, and potential collaborators at the 28th annual Center for Advanced Signal and Image Science (CASIS) workshop. The event featured more than 50 technical contributions across six workshop tracks and a parallel tutorials session...
The surprising places you’ll find machine learning (VIDEO)
June 20, 2024 -
LLNL data scientists are applying ML to real-world applications on multiple scales. A new DSI-funded video highlights research at the nanoscale (developing better water treatment methods by predicting the behavior of water molecules under the extremely confined conditions of nanotubes); mesoscale (determining the likelihood and location of a dangerous wildfire-causing phenomenon called arcing...
LLNL and BridgeBio announce trials for supercomputing-discovered cancer drug
June 6, 2024 -
In a substantial milestone for supercomputing-aided drug design, LLNL and BridgeBio Oncology Therapeutics (BridgeBio) today announced clinical trials have begun for a first-in-class medication that targets specific genetic mutations implicated in many types of cancer. The development of the new drug—BBO-8520—is the result of collaboration among LLNL, BridgeBio and the National Cancer...
DOE, LLNL take center stage at inaugural AI expo
June 4, 2024 -
Held May 7–8 in Washington, DC, the Special Competitive Studies Project (SCSP) AI Expo showcased groundbreaking initiatives in AI and emerging technologies. LLNL Director Kim Budil and other Lab speakers presented at center stage and at the DOE exhibition booth. LLNL is rapidly expanding research investments to build transformative AI-driven solutions to critical national security challenges. While developing...
Statistical framework synchronizes medical study data
June 3, 2024 -
The risks and benefits of heart surgery, chemotherapy, vaccination, and other medical treatments can change based on the time of day they are administered. These variations arise in part due to changes in gene expression levels throughout the 24-hour day-night cycle, with around 50% of genes displaying oscillatory behavior.
To evaluate new therapies, investigators study how a gene’s...
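To make the idea of oscillatory expression concrete, the sketch below shows a generic cosinor-style fit (a standard technique, not the Lab's statistical framework); the measurement times and expression values are assumed inputs:

```python
# Generic cosinor-style fit: estimate a gene's 24-hour rhythm from timestamped
# expression measurements by least squares. (Illustrative sketch only.)
import numpy as np

def fit_24h_rhythm(times_h, expression):
    """Fit expression ~ mesor + A*cos(2*pi*t/24) + B*sin(2*pi*t/24)."""
    t = np.asarray(times_h, dtype=float)
    w = 2 * np.pi / 24.0
    X = np.column_stack([np.ones_like(t), np.cos(w * t), np.sin(w * t)])
    coef, *_ = np.linalg.lstsq(X, np.asarray(expression, dtype=float), rcond=None)
    mesor, a, b = coef
    amplitude = np.hypot(a, b)                        # strength of the oscillation
    peak_hour = (np.arctan2(b, a) % (2 * np.pi)) / w  # time of peak expression
    return mesor, amplitude, peak_hour
```

Genes whose fitted amplitude rises clearly above the noise floor are the roughly 50% described as oscillatory; the estimated peak hour is what makes the timing of a treatment matter.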
FAA awards approval for drone swarm testing
May 29, 2024 -
LLNL’s Autonomous Sensors team has received the Federal Aviation Administration’s (FAA’s) first and—to date—only certificate of authorization allowing autonomous drone swarming exercises on the Lab’s main campus. These flights will test swarm controls and sensor payloads used in a variety of national security applications. Autonomous drone swarms differ from those used for entertainment...
Harnessing the power of AI for a safe and secure future (VIDEO)
May 13, 2024 -
LLNL, alongside the Department of Energy’s (DOE’s) 17 national labs, is harnessing the transformative potential of AI for a safer, more secure future. In 2022, LLNL made history by achieving fusion ignition, marking a pivotal moment for national security and clean energy. While AI continues to unlock new insights into fusion, through the combination of cutting-edge computer modeling...
Manufacturing optimized designs for high explosives
May 13, 2024 -
When materials are subjected to extreme environments, they risk mixing together. This mixing may result in hydrodynamic instabilities, yielding undesirable side effects. Such instabilities present a grand challenge across multiple disciplines, especially in astrophysics, combustion, and shaped charges—devices used to focus the energy of a detonating explosive, thereby creating a...
GUIDE team develops approach to redesign antibodies against viral pandemics
May 8, 2024 -
In a groundbreaking development for addressing future viral pandemics, a multi-institutional team involving LLNL researchers has successfully combined an AI-backed platform with supercomputing to redesign and restore the effectiveness of antibodies whose ability to fight viruses has been compromised by viral evolution. The team’s research is published in the journal Nature and showcases a...
UC/LLNL joint workshop sparks crucial dialogue on AI safety
May 2, 2024 -
Representatives from DOE national laboratories, academia, and industry convened recently at the University of California Livermore Collaboration Center (UCLCC) for a workshop aimed at aligning strategies for ensuring safe AI. The daylong event, attended by dozens of AI researchers, included keynote speeches by thought leaders, panels by technical researchers and policymakers, and breakout...