Data Science in the News


DSI Consulting Service spurs innovation

Nov. 22, 2024 - 
Today, research in nearly every scientific discipline involves data science techniques. Whether researchers are using sophisticated tools to manage and analyze massive datasets or applying machine learning algorithms to gain new insights, such techniques are becoming ever more prevalent. However, scientists and engineers may not have specific training in the newest, most pertinent data science and...

El Capitan verified as world's fastest supercomputer

Nov. 18, 2024 - 
LLNL, in collaboration with the National Nuclear Security Administration (NNSA), Hewlett Packard Enterprise and AMD, has officially unveiled El Capitan as the world's most powerful supercomputer and the first exascale system dedicated to national security. Verified at 1.742 exaflops (1.742 quintillion calculations per second) on the High Performance Linpack—the standard benchmark used by the...

ICECap looks to use exascale fusion simulations to pioneer digital design

Oct. 17, 2024 - 
A groundbreaking multidisciplinary team of LLNL researchers is combining the power of exascale computing with AI, advanced workflows and graphics processing unit (GPU) acceleration to advance scientific innovation and revolutionize digital design. The project, called ICECap (Inertial Confinement on El Capitan), is a transformative approach to inertial confinement fusion (ICF) design optimization...

DOE honors seven early-career Lab scientists

Sept. 19, 2024 - 
Seven LLNL scientists are recipients of the DOE's Office of Science Early Career Research Program (ECRP) award. Among them is Shusen Liu, a computer scientist in the Machine Intelligence Group in the Center for Applied Scientific Computing. His work focuses on understanding and interpreting the inner mechanisms of neural networks and integrating human domain knowledge with machine...

Measuring failure risk and resiliency in AI/ML models

Aug. 27, 2024 - 
The widespread use of artificial intelligence (AI) and machine learning (ML) reveals not only the technology’s potential but also its pitfalls, such as how likely these models are to be inaccurate. AI/ML models can fail in unexpected ways even when not under attack, and their failures can differ from how humans perform in the same scenarios. Knowing when and why failure occurs can prevent costly...

Measuring attack vulnerability in AI/ML models

Aug. 26, 2024 - 
LLNL is advancing the safety of AI/ML models in materials design, bioresilience, cybersecurity, stockpile surveillance, and many other areas. A key line of inquiry is model robustness, or how well a model defends against adversarial attacks. A paper accepted to the renowned 2024 International Conference on Machine Learning explores this issue in detail. In “Adversarial Robustness Limits via...

LLNL researchers unleash machine learning in designing advanced lattice structures

Aug. 22, 2024 - 
Characterized by their intricate patterns and hierarchical designs, lattice structures hold immense potential for revolutionizing industries ranging from aerospace to biomedical engineering, due to their versatility and customizability. However, the complexity of these structures and the vast design space they encompass have posed significant hurdles for engineers and scientists, and...

LLNL, DOD, NNSA dedicate Rapid Response Laboratory and supercomputing system to accelerate biodefense

Aug. 15, 2024 - 
LLNL recently welcomed officials from the Department of Defense (DOD) and the National Nuclear Security Administration (NNSA) to dedicate a new supercomputing system and Rapid Response Laboratory (RRL). DOD is working with NNSA to significantly increase the computing capability available to the national biodefense programs. The collaboration has enabled the expansion of systems of the same architecture...

International workshop focuses on AI for critical infrastructure

Aug. 12, 2024 - 
On August 4, LLNL researchers Felipe Leno da Silva and Ruben Glatt hosted the AI for Critical Infrastructure workshop at the 33rd International Joint Conference on Artificial Intelligence (IJCAI) in Jeju, South Korea. Professors Wencong Su (University of Michigan – Dearborn) and Yi Wang (University of Hong Kong) joined them in organizing the workshop, which focused on exploring AI opportunities and...

Evaluating trust and safety of large language models

Aug. 8, 2024 - 
Accepted to the 2024 International Conference on Machine Learning, two Livermore papers examined the trustworthiness of large language models (LLMs), that is, how a model uses data and makes decisions. In “TrustLLM: Trustworthiness in Large Language Models,” Bhavya Kailkhura and collaborators from universities and research organizations around the world developed a comprehensive trustworthiness...

ISCP projects make machine learning advantages tangible

July 17, 2024 - 
Data science tools are not only rapidly taking hold across disciplines but also constantly evolving. The applications, services, and techniques one cohort of scientists and engineers learns may be out of date by the time the next cohort arrives, especially as machine learning (ML) and artificial intelligence (AI) tools become commonplace. To keep employees abreast of the latest tools, two data...

Department of Energy announces FASST initiative

July 16, 2024 - 
On July 16, the Department of Energy formally announced the proposed Frontiers in Artificial Intelligence for Science, Security and Technology (FASST) initiative via the web page www.energy.gov/fasst (with accompanying video and fact sheet). As stated on the web page, the speed and scale of the AI landscape are significant motivators for investing in strategic AI capabilities: “Without FASST...

AI, fusion, and national security with Brian Spears (VIDEO)

July 13, 2024 - 
This episode of the Eye on AI podcast delves into the cutting-edge world of AI and high-performance computing with Brian Spears, director of LLNL's AI Innovation Incubator. The episode is presented here as a video with the following description: "Brian shares his experience in driving AI into national security science and managing the nation’s nuclear stockpile. With a PhD in mechanical...

Signal and image science community comes together for annual workshop

June 26, 2024 - 
Nearly 150 members of the signal and image science community recently came together to discuss the latest advances in the field and connect with colleagues, friends, and potential collaborators at the 28th annual Center for Advanced Signal and Image Science (CASIS) workshop. The event featured more than 50 technical contributions across six workshop tracks and a parallel tutorials session...

LLNL and BridgeBio announce trials for supercomputing-discovered cancer drug

June 6, 2024 - 
In a substantial milestone for supercomputing-aided drug design, LLNL and BridgeBio Oncology Therapeutics (BridgeBio) today announced clinical trials have begun for a first-in-class medication that targets specific genetic mutations implicated in many types of cancer. The development of the new drug—BBO-8520—is the result of collaboration among LLNL, BridgeBio and the National Cancer...

DOE, LLNL take center stage at inaugural AI expo

June 4, 2024 - 
Held May 7–8 in Washington, DC, the Special Competitive Studies Project (SCSP) AI Expo showcased groundbreaking initiatives in AI and emerging technologies. Kim Budil and other Lab speakers presented at center stage and the DOE exhibition booth. LLNL is rapidly expanding research investments to build transformative AI-driven solutions to critical national security challenges. While developing...

FAA awards approval for drone swarm testing

May 29, 2024 - 
LLNL’s Autonomous Sensors team has received the Federal Aviation Administration’s (FAA’s) first and—to date—only certificate of authorization allowing autonomous drone swarming exercises on the Lab main campus. These flights will test swarm controls and sensor payloads used in a variety of national security applications. Autonomous drone swarms differ from those used for entertainment...

Manufacturing optimized designs for high explosives

May 13, 2024 - 
When materials are subjected to extreme environments, they face the risk of mixing together. This mixing may result in hydrodynamic instabilities, yielding undesirable side effects. Such instabilities present a grand challenge across multiple disciplines, especially in astrophysics, combustion, and shaped charges, devices used to focus the energy of a detonating explosive, thereby creating a...

Harnessing the power of AI for a safe and secure future (VIDEO)

May 13, 2024 - 
LLNL, together with the rest of the Department of Energy’s (DOE’s) 17 national labs, is harnessing the transformative potential of AI for a safer, more secure future. In 2022, LLNL made history by achieving fusion ignition, marking a pivotal moment for national security and clean energy. While AI continues to unlock new insights into fusion, through the combination of cutting-edge computer modeling...

Accelerating material characterization: Machine learning meets X-ray absorption spectroscopy

May 10, 2024 - 
LLNL scientists have developed a new approach that can rapidly predict the structure and chemical composition of heterogeneous materials. In a new study in ACS Chemistry of Materials, Wonseok Jeong and Tuan Anh Pham combined machine learning with X-ray absorption near-edge spectroscopy (XANES) to elucidate the chemical speciation of amorphous carbon nitrides. The research...