Cognitive simulation supercharges scientific research
Jan. 10, 2023 -
Computer modeling has been essential to scientific research for more than half a century—since the advent of computers sufficiently powerful to handle modeling’s computational load. Models simulate natural phenomena to aid scientists in understanding their underlying principles. Yet, while the most complex models running on supercomputers may contain millions of lines of code and generate...
Supercomputing’s critical role in the fusion ignition breakthrough
Dec. 21, 2022 -
On December 5th, the research team at LLNL's National Ignition Facility (NIF) achieved a historic win in energy science: for the first time, a controlled fusion reaction produced more energy (3.15 megajoules) than the laser energy delivered to drive it (2.05 megajoules). High-performance computing was key to this breakthrough (called ignition), and HPCwire...
LLNL staff returns to Texas-sized Supercomputing Conference
Nov. 23, 2022 -
The 2022 International Conference for High Performance Computing, Networking, Storage, and Analysis (SC22) returned to Dallas as a large contingent of LLNL staff participated in sessions, panels, paper presentations, and workshops centered around HPC. The world’s largest conference of its kind celebrated its highest in-person attendance since the start of the COVID-19 pandemic, with about 11...
LLNL researchers win HPCwire award for applying cognitive simulation to ICF
Nov. 17, 2022 -
The high performance computing publication HPCwire announced LLNL as the winner of its Editor’s Choice award for Best Use of HPC in Energy for applying cognitive simulation (CogSim) methods to inertial confinement fusion (ICF) research. The award was presented at the largest supercomputing conference in the world: the 2022 International Conference for High Performance Computing, Networking...
Understanding the universe with applied statistics (VIDEO)
Nov. 17, 2022 -
In a new video posted to the Lab’s YouTube channel, statistician Amanda Muyskens describes MuyGPs, her team’s innovative and computationally efficient Gaussian process hyperparameter estimation method for large datasets. The method has been applied to space-based image classification and released for open-source use in the Python package MuyGPyS. MuyGPs will help astronomers and astrophysicists...
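For readers unfamiliar with the underlying task, the sketch below shows what Gaussian process hyperparameter estimation means in the simplest setting: choosing kernel parameters by maximizing the marginal log-likelihood of the data. It is a generic illustration only, not the MuyGPyS API, whose contribution is making this kind of estimation scale to large datasets.

```python
# Generic sketch of Gaussian process hyperparameter estimation by maximizing
# the marginal log-likelihood. Illustrative only; NOT the MuyGPyS API.
import numpy as np
from scipy.optimize import minimize

def rbf_kernel(X1, X2, length_scale, variance):
    """Squared-exponential covariance between two sets of 1-D inputs."""
    d2 = (X1[:, None] - X2[None, :]) ** 2
    return variance * np.exp(-0.5 * d2 / length_scale**2)

def neg_log_marginal_likelihood(params, X, y, noise=1e-3):
    """Negative log marginal likelihood of a zero-mean GP."""
    length_scale, variance = np.exp(params)            # optimize in log space
    K = rbf_kernel(X, X, length_scale, variance) + noise * np.eye(len(X))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))  # K^{-1} y
    return 0.5 * y @ alpha + np.sum(np.log(np.diag(L))) + 0.5 * len(X) * np.log(2 * np.pi)

# Toy data: noisy samples of a smooth function
rng = np.random.default_rng(0)
X = np.sort(rng.uniform(0, 5, 40))
y = np.sin(X) + 0.1 * rng.standard_normal(40)

result = minimize(neg_log_marginal_likelihood, x0=np.log([1.0, 1.0]), args=(X, y))
print("estimated length scale, variance:", np.exp(result.x))
```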
Scientific discovery for stockpile stewardship
Sept. 27, 2022 -
Among the significant scientific discoveries that have helped ensure the reliability of the nation’s nuclear stockpile is the advancement of cognitive simulation, in which researchers develop AI/ML algorithms and software to retrain part of a physics-based simulation model on experimental data itself. The result is a model that “knows the best of both worlds,” says Brian Spears, a...
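The "retrain part of the model on experimental data" idea follows the general pattern of transfer learning. Below is a minimal, hedged sketch of that pattern: a network pre-trained on simulation outputs has most layers frozen, and only the final layer is fine-tuned on scarcer experimental measurements. The network shape and data here are placeholders, not LLNL's actual CogSim code.

```python
# Minimal transfer-learning sketch: freeze a simulation-trained network and
# fine-tune only its last layer on experimental data (placeholders throughout).
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(8, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1),
)
# ... assume `model` has already been trained on a large set of simulations ...

# Freeze every layer, then unfreeze only the final one.
for param in model.parameters():
    param.requires_grad = False
for param in model[-1].parameters():
    param.requires_grad = True

# Fine-tune on a small (hypothetical) experimental dataset.
x_exp = torch.randn(32, 8)      # hypothetical experimental inputs
y_exp = torch.randn(32, 1)      # hypothetical measured outputs
optimizer = torch.optim.Adam(
    filter(lambda p: p.requires_grad, model.parameters()), lr=1e-3)
loss_fn = nn.MSELoss()
for _ in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(x_exp), y_exp)
    loss.backward()
    optimizer.step()
```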
Assured and robust…or bust
June 30, 2022 -
The consequences of a machine learning (ML) error that presents irrelevant advertisements to a group of social media users may seem relatively minor. However, ML systems are often opaque, and this opacity, combined with the fact that they are nascent and imperfect, makes trusting their accuracy difficult in mission-critical situations, such as recognizing life-or-death risks to military personnel or advancing materials...
CASC team wins best paper at visualization symposium
May 25, 2022 -
A research team from LLNL’s Center for Applied Scientific Computing won Best Paper at the 15th IEEE Pacific Visualization Symposium (PacificVis), which was held virtually on April 11–14. Computer scientists Harsh Bhatia, Peer-Timo Bremer, and Peter Lindstrom collaborated with University of Utah colleagues Duong Hoang, Nate Morrical, and Valerio Pascucci on “AMM: Adaptive Multilinear Meshes.”...
Unprecedented multiscale model of protein behavior linked to cancer-causing mutations
Jan. 10, 2022 -
LLNL researchers and a multi-institutional team have developed a highly detailed, machine learning–backed multiscale model revealing the importance of lipids to the signaling dynamics of RAS, a family of proteins whose mutations are linked to numerous cancers. Published by the Proceedings of the National Academy of Sciences, the paper details the methodology behind the Multiscale Machine...
LLNL establishes AI Innovation Incubator to advance artificial intelligence for applied science
Dec. 20, 2021 -
LLNL has established the AI Innovation Incubator (AI3), a collaborative hub aimed at uniting experts in artificial intelligence (AI) from LLNL, industry, and academia to advance AI for large-scale scientific and commercial applications. LLNL has entered into new memoranda of understanding with Google, IBM, and NVIDIA, with plans to use the incubator to facilitate discussions and form future...
Building confidence in materials modeling using statistics
Oct. 31, 2021 -
LLNL statisticians, computational modelers, and materials scientists have been developing a statistical framework for researchers to better assess the relationship between model uncertainties and experimental data. The Livermore-developed statistical framework is intended to assess sources of uncertainty in strength model input, recommend new experiments to reduce those sources of uncertainty...
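The article does not describe the framework's internals; as a loose illustration of how input uncertainty can be related to experimental data, the sketch below propagates uncertain strength-model parameters through a stand-in model by Monte Carlo sampling and compares the spread of predictions with a measured value. The model, parameter values, and measurement are all placeholders, not the Livermore framework.

```python
# Toy Monte Carlo propagation of strength-model input uncertainty; all numbers
# and the model itself are placeholders for illustration only.
import numpy as np

rng = np.random.default_rng(2)

def toy_strength_model(yield_stress, hardening, strain=0.1):
    """Stand-in for an expensive strength simulation."""
    return yield_stress * (1.0 + hardening * strain)

# Uncertain inputs represented by sampling distributions (hypothetical units of MPa)
yield_stress = rng.normal(loc=300.0, scale=20.0, size=10_000)
hardening = rng.normal(loc=2.0, scale=0.5, size=10_000)

predictions = toy_strength_model(yield_stress, hardening)
measured = 335.0   # hypothetical experimental value (MPa)

print(f"prediction mean  : {predictions.mean():.1f} MPa")
print(f"prediction stddev: {predictions.std():.1f} MPa")
print(f"fraction of predictions below the measurement: {np.mean(predictions < measured):.0%}")
```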
Summer scholar develops data-driven approaches to key NIF diagnostics
Oct. 20, 2021 -
Su-Ann Chong's summer project, “A Data-Driven Approach Towards NIF Neutron Time-of-Flight Diagnostics Using Machine Learning and Bayesian Inference,” is aimed at presenting a different take on nToF diagnostics. Neutron time-of-flight diagnostics are an essential tool to diagnose the implosion dynamics of inertial confinement fusion experiments at NIF, the world’s largest and most energetic...
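To give a sense of the Bayesian-inference half of such a project, the toy example below infers a single parameter (a mean arrival time) from noisy measurements using a gridded posterior. It is purely illustrative; the actual nToF analysis involves far more detailed forward models and data.

```python
# Toy Bayesian inference of an arrival time from noisy measurements.
# Illustrative only; not the project's actual analysis.
import numpy as np

rng = np.random.default_rng(1)
true_arrival = 5.0                                     # hypothetical "true" arrival time (arb. units)
data = true_arrival + 0.3 * rng.standard_normal(50)    # noisy measurements

grid = np.linspace(3.0, 7.0, 1000)                     # candidate arrival times
prior = np.ones_like(grid)                             # flat prior
# Gaussian likelihood with known noise level sigma = 0.3
log_like = np.array([-0.5 * np.sum((data - t) ** 2) / 0.3**2 for t in grid])
posterior = prior * np.exp(log_like - log_like.max())
posterior /= np.trapz(posterior, grid)

mean = np.trapz(grid * posterior, grid)
print(f"posterior mean arrival time: {mean:.3f}")
```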
Ana Kupresanin featured in FOE alumni spotlight
March 10, 2021 -
LLNL's Ana Kupresanin, deputy director of the Center for Applied Scientific Computing and member of the Data Science Institute council, was recently featured in a Frontiers of Engineering (FOE) alumni spotlight. Kupresanin develops statistical and machine learning models that incorporate real-world variability and probabilistic behavior to quantify uncertainties in engineering and physics...
Lab researchers explore ‘learn-by-calibration’ approach to deep learning to accurately emulate scientific process
Feb. 10, 2021 -
An LLNL team has developed a “Learn-by-Calibrating” method for creating powerful scientific emulators that could be used as proxies for far more computationally intensive simulators. Researchers found the approach results in high-quality predictive models that are closer to real-world data and better calibrated than previous state-of-the-art methods. The LbC approach is based on interval...
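As a rough illustration of interval-based training for emulators, the sketch below trains a small network to output lower and upper prediction bounds using the standard quantile ("pinball") loss. This is meant only to convey the general idea of learning intervals; it is not the published Learn-by-Calibrating algorithm, and all inputs are placeholders.

```python
# Sketch of training an emulator to emit prediction intervals with a quantile
# (pinball) loss. Generic illustration; not the LbC method itself.
import torch
import torch.nn as nn

def pinball_loss(pred, target, quantile):
    """Quantile regression loss for one quantile level."""
    err = target - pred
    return torch.mean(torch.maximum(quantile * err, (quantile - 1) * err))

net = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 2))  # [lower, upper] bounds
x = torch.randn(256, 4)     # hypothetical simulator inputs
y = torch.randn(256, 1)     # hypothetical simulator outputs

optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(200):
    optimizer.zero_grad()
    out = net(x)
    lower, upper = out[:, :1], out[:, 1:]
    loss = pinball_loss(lower, y, 0.05) + pinball_loss(upper, y, 0.95)
    loss.backward()
    optimizer.step()
```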
What put LLNL at the center of U.S. supercomputing in 2020?
Nov. 12, 2020 -
The HPC world is waiting for the next series of transitions to far larger machines with exascale capabilities. By this time next year, the twice-yearly Top500 ranking of the world’s most powerful systems will be refreshed at the top as Frontier, El Capitan, Aurora, and other DOE systems come online. While LLNL was already planning around AI acceleration for its cognitive simulation aims and had a number...
AI gets a boost via LLNL, SambaNova collaboration
Oct. 20, 2020 -
LLNL has installed a state-of-the-art artificial intelligence (AI) accelerator from SambaNova Systems, the National Nuclear Security Administration (NNSA) announced today, allowing researchers to more effectively combine AI and machine learning (ML) with complex scientific workloads. LLNL has begun integrating the new AI hardware, SambaNova Systems DataScale™, into the NNSA’s Corona...
LLNL, ANL and GSK provide early glimpse into Cerebras AI system performance
Oct. 13, 2020 -
AI chip and systems startup Cerebras was one of many AI companies showcased at the AI Hardware Summit, which concluded last week. Cerebras invited collaborators from LLNL, Argonne National Laboratory, and GlaxoSmithKline to talk about their early work on Cerebras machines and their future plans. Livermore Computing's CTO Bronis de Supinski said, “We have this vision for performing cognitive...
Machine learning speeds up and enhances physics calculations
Oct. 1, 2020 -
Interpreting data from NIF’s cutting-edge high energy density science experiments relies on physics calculations that are so complex they can challenge LLNL supercomputers, which stand among the best in the world. A collaboration between LLNL and French researchers found a novel way to incorporate machine learning and neural networks to significantly speed up inertial confinement fusion...
DL-based surrogate models outperform simulators and could hasten scientific discoveries
June 17, 2020 -
Surrogate models supported by neural networks can perform as well, and in some ways better, than computationally expensive simulators and could lead to new insights in complicated physics problems such as inertial confinement fusion (ICF), LLNL scientists reported. Read more at LLNL News.
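The basic surrogate workflow is straightforward to sketch: generate input/output pairs from an expensive simulator, fit a network to them, then use the network as a fast stand-in. The example below uses a cheap placeholder function in place of a real simulator and is not the ICF surrogate described in the paper.

```python
# Minimal surrogate-model sketch: fit a small network to simulator samples,
# then evaluate the network instead of the (expensive) simulator.
import numpy as np
import torch
import torch.nn as nn

def simulator(x):
    """Placeholder for an expensive physics simulation."""
    return np.sin(3 * x[:, :1]) * np.exp(-x[:, 1:2] ** 2)

rng = np.random.default_rng(3)
X = rng.uniform(-1, 1, size=(2048, 2)).astype(np.float32)
Y = simulator(X).astype(np.float32)

surrogate = nn.Sequential(nn.Linear(2, 64), nn.Tanh(),
                          nn.Linear(64, 64), nn.Tanh(),
                          nn.Linear(64, 1))
opt = torch.optim.Adam(surrogate.parameters(), lr=1e-3)
x_t, y_t = torch.from_numpy(X), torch.from_numpy(Y)
for _ in range(500):
    opt.zero_grad()
    loss = nn.functional.mse_loss(surrogate(x_t), y_t)
    loss.backward()
    opt.step()

# Once trained, one cheap forward pass replaces a simulator run.
x_new = torch.rand(10, 2) * 2 - 1
with torch.no_grad():
    print(surrogate(x_new))
```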
Modeling neuronal cultures on 'brain-on-a-chip' devices
June 12, 2020 -
For the past several years, LLNL scientists and engineers have made significant progress in development of a three-dimensional “brain-on-a-chip” device capable of recording neural activity of human brain cell cultures grown outside the body. The team has developed a statistical model for analyzing the structures of neuronal networks that form among brain cells seeded on in vitro brain-on-a...
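The teaser does not detail the team's statistical model; as a generic illustration of characterizing neuronal-network structure, the snippet below computes a few standard graph statistics on a random connectivity graph with networkx. The graph and its parameters are hypothetical.

```python
# Generic graph-statistics illustration for a hypothetical neuronal network.
import networkx as nx

# Hypothetical connectivity graph: 100 "neurons", 5% connection probability
graph = nx.erdos_renyi_graph(n=100, p=0.05, seed=42)

degrees = [d for _, d in graph.degree()]
print("mean degree           :", sum(degrees) / len(degrees))
print("clustering coefficient:", nx.average_clustering(graph))
print("connected components  :", nx.number_connected_components(graph))
```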