Jan. 26, 2023
LLNL Achieves Fusion Ignition…with Help from Data Science
On December 13, the Department of Energy (DOE) and National Nuclear Security Administration (NNSA) announced the achievement of fusion ignition at LLNL—a major scientific breakthrough decades in the making that will pave the way for advancements in national defense and the future of clean power. In the early hours of December 5, a team at LLNL’s National Ignition Facility (NIF) conducted the first controlled fusion experiment in history to reach this milestone, also known as scientific energy breakeven, meaning it produced more energy from fusion than the laser energy used to drive it. This first-of-its-kind feat will provide unprecedented capability to support NNSA’s Stockpile Stewardship Program and will provide invaluable insights into the prospects of clean fusion energy.
“The pursuit of fusion ignition in the laboratory is one of the most significant scientific challenges ever tackled by humanity, and achieving it is a triumph of science, engineering, and most of all, people,” LLNL director Kim Budil said. “Crossing this threshold is the vision that has driven 60 years of dedicated pursuit—a continual process of learning, building, expanding knowledge and capability, and then finding ways to overcome the new challenges that emerged. These are the problems that the U.S. national laboratories were created to solve.”
Considered the “holy grail” of fusion energy research, ignition comes just over a year after NIF reached a then-record-setting 1.3-megajoule shot, which produced about 70% of the energy put into the experiment via fusion reactions, planting NIF firmly on the doorstep of the milestone. Researchers attributed the success after previous near-misses to a combination of improvements in target design, better predictive modeling backed by machine learning (ML) and cognitive simulation, advances in laser capabilities, and other adjustments (see next story). With these advancements and promising models, team members said they had “high hopes” and “good reasons to be optimistic” that the December 5 shot would be extraordinary.
Watch a short video (1:28) and the full DOE press conference and panel discussion (1:53:30) about this historic event. Various news outlets continue to provide coverage, and more details can be found on the NIF website. In addition, the DSI invites everyone to attend a February 15 (10:00am Pacific) hybrid seminar by LLNL researchers Kelli Humbird and Luc Peterson, titled “Calling the Shot: How AI Predicted Fusion Ignition Before It Happened.” Attendees outside LLNL can request a WebEx invitation from datascience@llnl.gov.
Cognitive Simulation Supercharges Scientific Research
To improve the fidelity of complex computer models and to wrangle the growing amount of data, LLNL researchers are developing an array of hardware, software, and artificial intelligence (AI) techniques they call cognitive simulation (CogSim). Researchers will use CogSim to find large-scale structures in big datasets, teach existing models to better mirror experimental results, and create a feedback loop between experiments and models that accelerates research advances. CogSim’s goal is ambitious: to become a fourth pillar of scientific research, joining theory, experiment, and computer modeling as tools of discovery. The latest issue of the Lab’s magazine Science & Technology Review provides an in-depth look at CogSim in the cover story.
“We can train AI’s deep-learning models on our simulations to make a perfect surrogate that knows exactly what the model’s software code knows,” explains Brian Spears, who leads the Lab Director’s Initiative in CogSim. “We then retrain part of this model on the experimental data itself. Now we get a model that knows the best of both worlds. It understands the theory from the simulation side, and it makes accurate corrections to what we actually see in the experiments.”
Applying CogSim to a wide range of research could benefit many fields. For example, a CogSim method called transfer learning is helping researchers improve inertial confinement fusion models, like those used in the groundbreaking NIF shot described in the previous story. Jim Brase, LLNL’s deputy associate director for Computing, notes in the issue’s Commentary that “CogSim improves the predictive power of models and guides optimization methods for unique designs and superior performance in multiple Laboratory applications.”
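To make the transfer-learning idea concrete, below is a minimal sketch in PyTorch of the two-stage workflow Spears describes: pretrain a surrogate on plentiful simulation data, then retrain only part of it on scarce experimental data. The architecture, dataset sizes, and random stand-in data are illustrative assumptions, not LLNL’s actual CogSim code.

```python
# Minimal sketch of simulation-to-experiment transfer learning (illustrative only;
# not LLNL's CogSim code). A surrogate is pretrained on plentiful simulation data,
# then only its final layer is retrained on scarce experimental data.
import torch
import torch.nn as nn

class Surrogate(nn.Module):
    def __init__(self, n_inputs=8, n_outputs=1):
        super().__init__()
        self.features = nn.Sequential(          # layers kept frozen after pretraining
            nn.Linear(n_inputs, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
        )
        self.head = nn.Linear(64, n_outputs)    # layer retrained on experiments

    def forward(self, x):
        return self.head(self.features(x))

def fit(model, params, x, y, epochs=200, lr=1e-3):
    opt = torch.optim.Adam(params, lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()

# Stage 1: pretrain on a synthetic stand-in for abundant simulation data.
x_sim, y_sim = torch.randn(10000, 8), torch.randn(10000, 1)
model = Surrogate()
fit(model, model.parameters(), x_sim, y_sim)

# Stage 2: freeze the feature layers and retrain only the head on sparse experiments.
for p in model.features.parameters():
    p.requires_grad = False
x_exp, y_exp = torch.randn(30, 8), torch.randn(30, 1)
fit(model, model.head.parameters(), x_exp, y_exp, epochs=100)
```

The design choice is the one the quote highlights: the frozen layers retain what the simulation taught, while the retrained head corrects toward what the experiments actually show.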
Save the Date: WiDS on March 8
For the sixth consecutive year, Women in Data Science (WiDS) Livermore returns on Wednesday, March 8 to coincide with the annual worldwide conference hosted by Stanford University. Again this year the regional event will include a tie-in with the winter hackathon/datathon, as well as a “fireside chat” between Dona Crawford, board chair of the Livermore Lab Foundation, and Nisha Mulakken, LLNL biostatistician and co-director of the Data Science Summer Institute. Guest speakers will discuss their career paths and real-world data science applications. The agenda will also include a livestream of the main Stanford conference where LLNL bioinformatics group leader Marisa Torres has been invited to speak to the global audience.
The registration link, agenda, and other details about how to participate will be posted at data-science.llnl.gov/wids. Registration opens on February 1 and closes February 27.
WiDS Livermore is free, open to everyone, and co-sponsored by the DSI and the Lab’s Office of Strategic Diversity and Inclusion Programs. This will be a hybrid event hosted at the Livermore Valley Open Campus (LVOC) and via WebEx. LVOC attendees are required to bring their badge or valid ID. Videos, photos, and audio recordings from personal (non-LLNL) devices are not permitted at LVOC.
New Research at NeurIPS
Founded in 1987, the international Conference on Neural Information Processing Systems (NeurIPS) focuses on ML and computational neuroscience. LLNL researchers have had papers, posters, and other work accepted to NeurIPS in each of the past five years. Here are some highlights from the 2022 conference, with content linked where available:
- A Unified Framework for Deep Symbolic Regression – Mikel Landajuela, Chak Lee, Jiachen Yang, Ruben Glatt, Claudio Santiago, Ignacio Aravena, T. Nathan Mundhenk, former student intern Garrett Mulcahy, and Brenden Petersen
- Analyzing Data-Centric Properties for Graph Contrastive Learning – Mark Heimann, Jayaraman Thiagarajan, and University of Michigan collaborators
- AutoML-Based Almond Yield Prediction and Projection in California – Shiheng Duan and UC Davis collaborators
- Certified Data-Driven Physics-Informed Greedy Auto-Encoder Simulator – Youngsoo Choi, Jonathan Belof, and UC San Diego and University of Arizona collaborators (image at left: relative error distribution in a comparison of two different models)
- Compute-Efficient Deep Learning: Algorithmic Trends and Opportunities – Brian Bartoldson, Bhavya Kailkhura, and MosaicML collaborator
- Do Domain Generalization Methods Generalize Well? – Kailkhura and IBM Research AI and Tulane University collaborators
- Models Out of Line: A Fourier Lens on Distribution Shift Robustness – Bartoldson, James Diffenderfer, Kailkhura, Peer-Timo Bremer, and UC Berkeley collaborator
- Scalable Gaussian Process Hyperparameter Optimization via Coverage Regularization – Alec Dunton, Amanda Muyskens, Benjamin Priest, and University of Colorado, Boulder collaborator
- Single Model Uncertainty Estimation via Stochastic Data Centering – Thiagarajan, Rushil Anirudh, Bremer, and Arizona State University collaborator
ML Model Instantly Predicts Polymer Properties
Hundreds of millions of tons of polymer materials are produced globally for a vast and ever-growing application space, with new material demands in areas such as green chemistry, consumer packaging, adhesives, automotive components, fabrics, and solar cells. Discovering suitable polymer materials for these applications hinges on accurately predicting the properties a candidate material will have. Obtaining a quantitative understanding of the relationship between chemical structure and observable properties is particularly challenging for polymers because of their complex 3D chemical assemblies, which can consist of extremely long chains of thousands of atoms.
A team of LLNL materials and computer scientists tackled this challenge with a data-driven approach. By using datasets of polymer properties, the researchers developed a novel ML model that can predict 10 distinct polymer properties more accurately than was possible with previous ML models. In a recent Journal of Chemical Information and Modeling paper, they explain how the ML model is able to generate property predictions nearly immediately. “The secret to the success of the new ML model lies in a new polymer representation that compactly captures the polymers’ structure, in combination with powerful graph-based ML techniques that autonomously learn how to best describe the structure of the polymer,” says lead author and LLNL postdoctoral researcher Evan Antoniuk. Co-authors are Peggy Li, Bhavya Kailkhura, and Anna Hiszpanski.
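The paper’s exact representation and architecture are not reproduced here, but the sketch below illustrates the general graph-based approach: atoms become node feature vectors, bonds define an adjacency matrix, and a few rounds of neighbor averaging (message passing) followed by pooling yield a single embedding from which all 10 properties are predicted at once. All names, sizes, and toy data are assumptions for illustration.

```python
# Illustrative sketch of graph-based property prediction in plain PyTorch
# (the published model's representation and architecture differ).
import torch
import torch.nn as nn

class TinyGNN(nn.Module):
    def __init__(self, n_node_feats=16, hidden=32, n_props=10):
        super().__init__()
        self.msg1 = nn.Linear(n_node_feats, hidden)
        self.msg2 = nn.Linear(hidden, hidden)
        self.readout = nn.Linear(hidden, n_props)  # one output per property

    def forward(self, node_feats, adj):
        # Normalize adjacency (with self-loops) so each node averages its neighbors.
        a = adj + torch.eye(adj.size(0))
        a = a / a.sum(dim=1, keepdim=True)
        h = torch.relu(self.msg1(a @ node_feats))   # message-passing round 1
        h = torch.relu(self.msg2(a @ h))            # message-passing round 2
        graph_emb = h.mean(dim=0)                   # pool nodes into one graph vector
        return self.readout(graph_emb)              # predict all properties at once

# Toy "molecule": 5 atoms with random features, chain-shaped bond structure.
feats = torch.randn(5, 16)
adj = torch.zeros(5, 5)
for i in range(4):
    adj[i, i + 1] = adj[i + 1, i] = 1.0
print(TinyGNN()(feats, adj))  # tensor of 10 predicted property values
```

Because inference is a handful of matrix multiplications, predictions arrive nearly instantly once the model is trained, which is what makes this class of model attractive for screening candidate polymers.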
Digital Twins for Aerospace Manufacturing
A partnership involving LLNL aimed at developing digital twins for producing aerospace components is one of six new projects funded under the HPC for Energy Innovation (HPC4EI) initiative, the Department of Energy’s Office of Energy Efficiency and Renewable Energy announced on January 18. The collaboration between LLNL and specialty materials producer Allegheny Technologies Incorporated (ATI) will leverage advanced HPC software to create a digital twin of the near-net shape mill-products (NNS-MP) system—a strategy in which components are initially fabricated to be as close to the finished product as possible.
The project aims to reduce energy usage and CO2 emissions in aircraft manufacturing, where the complex shapes of components mean that about 95% of the metal used in the process is converted to scrap. For the collaboration with ATI, led at LLNL by co-principal investigators Aaron Fisher and Vic Castillo, researchers will simulate the multiphysics problem of multi-stand bar-shaped rolling and produce a machine learning model. The model will act as the digital object within a digital twin of the NNS-MP system, helping to optimize the process for manufacturing aerospace components.
50 Seminars and Counting!
Since launching its seminar series in April 2018, the DSI has hosted more than 50 speakers on a wide range of data science topics. Seminars have occurred approximately monthly except for a few months’ hiatus at the beginning of the pandemic. The series pivoted to a virtual format in the summer of 2020, and the first hybrid seminar was held in December 2022 (see recap below).
“We strive to host accessible seminars that are of interest to the broad data science community at LLNL. Our speakers this past year presented thought-provoking seminars on topics including fairness in risk assessment models, machine learning reproducibility, and model interpretability,” notes Sarah Mackay, the series’ technical coordinator.
Some data about the seminars through the end of 2022 (n = 51):
- 5 speakers were from LLNL; Lawrence Berkeley, Sandia, and Pacific Northwest national labs had 1 speaker each
- 6 speakers were PhD students
- 13 speakers were hosted in 2021, the most of any year
- 35 speakers were from universities (see pie chart at left), with University of California schools accounting for 15 of those
All of the speakers’ abstracts are available on the seminar series web page, and many recordings are posted to the YouTube playlist. To become or recommend a speaker for a future seminar, or to request a WebEx link for an upcoming seminar if you’re outside LLNL, contact datascience@llnl.gov with the subject line “Seminar series.”
DSSI Student Represents Lab at Diversity Conference
UC Santa Cruz student Lucy Zheng represented LLNL at the SACNAS National Diversity in STEM Conference, which was held in Puerto Rico in October. She presented a poster about her summer project, “Graph Embeddings for Drug Classification.” As a DSSI class of 2022 intern, Zheng worked under the direction of mentor Mary Silva (see more on Silva below).
The conference gave Zheng the opportunity to interact with the data science community beyond her university and the Lab. “I wanted to share my research project with other passionate researchers from all over the world and to network with well-established scientists,” she said. Zheng also attended workshops focused on supporting minority researchers and cultural empowerment. She added, “I liked how I was in such an inclusive and supportive environment while learning about and appreciating the culture of Puerto Rico.”
New Initiative Fosters LLNL and UC Partnership
A new joint initiative between LLNL’s Weapons and Complex Integration (WCI) Directorate and the University of California (UC) is aimed at developing next-generation academic leadership with strong and enduring national lab connections. The LLNL Early Career UC Faculty Initiative is accepting proposals from untenured, tenure-track faculty at any of the 10 UC campuses, soliciting innovative ideas in AI/ML. The winning proposal is anticipated to receive up to $1 million in funding over a five-year period, beginning in summer 2023. The funding will allow the faculty member to build a research group, including undergraduates, graduate students, and postdoctoral fellows.
“We’re really pleased to be launching this initiative as it will foster further collaboration and sustain long-term academic partnerships between LLNL researchers and the UC academic community,” said Kelli Humbird, LLNL lead for the initiative. “Not only is this a great opportunity for UC faculty to receive funding and Lab support for their research, but LLNL technical researchers are also able to get involved in the research for the winning project. This initiative is another win-win for our long-standing partnership with UC.”
For additional information about the initiative, including eligibility requirements, submission templates, proposal review criteria, timeline, and more, visit the LLNL Early Career UC Faculty Initiative website. Expressions of interest are due on February 10.
Virtual Seminar Explores Physics-Based ML
In the December seminar—the 51st overall and the first in a hybrid format—Aditi Krishnapriyan of UC Berkeley presented “Physics-Based Machine Learning with Differentiable Solvers.” Her talk covered the challenges of neural network approaches that solve differential equations by adding the underlying governing equations as a soft constraint to the loss function. She discussed how her team overcomes these challenges with a neural network architecture that incorporates differential-equation-constrained optimization. Finally, she showed that this architecture allows researchers to accurately and efficiently fit solutions to new problems, demonstrating it on fluid flow and transport phenomena problems.
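For readers unfamiliar with the soft-constraint baseline the talk examines, here is a minimal illustrative sketch in plain PyTorch (not Krishnapriyan’s code): a network u(x) is trained so that the loss penalizes both the residual of a governing equation and its boundary condition. The toy ODE du/dx + u = 0 with u(0) = 1, whose exact solution is exp(-x), stands in for the fluid flow and transport problems from the talk; her differentiable-solver approach instead builds the equation into the architecture rather than the loss.

```python
# Minimal sketch of the "soft constraint" (PINN-style) baseline: the governing
# equation enters only as a penalty term in the loss. Illustrative toy problem.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(),
                    nn.Linear(32, 32), nn.Tanh(),
                    nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2000):
    x = torch.rand(128, 1, requires_grad=True)   # collocation points in [0, 1]
    u = net(x)
    du_dx = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    residual = du_dx + u                          # governing equation as a residual
    x0 = torch.zeros(1, 1)
    # Soft constraints: equation residual plus boundary condition u(0) = 1.
    loss = (residual ** 2).mean() + (net(x0) - 1.0).pow(2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# Compare the learned solution at x = 0.5 against the exact value exp(-0.5).
print(net(torch.tensor([[0.5]])).item(), torch.exp(torch.tensor(-0.5)).item())
```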
Krishnapriyan is an assistant professor at UC Berkeley, where she is a member of Berkeley AI Research, the AI+Science group in Electrical Engineering and Computer Sciences, and the theory group in Chemical Engineering. Her research interests include physics-inspired ML methods; geometric deep learning; inverse problems; and the development of ML methods informed by physical sciences applications, including molecular dynamics, fluid mechanics, and climate science. A former DOE Computational Science Graduate Fellow, Krishnapriyan holds a PhD from Stanford University and was the Luis W. Alvarez Fellow in Computing Sciences at Lawrence Berkeley National Lab.
Meet an LLNL Data Scientist
With an M.S. in Statistics and Applied Math from UC Santa Cruz, Mary Silva knows firsthand how the Lab’s multidisciplinary approach to teamwork can elevate everyone involved. “Even without an extensive background in biology, I can contribute to vaccine development and target identification while utilizing domain experts’ knowledge to interpret models and results. These experiences have inspired me to take computational biology courses. A data scientist is forever a student,” she explains. A former DSSI intern, Silva joined LLNL in 2020 and today works on active learning and Bayesian spatial models for rapid design of COVID-19 antibodies, as well as on enhancing ML models through the multi-institutional Scalable Precision Medicine Open Knowledge Engine (SPOKE) project. As a mentor, she helps students shore up their weaknesses. For instance, she says, “If a student doesn’t have public speaking confidence, I can give them opportunities to present their work to an audience.” Silva also co-organizes the Lab’s WiDS event and datathon challenge. “I found my Lab internship by attending WiDS Livermore with my professor, and the ability to socialize and network with LLNL researchers kicked off my career,” she states.