Seminar Series

Our seminar series features talks from innovators from academia, industry, and national labs. These talks provide a forum for thought leaders to share their work, discuss trends, and stimulate collaboration. Recordings are posted to a YouTube playlist.

General AI Safety

Stuart Russell
Stuart Russell | UC Berkeley

LLNL’s Office of the Deputy Director for Science and Technology and the Data Science Institute co-hosted a colloquium by Dr. Stuart Russell from UC Berkeley on October 3, 2024.

The media are agog with claims that recent advances in AI put artificial general intelligence (AGI) within reach. Is this true? If so, is that a good thing? Alan Turing predicted that AGI would result in the machines taking control. I will argue that Turing was right to express concern but wrong to think that doom is inevitable. Instead, we need to develop a new kind of AI that is provably beneficial to humans. Unfortunately, we are heading in the opposite direction and we need to take steps to correct this.

Dr. Stuart Russell is the Michael H. Smith and Lotfi A. Zadeh Chair in Engineering and a professor in UC Berkeley’s Division of Computer Science. His book Artificial Intelligence: A Modern Approach (with Peter Norvig) is the standard text in AI, translated into 14 languages and used in 1,500 universities across 135 countries. His research covers a wide range of topics in artificial intelligence including machine learning, probabilistic reasoning, knowledge representation, planning, real-time decision making, multitarget tracking, computer vision, computational physiology, and philosophical foundations. In 2021, he was appointed by Her Majesty The Queen as an Officer of the Most Excellent Order of the British Empire.


Ontologies, Graph Deep Learning, & AI

Pawan Tripathi
Pawan Tripathi | Case Western Reserve University

The integration of ontologies, semantic reasoning, and graph-based deep learning and AI signifies a paradigm shift in studying high-dimensional multimodal problems, particularly within advanced manufacturing, synchrotron science, and photovoltaics. Ontologies provide structured frameworks for knowledge representation, while graphs model complex relationships and interactions, enhancing AI’s reasoning and predictive capabilities. In this talk, we explore ‘mds-onto’: a low-level ontology developed for multiple materials science domains such as laser powder bed fusion (LPBF), direct ink writing (DIW), and synchrotron X-ray experiments. Foundation models, which are domain-specific deep learning neural network models trained using self-supervised learning, can be fine-tuned for multiple specific learning tasks. Utilizing spatiotemporal graph neural networks as graph foundation models enables multimodal analysis, wherein preprocessing extracts features from diverse datasets and constructs spatiotemporal graphs with these feature vectors for foundation model training, yielding data-driven digital twins (ddDTs). These ddDTs are capable of answering task-specific questions such as classifying parts with or without pores and ensuring track continuation in LPBF, performing data imputation and regression for error estimation in DIW, and predicting photovoltaic (PV) power plant performance, enabling real-time monitoring, predictive maintenance, and optimization of manufacturing processes. Incorporating ontologies and knowledge graphs into ddDTs enhances their intelligence and decision-making capabilities, thereby improving process efficiency and product innovation. This underscores the importance of data-centric AI for ensuring accurate and robust AI models.
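To make the graph-construction step in the abstract concrete, here is a minimal, illustrative sketch (not the speaker’s actual pipeline) of turning multimodal feature vectors into a spatiotemporal graph: observations become nodes, and two nodes are connected when they are close in both space and time. The distance and time thresholds, and the example feature values, are assumptions for illustration.

```python
# Illustrative sketch: build a spatiotemporal graph from multimodal
# observations (nodes = feature vectors, edges = spatiotemporal proximity).
from itertools import combinations

def build_spatiotemporal_graph(observations, max_dist=1.0, max_dt=1.0):
    """observations: list of dicts with 'pos' (x, y), 't', and 'features'.
    Connect two observations when they are near in both space and time."""
    nodes = [obs["features"] for obs in observations]
    edges = []
    for i, j in combinations(range(len(observations)), 2):
        (xi, yi), ti = observations[i]["pos"], observations[i]["t"]
        (xj, yj), tj = observations[j]["pos"], observations[j]["t"]
        dist = ((xi - xj) ** 2 + (yi - yj) ** 2) ** 0.5
        if dist <= max_dist and abs(ti - tj) <= max_dt:
            edges.append((i, j))
    return nodes, edges

obs = [
    {"pos": (0.0, 0.0), "t": 0.0, "features": [0.2, 1.1]},  # e.g. melt-pool image stats
    {"pos": (0.5, 0.0), "t": 0.5, "features": [0.3, 0.9]},  # e.g. pyrometer reading
    {"pos": (5.0, 5.0), "t": 0.6, "features": [0.1, 1.4]},  # far away: no edge
]
nodes, edges = build_spatiotemporal_graph(obs)  # edges: [(0, 1)]
```

In practice the resulting node features and edge list would be handed to a graph neural network library rather than consumed directly.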

Dr. Pawan Tripathi is a research assistant professor in the Department of Materials Science and Engineering at CWRU in Ohio. He leads projects related to materials data science at the DOE/NNSA-funded Center of Excellence for Materials Data Science for Stockpile Stewardship. His expertise lies in interface structural simulations and developing automated analysis pipelines for large multimodal datasets from diverse experiments. Dr. Tripathi’s current research focuses extensively on data FAIRification, deep learning, image processing, semantic segmentation, and statistical modeling, particularly in the context of advanced manufacturing and laser powder bed fusion.


How Could We Design Aligned and Provably Safe AI?

Yoshua Bengio
Yoshua Bengio | Mila – Quebec AI Institute

Evaluating the risks of a learned AI system statically seems hopeless, because the number of contexts in which it could act is infinite or exponentially large, and static checks can only verify a finite and relatively small set of such contexts. However, if we had a run-time evaluation of risk, we could potentially prevent actions with an unacceptable level of risk. The probability of harm produced by an action or a plan in a given context and past data under the true explanation for how the world works is unknown. However, under reasonable hypotheses related to Occam's Razor, and given a non-parametric Bayesian prior (which thus includes the true explanation), it can be shown to be bounded by quantities that can in principle be numerically approximated or estimated by large neural networks, all based on a Bayesian view that captures epistemic uncertainty about what constitutes harm and how the world works. Capturing this uncertainty is essential: the AI could otherwise be confidently wrong about what is “good” and produce catastrophic existential risks, for example through instrumental goals or taking control of the reward mechanism (wrongly thinking that the rewards recorded in the computer are what it should maximize). The bound relies on a kind of paranoid theory, the one that has maximal probability given that it predicts harm and given the past data. The talk will discuss the research program based on these ideas and how amortized inference with large neural networks could be made to estimate the required quantities.

Dr. Yoshua Bengio's talk was co-sponsored by LLNL’s Data Science Institute and the Center for Advanced Signal and Image Sciences. A Turing Award winner, Bengio is recognized as one of the world’s leading AI experts, known for his pioneering work in deep learning. He is a full professor at the University of Montreal, and the founder and scientific director of the Mila – Quebec AI Institute. In 2022, Bengio became the most-cited computer scientist in the world. Watch a video of his seminar on YouTube.


GeoAI: Past, Present, and Future

Shawn Newsam
Shawn Newsam | UC Merced

This talk will focus on GeoAI, the application of artificial intelligence (AI) to geographic data. First, I will briefly describe some of my work in this area over the last 25 years, which has been driven largely by two themes. One theme is that spatial data is special in that space (and time) provides a rich context in which to analyze it. The challenge is how to incorporate spatial context into AI methods when adapting or developing them for geographic data—that is, to make them spatially explicit. A second theme is that location is a powerful key (in the database sense) that allows us to associate large amounts of different kinds of data. This can be especially useful, for example, for generating large collections of weakly labelled data when training machine learning models. In the second part of my talk, I’ll discuss near-term opportunities in GeoAI related to foundation models, particularly for multi-modal data. Finally, I’ll point out some anticipated challenges in GeoAI as generative models like OpenAI’s generative pre-trained transformer (GPT) become pervasive.
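The “location as a database key” theme can be illustrated with a tiny weak-labeling sketch: geotagged photos inherit labels from a land-cover map simply by joining on a shared grid cell. The grid resolution, file names, coordinates, and labels below are all invented for illustration; this is not the speaker’s code.

```python
# Illustrative sketch: use location as a join key to weakly label
# geotagged photos from a coarse land-cover grid.

def grid_key(lat, lon):
    """Snap a coordinate to a 0.1-degree grid cell so it can act as a key."""
    return (round(lat, 1), round(lon, 1))

# Hypothetical land-cover map, keyed by grid cell.
land_cover = {(37.3, -120.5): "urban",
              (37.4, -120.6): "cropland"}

# Hypothetical geotagged photos: (filename, latitude, longitude).
photos = [("img_001.jpg", 37.31, -120.48),
          ("img_002.jpg", 37.42, -120.61)]

# Each photo inherits a weak label from the land cover at its location.
weak_labels = {name: land_cover.get(grid_key(lat, lon), "unknown")
               for name, lat, lon in photos}
```

The labels are “weak” because the map is coarse and may disagree with what the photo actually shows, but the join scales to very large collections with no manual annotation.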

Dr. Shawn Newsam is a Professor of Computer Science and Engineering and Founding Faculty at the University of California, Merced. He has degrees from UC Berkeley, UC Davis, and UC Santa Barbara, and did a postdoc in the Sapphire Scientific Data Mining group in the Center for Applied Scientific Computing at Lawrence Livermore National Laboratory from 2003 to 2005. (So, UC Merced is his 5th UC institution!) Dr. Newsam is the recipient of a U.S. Department of Energy Early Career Scientist and Engineer Award, a U.S. National Science Foundation Faculty Early Career Development (CAREER) Award, and a U.S. Office of Science and Technology Policy Presidential Early Career Award for Scientists and Engineers (PECASE). He has held leadership positions in SIGSPATIAL, the ACM special interest group on the acquisition, management, and processing of spatially-related information, including serving as the general and program chair of its flagship conference and as the chair of the SIG. His research interests include computer vision and machine learning particularly applied to geographic data.


Using AI to Expand What Is Possible in Cardiovascular Medicine

Geoffrey Tison
Geoffrey H. Tison | UCSF

Machine learning and artificial intelligence (ML/AI) methods have shown great promise across various industries, including in medicine. Medicine has unique characteristics, however, that can make medical data more complex and in some respects harder to analyze than data outside of medicine. These issues include the complicated clinical workflow and the many human stakeholders and decision makers who all contribute at various time points to any given patient’s medical data record. In this talk, Dr. Tison will discuss the application of ML/AI approaches in medicine, focusing on his prior work spanning several cardiovascular diagnostic modalities including electrocardiograms, echocardiograms, photoplethysmography, and angiography.

Dr. Geoffrey H. Tison, MD, MPH, is an Associate Professor of Medicine and Cardiology, and faculty in the Bakar Computational Health Sciences Institute at the University of California, San Francisco (UCSF). He is a practicing cardiologist who also leads a computational research lab at UCSF (tison.ucsf.edu) focused on improving cardiovascular disease prediction and prevention by applying artificial intelligence and epidemiologic and statistical methods to large-scale medical data. He received the DP2 New Innovator Award from the National Institutes of Health Office of the Director, and his work has been supported by the National Institutes of Health and the Patient-Centered Outcomes Research Institute, among others.