Seminar Series

Our seminar series features talks by innovators from academia, industry, and national labs. These talks provide a forum for thought leaders to share their work, discuss trends, and stimulate collaboration. These monthly seminars are held onsite and virtually. Recordings are posted to a YouTube playlist.

Quantifying the Benefits of Immersion in Virtual Reality

Doug Bowman | Virginia Tech

Virtual reality (VR) technology has become mainstream, affordable, and powerful in recent years, but there is still skepticism about the usefulness of VR for serious applications. Although VR provides a compelling and unique experience, is there anything beyond this “wow factor,” or is it simply a flashy demo? How can VR be used effectively for real-world applications beyond gaming and entertainment? In this talk, I will review decades of research on the benefits of immersion in VR. Starting with an objective definition of immersion, we will discuss its hypothesized benefits and then survey numerous empirical studies that provide quantitative evidence for these hypotheses. Finally, case studies of successful real-world VR applications will demonstrate how these results can be applied in areas such as scientific visual data analysis.

Doug A. Bowman is the Frank J. Maher Professor of Computer Science and Director of the Center for Human-Computer Interaction at Virginia Tech. He is the principal investigator of the 3D Interaction Group, focusing on the topics of 3D user interfaces, VR/AR user experience, and the benefits of immersion in virtual environments. Dr. Bowman is one of the co-authors of 3D User Interfaces: Theory and Practice. He has served in many roles for the IEEE Virtual Reality Conference, including program chair, general chair, and steering committee chair. He also co-founded the IEEE Symposium on 3D User Interfaces (now part of IEEE VR). He received a CAREER award from the National Science Foundation for his work on 3D Interaction and has been named an ACM Distinguished Scientist. He received the Technical Achievement award from the IEEE Visualization and Graphics Technical Committee in 2014, and the Career Impact Award from IEEE ISMAR in 2021. His undergraduate degree in mathematics and computer science is from Emory University, and he received his M.S. and Ph.D. in computer science from Georgia Tech.


Tensor Factorization for Biomedical Representation Learning

Joyce Ho | Emory University

Biomedical datasets are often noisy, irregularly sampled, sparse, and high-dimensional. One key question is how to produce appropriate representations that are amenable to a variety of downstream tasks. Tensors, generalizations of matrices to multiway data, are natural structures for capturing higher-order interactions. Factorization of these tensors can provide a powerful, data-driven framework for learning representations useful across a variety of downstream prediction tasks. In this talk, I will introduce how tensors can succinctly capture patient representations from electronic health records to deal with missing and time-varying measurements while providing better predictive power than deep learning models. I will also discuss how tensor factorization can be used for learning node embeddings for both dynamic and heterogeneous graphs, and illustrate their use for automating systematic reviews.
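
As a concrete illustration of the technique (not code from the talk), the open-source tensorly library can compute a CP decomposition of a patients-by-diagnoses-by-time tensor, after which each patient’s row of factor loadings serves as a low-dimensional representation for downstream tasks. The tensor shape, rank, and synthetic data below are hypothetical placeholders.

    # Minimal sketch of CP tensor factorization for patient representations,
    # using the open-source tensorly library. Shapes and rank are hypothetical.
    import numpy as np
    import tensorly as tl
    from tensorly.decomposition import parafac

    # Synthetic stand-in for an EHR tensor: patients x diagnoses x time windows.
    rng = np.random.default_rng(0)
    ehr = tl.tensor(rng.poisson(0.3, size=(500, 40, 12)).astype(float))

    # Factor into rank-10 components; each component couples a patient
    # loading, a diagnosis loading, and a temporal loading.
    weights, factors = parafac(ehr, rank=10, n_iter_max=200, tol=1e-7)
    patient_repr, dx_repr, time_repr = factors

    # Each patient's 10-dimensional row of loadings is a learned
    # representation usable as features for downstream prediction.
    print(patient_repr.shape)  # (500, 10)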

Joyce Ho is an Associate Professor in the Computer Science Department at Emory University. She received her PhD in Electrical and Computer Engineering from the University of Texas at Austin, and an MEng and BS in Electrical Engineering and Computer Science from the Massachusetts Institute of Technology. Her research focuses on the development of novel machine learning algorithms to address problems in healthcare, such as identifying patient subgroups or phenotypes, integrating new streams of data, fusing different modalities of data (e.g., structured medical codes and unstructured text), and dealing with conflicting expert annotations. Her work has been supported by the National Science Foundation (including a CAREER award), the National Institutes of Health, the Robert Wood Johnson Foundation, and Johnson & Johnson.


Photorealistic Reconstruction from First Principles

Sara Fridovich-Keil | Stanford University

In computational imaging, inverse problems describe the general process of turning measurements into images using algorithms: images from sound waves in sonar, spin orientations in magnetic resonance imaging, or X-ray absorption in computed tomography. Today, the two dominant algorithmic approaches for solving inverse problems are compressed sensing and deep learning. Compressed sensing leverages convex optimization and comes with strong theoretical guarantees of correct reconstruction, but requires linear measurements and substantial processor memory, both of which limit its applicability to many imaging modalities. In contrast, deep learning methods leverage nonconvex optimization and neural networks, allowing them to use nonlinear measurements, data-driven priors, and limited memory. However, they can be unreliable, and it is difficult to inspect, analyze, and predict when they will produce correct reconstructions. In this talk, I focus on an inverse problem central to computer vision and graphics: given calibrated photographs of a scene, recover the optical density and view-dependent color of every point in the scene. For this problem, we take steps to bridge the best aspects of compressed sensing and deep learning: (i) combining an explicit, non-neural scene representation with optimization through a nonlinear forward model, (ii) reducing memory requirements through a compressed representation that retains aspects of interpretability and extends to dynamic scenes, and (iii) presenting a preliminary convergence analysis that suggests faithful reconstruction under our modeling assumptions.
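
To make the nonlinear forward model concrete: in this problem, reconstruction optimizes an explicit (non-neural) scene representation through differentiable volume rendering, which composites density and color samples along each camera ray into a pixel color. The sketch below shows that compositing step alone; the sample values and spacing are illustrative placeholders, not from the talk.

    # Minimal sketch of the volume-rendering forward model: composite
    # density/color samples along one ray into a pixel color. Values are
    # illustrative placeholders.
    import numpy as np

    def render_ray(sigma, rgb, delta):
        """sigma: (N,) optical densities along the ray; rgb: (N, 3) colors
        at those samples; delta: (N,) spacing between samples."""
        alpha = 1.0 - np.exp(-sigma * delta)         # opacity of each segment
        trans = np.cumprod(1.0 - alpha)              # light surviving past each segment
        trans = np.concatenate([[1.0], trans[:-1]])  # light reaching each segment
        weights = trans * alpha                      # each sample's contribution
        return weights @ rgb                         # expected pixel color

    # Reconstruction minimizes the difference between rendered and observed
    # pixels over all rays, differentiating through this nonlinear model.
    sigma = np.array([0.0, 0.5, 2.0, 0.1])
    rgb = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1]], dtype=float)
    delta = np.full(4, 0.25)
    print(render_ray(sigma, rgb, delta))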

Sara Fridovich-Keil is a postdoctoral scholar at Stanford University, where she works on foundations and applications of machine learning and signal processing in computational imaging. She is currently supported by an NSF Mathematical Sciences Postdoctoral Research Fellowship. Sara received her PhD in electrical engineering and computer sciences in 2023 from UC Berkeley and BSE in electrical engineering from Princeton University in 2018. During her time at UC Berkeley, Sara worked as a student researcher at Google Brain and collaborated with researchers at LLNL, the University of Southern California, and UC San Diego.


Leveraging Latent Representations for Predictive Physics-Based Modeling and Uncertainty Quantification

Katiana Kontolati | Data Scientist | Bayer R&D

Nonlinear partial differential equations (PDEs) provide a quantitative description for a vast array of phenomena in physics involving reaction, diffusion, convection, shocks, equilibrium, and more. Physical and engineering systems are commonly associated with stochastic and epistemic uncertainties, which can be characterized, quantified, and propagated through models using tools from uncertainty quantification (UQ). UQ becomes prohibitively expensive for complex PDEs; to tackle this limitation, surrogate models have been developed to approximate expensive numerical solvers while maintaining solution accuracy. Yet the performance of surrogates in terms of predictive accuracy, robustness, and generalizability deteriorates in cases of high-fidelity simulations, highly nonlinear PDE mappings, and high-dimensional uncertainty sources. This presentation showcases a set of approaches, based on dimension reduction principles, that leverage latent representations of high-dimensional data to improve the performance of surrogate models and enable UQ for complex PDE applications. The first part of the talk focuses on inverse problems and the development of a manifold-based approach for the probabilistic parameterization of nonlinear PDEs based on atomistic simulation data. The proposed approach is applied to modeling plastic deformation in a bulk metallic glass (amorphous solid) system based on available observations from molecular dynamics simulations. The second part of the talk focuses on the Latent Deep Operator Network (L-DeepONet), which trains neural operators on latent spaces and significantly improves predictive accuracy for time-dependent PDEs of varying degrees of complexity. The final component of this talk focuses on transfer learning (TL) for conditional shift in PDE regression using DeepONet. We propose a TL framework based on Hilbert space embeddings of conditional distributions and construct task-specific models by leveraging domain-invariant features and fine-tuning pre-trained neural operators. Our approach provides a powerful tool in complex physics and engineering applications, as it enables generalizability and mitigates the need for big data and large-scale computational resources.
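
A schematic reading of the latent-operator idea (not the authors’ implementation; all layer sizes, sensor counts, and activations below are made-up choices): first compress high-dimensional solution fields with an autoencoder, then train a DeepONet whose branch and trunk networks operate entirely in the latent space, and decode predictions back to the full field.

    # Schematic sketch of a latent DeepONet: an autoencoder compresses PDE
    # solution fields; a DeepONet (branch + trunk) predicts latent codes;
    # the decoder maps predictions back to the field. Sizes are made up.
    import torch
    import torch.nn as nn

    FIELD_DIM, LATENT_DIM, SENSORS, P = 64 * 64, 16, 100, 32

    def mlp(sizes):
        layers = []
        for a, b in zip(sizes[:-1], sizes[1:]):
            layers += [nn.Linear(a, b), nn.GELU()]
        return nn.Sequential(*layers[:-1])  # no activation on the output

    # Step 1: autoencoder over solution snapshots u(x, t).
    encoder = mlp([FIELD_DIM, 512, LATENT_DIM])
    decoder = mlp([LATENT_DIM, 512, FIELD_DIM])

    # Step 2: DeepONet on the latent space. The branch net encodes the
    # input function at sensor points; the trunk net encodes the query time.
    branch = mlp([SENSORS, 128, P * LATENT_DIM])
    trunk = mlp([1, 128, P])

    def latent_operator(u0_sensors, t):
        b = branch(u0_sensors).view(-1, LATENT_DIM, P)  # (batch, latent, p)
        tr = trunk(t).unsqueeze(-1)                     # (batch, p, 1)
        return torch.bmm(b, tr).squeeze(-1)             # latent prediction

    u0, t = torch.randn(8, SENSORS), torch.rand(8, 1)
    u_pred = decoder(latent_operator(u0, t))            # (8, FIELD_DIM)
    print(u_pred.shape)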

Katiana Kontolati is a data scientist at Bayer R&D with a focus on machine learning and genome modeling for designing high-performing crops. She received her PhD from the Department of Civil and Systems Engineering at Johns Hopkins University in 2023. Her doctoral research revolved around physics-informed machine learning, with a focus on high-dimensional surrogate modeling and uncertainty quantification in physics-based and engineering problems involving nonlinear partial differential equations under uncertainty. In parallel to her research activities, Kontolati contributes to the development of the open-source Python software UQpy for modeling uncertainty in physical and mathematical systems. Her work has been published in top journals including Acta Materialia and Nature Machine Intelligence, and she has received multiple awards and recognitions, including the Joseph Meyerhof Fellowship from Johns Hopkins, the Applied Machine Learning Research Fellowship from Los Alamos National Lab, and the Gerondelis Foundation Graduate Scholarship; she was also recently selected as a Rising Star in Computational and Data Sciences. A native of Athens, Greece, Kontolati received a BSc in Structural Engineering from the University of Thessaly and an MSc in Applied Mechanics from the National Technical University of Athens.


Adversarial Machine Learning: Categories, Concepts, and Current Landscape

Philip Kegelmeyer | Senior Scientist | Sandia National Laboratories

Machine learning depends critically on data: on the data that trains a machine learning model, and on the data that exercises it. This tight dependence means that machine learning can be subverted by an adversary who does nothing more than manipulate some of that data. Most conventional computer attacks are attacks on an implementation, and depend on corruption of the hardware, software, or network that runs some program. Machine learning, on the other hand, has algorithmic vulnerabilities, and can be subverted even when its hardware, software, and network environment are pristine. In some cases, these vulnerabilities can be triggered by simply querying the model in a fashion nearly indistinguishable from normal, non-adversarial use. This talk will provide an overview of the three main categories of these vulnerabilities, speaking to how an adversary might: subvert the original training data to manipulate the resulting model, change the test data in order to evade the correct outcome from the model, or cause the model to reveal details of its training data or its structure that it did not intend to reveal. The intent is to define and illustrate these attacks in just enough detail to usefully alarm anyone who might be building or using machine learning models. A secondary goal is to motivate thinking carefully about who your adversary might be. What distinguishes counter-adversarial machine learning from other aspects of machine learning (e.g., reliability, accuracy, or quantification of its uncertainties) is indeed the presence of an adversary. If you wish to do or use adversarial machine learning research, it is important to build a model of the adversary you are considering: their goals, capabilities, success measures, costs, observables, and so on. Much academic work in “adversarial machine learning” has had its utility greatly limited by the lack of a reasonable adversarial model. Still, a great deal of academic work is being published in adversarial machine learning, much of it entertainingly or worrisomely creative. So, the tail end of this talk will be a brief survey of recent work, focusing on edge cases that don’t smoothly fit into the subvert/evade/reveal categorization.
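
To make the “evade” category concrete, the sketch below implements the classic fast gradient sign method (FGSM), one well-known test-time evasion attack; the untrained placeholder model and the perturbation budget epsilon are illustrative choices, not drawn from the talk.

    # Minimal sketch of a test-time evasion attack (fast gradient sign
    # method): nudge an input in the direction that increases the model's
    # loss, keeping the change small enough to pass for normal data.
    # The model is an untrained placeholder; epsilon is illustrative.
    import torch
    import torch.nn as nn

    model = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))
    loss_fn = nn.CrossEntropyLoss()

    def fgsm(x, label, epsilon=0.1):
        x = x.clone().requires_grad_(True)
        loss = loss_fn(model(x), label)
        loss.backward()
        # One signed-gradient step per feature: a small change to the
        # query, often enough to flip the predicted class.
        return (x + epsilon * x.grad.sign()).detach()

    x = torch.rand(1, 784)   # a benign query
    y = torch.tensor([3])    # its true label
    x_adv = fgsm(x, y)
    print(model(x).argmax(1).item(), model(x_adv).argmax(1).item())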

Philip Kegelmeyer (E.E. PhD, Stanford) is a Senior Scientist at SNL Livermore. His current interests are machine learning and graph algorithms, especially as applied to ugly, obdurate, real-world data that is actively resistant to analysis. Since 2013 Dr. Kegelmeyer has been leading research efforts in “Counter Adversarial Data Analytics,” starting with adversarial machine learning. The core idea is to take a vulnerability assessment approach to quantitatively assessing, and perhaps countering, the result of an adversary knowing and adapting to exactly the specific data analysis method in use. Dr. Kegelmeyer has 30 years’ experience inventing, tinkering with, quantitatively improving, and now, subverting supervised machine learning algorithms (particularly ensemble methods), including investigations into how to accurately and statistically significantly compare such algorithms. His work has resulted in over 80 refereed publications, 2 patents, and commercial software licenses.


Using Data Science to Advance the Impact of Vascular Digital Twins in Medicine

Amanda Randles | Assistant Professor | Duke University

Recognition of the role hemodynamic forces play in the localization and development of disease has motivated large-scale efforts to enable patient-specific simulations. When combined with computational approaches that can extend the models to include physiologically accurate hematocrit levels in large regions of the circulatory system, these image-based models yield insight into the underlying mechanisms driving disease progression and inform surgical planning or the design of next-generation drug delivery systems. Building a detailed, realistic model of human blood flow, however, is a formidable mathematical and computational challenge. The models must incorporate the motion of the fluid, the intricate geometry of the blood vessels, continual pulse-driven changes in flow and pressure, and the behavior of suspended bodies such as red blood cells. Combining physics-based modeling with data science approaches is critical to addressing open questions in personalized medicine. In this talk, I will discuss how we are building and using high-resolution digital twins of patients’ vascular anatomy to inform the treatment of a range of human diseases. I will present the data challenges we run into and identify key areas where data science can play a role in advancing the work.
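
As a back-of-the-envelope illustration of why patient-specific geometry matters so much (a textbook idealization, not a method from the talk): in steady Poiseuille flow, volumetric flow through a vessel scales with the fourth power of its radius, so modest anatomical narrowing produces outsized hemodynamic changes. The numbers below are rough, physiologically plausible placeholders.

    # Back-of-the-envelope illustration: in idealized Poiseuille flow,
    # Q = pi * dP * r^4 / (8 * mu * L), so flow scales with radius^4.
    # Values are rough placeholders in SI units.
    import math

    def poiseuille_flow(delta_p, radius, length, viscosity):
        """Volumetric flow rate through a cylindrical vessel segment."""
        return math.pi * delta_p * radius**4 / (8 * viscosity * length)

    mu = 3.5e-3   # blood viscosity, Pa*s (approximate)
    dp = 100.0    # pressure drop across the segment, Pa
    L = 0.05      # segment length, m

    q_healthy = poiseuille_flow(dp, 2.0e-3, L, mu)   # 2 mm lumen radius
    q_stenosed = poiseuille_flow(dp, 1.5e-3, L, mu)  # 25% narrower lumen
    print(q_stenosed / q_healthy)                    # ~0.32: a ~68% flow loss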

Dr. Amanda Randles is the Alfred Winborne Mordecai and Victoria Stover Mordecai Assistant Professor of Biomedical Sciences and Biomedical Engineering at Duke University. Focusing on the intersection of high-performance computing, machine learning, and personalized modeling, her group is developing new methods to aid in the diagnosis and treatment of diseases ranging from cardiovascular disease to cancer. Among other recognitions, she has received the NIH Pioneer Award, the NSF CAREER Award, and the ACM Grace Hopper Award. She was named to the World Economic Forum Young Scientist List and the MIT Technology Review World’s Top 35 Innovators under the Age of 35 list, and is a Fellow of the National Academy of Inventors. Randles received her PhD in Applied Physics from Harvard University as a DOE Computational Science Graduate Fellow and NSF Fellow.


Calling the Shot: How AI Predicted Fusion Ignition Before It Happened

Kelli Humbird | Design Physicist | LLNL
Luc Peterson | Associate Program Leader | LLNL

At 1:03 a.m. on December 5, 2022, 192 laser beams at the National Ignition Facility focused 2.05 megajoules of energy onto a peppercorn-sized capsule of frozen hydrogen fuel. In less time than it takes light to travel 10 feet, the laser crushed the capsule to smaller than the width of a human hair, vaulting the fuel to temperatures and densities exceeding those found in the sun. Under these extreme conditions, the fuel ignited and produced 3.15 megajoules of energy, a gain of roughly 1.5 over the laser energy delivered, making it the first experiment ever to achieve net energy gain from nuclear fusion. Nuclear fusion is the universe’s ultimate power source. It drives our sun and all the stars in the night sky. Harnessing it would mean a future of limitless carbon-free, safe, clean energy. After several decades of research, fusion breakeven at NIF brings humanity one step closer to that dream. Yet the shot that finally ushered in the Fusion Age was not actually that surprising. A few hours before the experiment, our physics team used an artificial intelligence model to predict its outcome. Our model, which blends supercomputer simulations with experimental data, indicated that ignition was the most likely outcome for this shot. As such, hopes were high that something big was about to occur. In this talk, we discuss the breakthrough experiment, nuclear fusion, and how we used machine learning to call the shot heard around the world.
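
As an illustrative sketch of how simulation and experiment can be blended (a generic transfer-learning recipe, not the team’s actual model): pretrain a yield surrogate on plentiful simulation runs, then fine-tune only its output layer on the few real shots available. All inputs, layer sizes, and data below are synthetic placeholders.

    # Illustrative sketch (not the actual NIF model): pretrain a yield
    # surrogate on many simulations, then fine-tune only its head on a
    # handful of real experiments. All data here are synthetic.
    import torch
    import torch.nn as nn

    surrogate = nn.Sequential(
        nn.Linear(9, 64), nn.Tanh(),   # 9 hypothetical design inputs
        nn.Linear(64, 64), nn.Tanh(),
        nn.Linear(64, 1),              # predicted fusion yield
    )

    def train(model, params, x, y, steps=500):
        opt = torch.optim.Adam(params, lr=1e-3)
        for _ in range(steps):
            opt.zero_grad()
            loss = nn.functional.mse_loss(model(x), y)
            loss.backward()
            opt.step()

    # Stage 1: pretrain on abundant simulation runs.
    x_sim, y_sim = torch.randn(10000, 9), torch.randn(10000, 1)
    train(surrogate, surrogate.parameters(), x_sim, y_sim)

    # Stage 2: freeze the simulation-learned features; recalibrate only
    # the final layer on the few dozen real shots.
    for layer in surrogate[:4]:
        layer.requires_grad_(False)
    x_exp, y_exp = torch.randn(30, 9), torch.randn(30, 1)
    train(surrogate, surrogate[4].parameters(), x_exp, y_exp, steps=200)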

Dr. Kelli Humbird’s work focuses on machine learning (ML) discovery and design for inertial confinement fusion and integrated hohlraum design. During her time at the Lab, she has worked in stockpile certification, technical nuclear forensics, ML accelerators for multiphysics codes, and ML analysis for the spread of COVID-19 during the first year of the pandemic. The common thread throughout much of her work is the application of ML to scientific problems with sparse data.

Dr. Jayson “Luc” Peterson is the Associate Program Leader for Data Science within LLNL’s Space Science and Security Program, where he is responsible for the leadership and development of a broad portfolio of projects at the intersection of data science and outer space. He also leads the ICECap and Driving Design with Cognitive Simulation projects, which aim to bring ML-enhanced digital design to exascale supercomputers.