Seminar Series

Our seminar series features talks from innovators in academia, industry, and the Lab. These talks provide a forum for thought leaders to share their work, discuss trends, and stimulate collaboration. These monthly seminars are held onsite and virtually, and recordings are posted to a YouTube playlist.

A Universal Law of Robustness via Isoperimetry

Mark Sellke | Ph.D. Student | Stanford University

Classically, data interpolation with a parametrized model class is possible as long as the number of parameters is larger than the number of equations to be satisfied. A puzzling phenomenon in deep learning is that models are trained with many more parameters than what this classical theory would suggest. We propose a theoretical explanation for this phenomenon. We prove that for a broad class of data distributions and model classes, overparameterization is necessary if one wants to interpolate the data smoothly. Namely, we show that smooth interpolation requires d times more parameters than mere interpolation, where d is the ambient data dimension. We prove this universal law of robustness for any smoothly parametrized function class with polynomial-size weights, and any covariate distribution verifying isoperimetry. In the case of two-layer neural networks and Gaussian covariates, this law was conjectured in prior work by Bubeck, Li, and Nagaraj. We also give an interpretation of our result as an improved generalization bound for model classes consisting of smooth functions.
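
The headline bound can be paraphrased as follows (an informal restatement of the result described above; the notation n for the number of samples and p for the number of parameters is assumed here, not taken from the abstract):

```latex
% Universal law of robustness (informal paraphrase): with high probability
% over n noisy samples in ambient dimension d, any function f from a
% smoothly parametrized class with p polynomially bounded parameters that
% fits the data below the noise level must satisfy
\[
  \mathrm{Lip}(f) \;\gtrsim\; \sqrt{\frac{n\,d}{p}} .
\]
% Hence an O(1) Lipschitz constant (smooth interpolation) forces
% p \gtrsim n d parameters, i.e. d times more than the p \approx n
% needed for mere interpolation.
```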

Mark Sellke is a PhD student in mathematics at Stanford advised by Andrea Montanari and Sébastien Bubeck. He graduated from MIT in 2017 and received a Master of Advanced Study with distinction from the University of Cambridge in 2018, both in mathematics. Mark received the best paper and best student paper awards at SODA 2020, and the outstanding paper award at NeurIPS 2021. He has broad research interests in probability, statistics, optimization, and machine learning. Mark's research is supported by a National Science Foundation graduate research fellowship and the William R. and Sara Hart Kimball endowed Stanford Graduate Fellowship.


Neural Representations for Volume Visualization

Joshua Levine | Associate Professor | University of Arizona

In this talk, I will describe two projects, both joint work with collaborators at Vanderbilt University.  The first project studies how generative neural models can be used to model the process of volume rendering scalar fields.  We construct a generative adversarial network that learns the mapping from volume rendering parameters, such as viewpoint and transfer function, to the rendered image.  In doing so, we can analyze the volume itself and provide new mechanisms for guiding the user in transfer function editing and exploring the space of possible images that can be volume rendered.  Both our training process and applications are available on the web at https://github.com/matthewberger/tfgan.
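
To make the setup concrete, here is a minimal sketch of such a conditional generator in PyTorch. It is illustrative only, not the authors' architecture: the layer sizes, the 5-dimensional viewpoint encoding, and the 256-sample transfer function are assumptions, and the tfgan repository above contains the actual model.

```python
import torch
import torch.nn as nn

class RenderingGenerator(nn.Module):
    """Toy conditional generator: rendering parameters -> image.

    Illustrative sketch; the real tfgan model uses convolutional
    decoders and is trained adversarially against a discriminator.
    """
    def __init__(self, view_dim=5, tf_dim=256, img_size=64):
        super().__init__()
        self.img_size = img_size
        # Encode viewpoint (e.g. azimuth/elevation/zoom) and the sampled
        # transfer function into a shared latent code.
        self.encoder = nn.Sequential(
            nn.Linear(view_dim + tf_dim, 512),
            nn.ReLU(),
            nn.Linear(512, 512),
            nn.ReLU(),
        )
        # Decode the latent code to a low-resolution RGB image.
        self.decoder = nn.Linear(512, 3 * img_size * img_size)

    def forward(self, view_params, transfer_fn):
        code = self.encoder(torch.cat([view_params, transfer_fn], dim=-1))
        img = torch.sigmoid(self.decoder(code))
        return img.view(-1, 3, self.img_size, self.img_size)

# Example: "render" a batch of 4 hypothetical parameter settings.
gen = RenderingGenerator()
views = torch.rand(4, 5)     # hypothetical viewpoint parameters
tfs = torch.rand(4, 256)     # transfer function sampled at 256 points
images = gen(views, tfs)     # -> (4, 3, 64, 64)
```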

In the second part of my talk, I will explore a recent neural modeling approach for building compressive representations of volume data.  This approach represents volumetric scalar fields as learned implicit functions wherein a neural network maps a point in the domain to an output scalar value. By setting the number of weights of the neural network to be smaller than the input size, we achieve compressive function approximation. Combined with carefully quantizing network weights, we show that this approach yields highly compact representations that outperform state-of-the-art volume compression approaches. We study the impact of network design choices on compression performance, highlighting how conceptually simple network architectures are beneficial for a broad range of volumes.  Our compression approach is hosted at https://github.com/matthewberger/neurcomp.
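
The idea can be illustrated with a minimal coordinate-network sketch (again a toy in PyTorch, not the neurcomp code; the layer widths and the synthetic target field are placeholders):

```python
import torch
import torch.nn as nn

class ImplicitVolume(nn.Module):
    """Toy coordinate network: (x, y, z) -> scalar field value.

    Compression comes from using far fewer weights than voxels;
    quantizing the trained weights shrinks the representation further.
    """
    def __init__(self, hidden=64, layers=4):
        super().__init__()
        blocks = [nn.Linear(3, hidden), nn.ReLU()]
        for _ in range(layers - 1):
            blocks += [nn.Linear(hidden, hidden), nn.ReLU()]
        blocks.append(nn.Linear(hidden, 1))
        self.net = nn.Sequential(*blocks)

    def forward(self, xyz):
        return self.net(xyz)

model = ImplicitVolume()

# Fit to a synthetic 16^3 volume sampled on a regular grid.
grid = torch.stack(torch.meshgrid(
    *[torch.linspace(-1, 1, 16)] * 3, indexing="ij"), dim=-1).reshape(-1, 3)
target = torch.sin(3 * grid).prod(dim=-1, keepdim=True)  # stand-in scalar field

opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(grid), target)
    loss.backward()
    opt.step()

# The "compressed" volume is just the weight vector; decoding means
# evaluating the MLP at any query point.
n_weights = sum(p.numel() for p in model.parameters())
print(f"{n_weights} weights vs {grid.shape[0]} voxels")
```

Quantizing the trained weights, as the abstract describes, would then reduce the stored representation further.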

Joshua A. Levine is an associate professor in the Department of Computer Science at University of Arizona. Prior to starting at Arizona in 2016, he was an assistant professor at Clemson University from 2012 to 2016, and before that a postdoctoral research associate at the University of Utah’s SCI Institute from 2009 to 2012. He is a recipient of the 2018 DOE Early Career award. He received his PhD in Computer Science from The Ohio State University in 2009 after completing BS degrees in Computer Engineering and Mathematics in 2003 and an MS in Computer Science in 2004 from Case Western Reserve University. His research interests include visualization, geometric modeling, topological analysis, mesh generation, vector fields, performance analysis, and computer graphics.


Data-Driven Mechanistic Models – Design Inference

Babak Shahbaba | Chancellor's Fellow and Professor of Statistics | UC Irvine

Mechanistic models provide a flexible framework for modeling heterogeneous and dynamic systems in ways that enable prediction and control. In this talk, we focus on the application of mechanistic models for investigating dynamic biological systems. We show that by embedding these models in a hierarchical Bayesian framework, we can account for the underlying structure and stochasticity of the systems. Further, we discuss how to use Bayesian utility theory to find the optimal experimental design for studying biological systems. While our proposed approach is quite flexible and powerful, its computational complexity could hinder its feasibility. To alleviate this issue, we propose a class of scalable Bayesian inference methods that utilize deep learning algorithms for fast approximation of the likelihood function and its gradient.
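
As a toy illustration of that last step (a sketch under assumed notation, not the speaker's actual method), a small network can be trained to emulate an expensive log-likelihood so that both its value and its gradient become cheap to evaluate inside a gradient-based sampler:

```python
import torch
import torch.nn as nn

# Suppose log_lik_expensive(theta) is a costly mechanistic-model likelihood
# (here replaced by a cheap stand-in so the sketch runs end to end).
def log_lik_expensive(theta):
    return -0.5 * ((theta - 1.0) ** 2).sum(dim=-1, keepdim=True)

# Train a surrogate network on (theta, log-likelihood) pairs.
surrogate = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 1))
opt = torch.optim.Adam(surrogate.parameters(), lr=1e-3)
for step in range(500):
    theta = torch.randn(128, 2) * 3          # cover the relevant parameter region
    opt.zero_grad()
    loss = nn.functional.mse_loss(surrogate(theta), log_lik_expensive(theta))
    loss.backward()
    opt.step()

# The surrogate's autograd gradient can now stand in for the true gradient
# inside gradient-based samplers such as HMC.
theta = torch.zeros(1, 2, requires_grad=True)
approx_loglik = surrogate(theta).sum()
approx_loglik.backward()
print(theta.grad)   # cheap gradient approximation
```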

Babak Shahbaba is Chancellor's Fellow and Professor of Statistics with a joint appointment in Computer Science at UC Irvine. His independent research focuses on Bayesian methods and their applications in data-intensive biomedical problems. His research experience spans a broad spectrum of areas including statistical methodologies (Bayesian nonparametrics and hierarchical Bayesian models), computational techniques (efficient sampling algorithms), and a wide range of applied and collaborative projects (statistical methods in neuroscience, genomics, and health sciences). Currently, Shahbaba is the PI on three grants: 1) NSF-HDR-DSC: Data Science Training and Practices: Preparing a Diverse Workforce via Academic and Industrial Partnership, 2) NSF-MODULUS: Data-Driven Mechanistic Modeling of Hierarchical Tissues, and 3) NIH-NIMH-R01: Scalable Bayesian Stochastic Process Models for Neural Data Analysis. Before joining UC Irvine, he was a Postdoctoral Fellow at Stanford University under the supervision of Rob Tibshirani and Sylvia Plevritis. Shahbaba received his PhD at the University of Toronto under Radford Neal’s supervision.