2021 Volume 7

Published on May 4, 2021

New Research in Machine Learning Robustness

neural network diagram showing weights of pruned nodes in red
In LLNL's new deep learning paradigm, random weights are assigned scores, which are then used to identify the weights in each layer that are least important to the success of the binary subnetwork.

LLNL postdoctoral researcher James Diffenderfer and computer scientist Bhavya Kailkhura are co-authors on a paper that offers a novel and unconventional way to train deep neural networks (DNNs). The LLNL team shows, both empirically and theoretically, that it is possible to learn highly accurate NNs simply by compressing (i.e., pruning and binarizing) randomized NNs without ever updating the weights. This is in sharp contrast to the prevailing weight-training paradigm, in which the values of the weights are learned iteratively by stochastic gradient descent. In this work, Diffenderfer and Kailkhura proposed (and proved) what they call the multi-prize lottery ticket hypothesis:

Hidden within a sufficiently large, randomly weighted NN lie several subnetworks (winning tickets) that (a) have comparable accuracy to a dense target network with learned weights (prize 1); (b) do not require any training to achieve prize 1 (prize 2); and (c) are robust to extreme forms of quantization (i.e., binary weights and/or activations) (prize 3).

In several state-of-the-art DNNs, the LLNL team found subnetworks that are significantly smaller (only 5–10% of the original parameter count) and that retain only the signs of weights and/or activations. These subnetworks require no training yet match (or exceed) the performance of weight-trained, dense, full-precision DNNs. The paper, “Multi-Prize Lottery Ticket Hypothesis: Finding Accurate Binary Neural Networks by Pruning a Randomly Weighted Network,” demonstrated that this counterintuitive learn-by-compressing approach yields subnetworks that are approximately 32x smaller and up to 58x faster than traditional DNNs without sacrificing accuracy.

“This is a completely new way to train neural networks that is computationally inexpensive and reduces the amount of necessary compute storage,” said Diffenderfer. Kailkhura added, “This work unlocks a range of potential areas where DNNs can now be applied, such as edge computing applications in environment/ocean/urban monitoring with cheap sensors.” The research has been accepted to the upcoming International Conference on Learning Representations 2021.
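The core idea above—fixing random weights, scoring them for importance, and keeping only the signs of the highest-scoring few percent—can be illustrated with a minimal NumPy sketch. This is a simplified illustration under stated assumptions, not the authors' implementation: it treats a single layer, and it assumes the per-weight scores have already been obtained (in the actual method, the scores, not the weights, are what gets optimized).

```python
import numpy as np

rng = np.random.default_rng(0)

def binary_subnetwork(weights, scores, keep_frac=0.10):
    """Prune a randomly weighted layer to a binary subnetwork.

    weights: fixed random weights (never trained)
    scores:  per-weight importance scores (assumed given here;
             these are what would be learned in practice)
    keep_frac: fraction of weights to keep (e.g., 5-10%)
    """
    k = int(keep_frac * weights.size)
    # Threshold at the k-th largest score in this layer
    threshold = np.partition(scores.ravel(), -k)[-k]
    mask = scores >= threshold
    # Binarize: surviving weights keep only their signs; the rest are pruned
    return np.sign(weights) * mask

# Hypothetical example: one 64x64 layer of random weights and random scores
W = rng.standard_normal((64, 64))
S = rng.random((64, 64))
B = binary_subnetwork(W, S, keep_frac=0.10)
# B now has 409 nonzero entries (10% of 4096), each either -1 or +1
```

The resulting layer stores only a sparse sign pattern, which is where the storage and speed gains described above come from.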

WiDS Livermore Goes Virtual

WiDS logo in green

Coinciding with International Women’s Day on March 8, LLNL’s fourth annual Women in Data Science (WiDS) regional event brought women together to discuss successes, opportunities, and challenges of being female in a mostly male field. The Lab’s first-ever virtual WiDS gathering attracted dozens of data scientists from inside and outside the Lab. “If some of you have experienced being literally the only woman at the table, or the only woman on a panel, I’m sure you can agree it’s quite special to have an event centered on women in data science,” WiDS Livermore ambassador Marisa Torres told attendees. “We’re here to celebrate a lot of accomplishments.”

Speakers included statistician Ana Kupresanin, who discussed her path from a high school student in her home country of Croatia to LLNL, where she focuses on the uncertainty quantification of predictive models for the weapons program. Bioinformaticist Nisha Mulakken, who is studying the impact of microbiomes on COVID-19 severity among other pursuits, shared tips for mentoring young women and urged viewers to seek guidance through DSI and the Lab’s Data Science Summer Institute, which she co-directs.

screen shot of 5x5 video chat panels
WiDS Livermore ambassador Marisa Torres (top row, third from left) welcomed attendees to the virtual event.

The career panel was moderated by data scientist Amar Saini and featured Brenda Ng, Mary Silva, Juanita Ordóñez, and Hiranmayi Ranganathan. The panelists discussed balancing work and family obligations, imposter syndrome, preventing burnout, and other challenges unique to women in data science. The event also featured a speed mentoring session moderated by computer scientist Stephanie Brink, a collaborative platform for chatting, virtual breakout rooms, and a trivia game. Throughout the day, attendees also tuned into Stanford University’s worldwide livestream. Livermore’s was one of more than 200 WiDS events held in more than 60 countries in conjunction with the main conference. 

Lab Offers Forum on Machine Learning for Industry

ML4I logo with image of neural network on black background

LLNL is looking for participants and attendees from industry, research institutions and academia for the first-ever Machine Learning for Industry Forum (ML4I), a three-day virtual event starting Aug. 10. The event is sponsored by LLNL’s High Performance Computing Innovation Center and the Data Science Institute. The deadline for submitting presentations or industry use cases is June 30. The deadline for attendee registration is July 29.

Multimedia Highlights

  • University of Washington seminar. Physicist Brian Spears gave a virtual talk for UW’s seminar series. In “Cognitive Simulations: Combining Simulation and Experiment with Artificial Intelligence,” he described advances in machine learning architectures and methods for inertial confinement fusion science and other applications.
  • George Mason University colloquium. Computational scientist Youngsoo Choi spoke about “Where Are We with Data-Driven Surrogate Modeling for Various Physical Simulations?” at a virtual colloquium for GMU’s Center for Mathematics and Artificial Intelligence.
  • The Data Standard podcast. Data Science Summer Institute co-director Nisha Mulakken appeared on The Data Standard podcast to discuss her bioinformatics work at LLNL. Audio and video will be available in the coming weeks.
  • DDPS virtual seminar series. The Data-Driven Physical Simulations (DDPS) reading group has expanded to welcome external speakers. Led by Youngsoo Choi, the group’s virtual seminars are now available as a video playlist on the Livermore Lab Events YouTube channel.

Virtual Seminar

Robert Gramacy in front of a chalkboard

The DSI invited Dr. Robert Gramacy of Virginia Tech to speak at the March virtual seminar. Gramacy (pictured) recently published a book on surrogate modeling of computer experiments. His talk, “Replication or Exploration? Sequential Design for Stochastic Simulation Experiments,” showed that replication can be beneficial from both design and computational perspectives.

Premier Computer Vision Conference Papers

The international Computer Vision and Pattern Recognition (CVPR) conference, coming up in June, is the ML/AI community’s premier annual computer vision event. Two LLNL co-authored papers have been accepted this year: