- The Widest, Deepest Images of a Dynamic Universe
- Quest for Source of Black Hole Dark Matter
- Probabilistic Cosmological Mass Mapping from Weak Lensing Shear
- Blind Detection of Ultra-Faint Streaks with a Maximum Likelihood Method
- Synthesis of Disparate Optical Imaging Data for Space Domain Awareness
- Hierarchical Probabilistic Inference of Cosmic Shear
- LSST website
- Understanding the Universe with Applied Statistics (video)
“We want to close gaps in our understanding of how the universe works. Astrophysics holds some of the most pressing unsolved questions.”
– Michael Schneider
Data analysis is crucial when it comes to probing big questions about the physical universe. For instance, what is the nature of dark matter? Might dark energy point us toward a new understanding of gravity on cosmological scales? Can gravity and quantum physics be reconciled? “We’re digging through gigantic data samples for quantitative meaning,” says LLNL physicist Michael Schneider.
Astrophysics is a major growth area in LLNL’s advancement of basic science for national and global security needs. Scientists strive to understand fundamental characteristics of the Solar System and the universe to push the frontiers of our physical understanding of the world. Many of these research projects leverage data science techniques.
One Laboratory Directed Research and Development (LDRD) project explores dark energy by crunching measurements from ground- and space-based telescopic surveys. This is no small undertaking, Schneider explains, because “it’s complicated to measure the average of empty space in the universe. Everything else gets in the way.” For example, weak gravitational lensing, which subtly distorts the observed shapes of light sources such as galaxies (an effect known as cosmic shear), must be carefully disentangled from the observations.
The project’s data analysis requires image processing and inference algorithms, such as hierarchical Bayesian forward models, to account for such systematic errors. The LDRD team also simulates noise from atmospheric turbulence and diffusion effects in telescopes’ cameras, using data-processing software to correct errors. This approach to data calibration changes the way astronomical images are typically analyzed, thus improving scientists’ ability to distinguish dark energy from modified gravity and other characteristics of the universe.
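To give a flavor of the approach, the toy sketch below shows hierarchical Bayesian inference of a single shear value from simulated galaxy shapes. It is not the team's pipeline: the shear value, scatter parameters, and galaxy counts are invented for illustration, and real analyses infer spatially varying shear fields from actual images. In this simplified hierarchy, each galaxy's latent intrinsic ellipticity is drawn from a population distribution, the observed shape adds the shear plus measurement noise, and the latent shapes are marginalized analytically before computing the shear posterior on a grid.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Simulate observations (hypothetical stand-in for survey data) ---
true_shear = 0.02     # assumed constant shear signal
sigma_int = 0.25      # population-level intrinsic ellipticity scatter
sigma_meas = 0.05     # per-galaxy noise (atmosphere, camera)
n_gal = 5000

intrinsic = rng.normal(0.0, sigma_int, n_gal)   # latent per-galaxy shapes
observed = intrinsic + true_shear + rng.normal(0.0, sigma_meas, n_gal)

# --- Hierarchical inference: marginalize the latent intrinsic shapes ---
# With Gaussian layers, each observed ellipticity is Gaussian around the
# shear with combined variance, so the marginal likelihood is analytic.
sigma_tot2 = sigma_int**2 + sigma_meas**2
shear_grid = np.linspace(-0.1, 0.1, 2001)

# Log-likelihood on a grid, flat prior on the shear
loglike = np.array([-0.5 * np.sum((observed - g) ** 2) / sigma_tot2
                    for g in shear_grid])
post = np.exp(loglike - loglike.max())
post /= np.trapz(post, shear_grid)

mean = np.trapz(shear_grid * post, shear_grid)
std = np.sqrt(np.trapz((shear_grid - mean) ** 2 * post, shear_grid))
print(f"posterior shear: {mean:.4f} +/- {std:.4f}")
```

The key point of the hierarchy is that the intrinsic-shape scatter, an astrophysical nuisance, is modeled explicitly rather than ignored, so the reported uncertainty on the shear honestly reflects it.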
LLNL’s work in this area will continue as the Large Synoptic Survey Telescope (LSST) comes online in Chile. Expected to capture 5.5 million wide-field images of the sky over a 10-year period—about 60 petabytes of data—the LSST promises to be a primary dark energy instrument in the coming years.
Another LDRD effort focuses on black holes to better describe dark matter. (Both dark energy and dark matter are invisible, as far as we know. The former is a repulsive force while the latter is attractive.) LLNL scientists are surveying the Magellanic Clouds and Milky Way for evidence of intermediate-mass black holes, which are 10 to 10,000 times the mass of the sun. Dark matter may be made up of black holes in this mass range, so finding their gravitational microlensing signatures is key.
The team has developed a 10-dimensional maximum likelihood detection method to identify parallax microlensing events from nearly 12 terabytes of imaging data—more than 500 million stars—collected from recent surveys at the Cerro Tololo Inter-American Observatory (also located in Chile) as well as from an LLNL-led study in the 1990s. Principal investigator Will Dawson explains, “We can detect microlensing signatures by measuring the increased light from multiple images as a function of time.” By combining observations and simulations of these light curves, the team aims to make the first direct measurement of black hole mass spectra in the Milky Way.
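As a simplified illustration of light-curve fitting (not the team's 10-dimensional parallax method), the sketch below fits the standard Paczyński point-lens magnification model to a simulated light curve by maximum likelihood. All event parameters and noise levels here are invented; with Gaussian noise, maximizing the likelihood reduces to minimizing chi-square over the event time, timescale, and impact parameter.

```python
import numpy as np
from scipy.optimize import minimize

def magnification(t, t0, tE, u0):
    """Paczynski point-source, point-lens magnification versus time."""
    u = np.sqrt(u0**2 + ((t - t0) / tE) ** 2)
    return (u**2 + 2) / (u * np.sqrt(u**2 + 4))

rng = np.random.default_rng(1)

# --- Simulate a noisy light curve (hypothetical event, flux units) ---
t = np.linspace(0, 200, 400)                     # observation times, days
true = dict(t0=100.0, tE=25.0, u0=0.3)           # assumed event parameters
f_base, f_err = 1.0, 0.02
flux = f_base * magnification(t, **true) + rng.normal(0, f_err, t.size)

# --- Maximum likelihood fit: Gaussian noise -> minimize chi-square ---
def neg_loglike(p):
    t0, tE, u0 = p
    model = f_base * magnification(t, t0, tE, abs(u0))
    return 0.5 * np.sum(((flux - model) / f_err) ** 2)

fit = minimize(neg_loglike, x0=[90.0, 20.0, 0.5], method="Nelder-Mead")
t0_hat, tE_hat, u0_hat = fit.x
print(f"t0={t0_hat:.1f} d, tE={tE_hat:.1f} d, u0={abs(u0_hat):.2f}")
```

The real detection problem adds parallax terms, blending, and survey systematics to the model, and runs the likelihood search blindly across hundreds of millions of stars.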
A third LDRD project combines Bayesian statistical methods and deep learning to increase the nation’s space situational awareness—that is, cataloging and interpreting debris, satellites, and other objects orbiting Earth. Unlike LLNL’s dark energy and dark matter studies, these telescope-generated data sets are comparatively small. “We need a newer predictive physics model to evaluate less data with higher accuracy,” notes Schneider.
LLNL scientists and collaborators are also looking at ways machine learning can play a role in such data-starved environments. For example, machine learning techniques can enable collaborative autonomy within a constellation of satellites tasked with tracking orbiting objects. In this setup, each satellite exchanges data with its neighbors until the network reaches consensus on a detected object’s location. This capability could help predict when space debris will hit a satellite or Earth.
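A minimal sketch of the consensus idea appears below, assuming a simple average-consensus scheme rather than the project's actual protocol. Six hypothetical satellites in a ring topology each hold a noisy position estimate of a tracked object and repeatedly average with their neighbors; because the uniform averaging weights on a ring are symmetric and doubly stochastic, every satellite converges to the network-wide mean estimate.

```python
import numpy as np

rng = np.random.default_rng(2)

# --- Each satellite holds a noisy estimate of an object's position ---
true_pos = np.array([7000.0, 1200.0, -300.0])   # km, hypothetical object
n_sats = 6
estimates = true_pos + rng.normal(0, 5.0, (n_sats, 3))

# --- Ring topology: each satellite talks to its two neighbors ---
neighbors = {i: [(i - 1) % n_sats, (i + 1) % n_sats] for i in range(n_sats)}

# --- Average-consensus iteration: repeatedly mix with neighbors ---
x = estimates.copy()
for _ in range(200):
    x_new = np.empty_like(x)
    for i in range(n_sats):
        group = [i] + neighbors[i]
        x_new[i] = x[group].mean(axis=0)   # local averaging step
    x = x_new

# All satellites agree on the average of the initial estimates
spread = np.abs(x - x.mean(axis=0)).max()
print(f"consensus position: {x[0].round(2)}, max disagreement: {spread:.1e} km")
```

No satellite ever sees all the raw data, yet each ends up with the same fused estimate, which is the property that makes such schemes attractive for constellations with limited cross-links.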
Pictured (left to right): Michael Schneider, Nate Golovich, Bob Armstrong, Eric Green, Will Dawson, Roger Pearce, Josh Meyers, and Jason Bernstein.
Not pictured: Brian Bauman, James Buchanan, George Chapline, Ryan Dana, Imene Goumiri, Dan Merl, Caleb Miller (external collaborator), Amanda Muyskens, Ben Priest, Kerianne Pruett, Eddie Schlafly, and Travis Yeager.
Previous team members and external collaborators: Jonathan DuBois, Julia Ebert, Ya Ju Fan, Maya Gokhale, Chandrika Kamath, Noah Lifset, Eisha Nathan, Karen Ng, Matt Otten, Geoff Sanders, Karan Shaw, and Tom Zick.