Data Science in the News


NNSA and Cornelis Networks to collaborate on next-generation high-performance networking

May 4, 2022 - 
The Next-Generation High Performance Computing Network (NG-HPCN) project for the NNSA’s Advanced Simulation and Computing (ASC) program will enable NNSA to partner with Cornelis Networks on the co-design, development, and productization of next-generation interconnect technologies for HPC. The project is led by LLNL for the NNSA Tri-Labs: LLNL, Los Alamos, and Sandia national laboratories. The resulting...

Accelerating the path to precision medicine

March 22, 2022 - 
LLNL joined the Transforming Research and Clinical Knowledge in Traumatic Brain Injury (TRACK-TBI) consortium in 2018. The national, multiyear, multidisciplinary effort, led by the University of California, San Francisco in collaboration with Lawrence Berkeley and Argonne national laboratories and other leading research organizations and universities, combines neuroimaging, blood-based...

COVID-19 R&D: Computing responds to pandemic

Jan. 19, 2022 - 
When the COVID-19 pandemic began, the Laboratory immediately started seeking solutions to the myriad challenges posed by the global crisis. The Computing Directorate jumped right in with research and development activities that combine molecular screening to inform antiviral drug experimentation; a generative molecular design software platform to optimize properties of antiviral drugs; an...

Digital twins for cancer patients could be ‘paradigm shift’ for predictive oncology

Dec. 16, 2021 - 
A multi-institutional team, including an LLNL contributor, has proposed a framework for digital twin models of cancer patients that researchers say would create a “paradigm shift” for predictive oncology. Published online in Nature Medicine on November 25, the proposed framework for Cancer Patient Digital Twins (CPDTs) — virtual representations of cancer patients using real-time data — would...

Building confidence in materials modeling using statistics

Oct. 31, 2021 - 
LLNL statisticians, computational modelers, and materials scientists have been developing a statistical framework that helps researchers better assess the relationship between model uncertainties and experimental data. The Livermore-developed framework is intended to assess sources of uncertainty in strength model inputs, recommend new experiments to reduce those sources of uncertainty...

Data Science Challenge welcomes UC Riverside

Oct. 11, 2021 - 
Together with LLNL’s Center for Applied Scientific Computing (CASC), the DSI welcomed a new academic partner to the 2021 Data Science Challenge (DSC) internship program: the University of California (UC) Riverside campus. The intensive program has run for three years with UC Merced, and it tasks undergraduate and graduate students with addressing a real-world scientific problem using data...

Lab-led effort one of nine DOE-funded data reduction projects

Sept. 17, 2021 - 
An LLNL-led effort in data compression was one of nine projects recently funded by the DOE for research aimed at shrinking the amount of data needed to advance scientific discovery. Under the project—ComPRESS: Compression and Progressive Retrieval for Exascale Simulations and Sensors—LLNL scientists will seek better understanding of data-compression errors, develop models to increase trust in...
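The item above does not describe how ComPRESS works internally, but the general problem it targets can be illustrated with a toy example: lossy compression of floating-point simulation data trades storage size against a bounded reconstruction error. The sketch below is purely illustrative (uniform quantization plus zlib, with a hypothetical abs_error_bound parameter), not the project's actual compressor.

```python
# Illustrative sketch only: ComPRESS internals are not described in the item above.
# This shows one generic way to study the size-vs-error trade-off of lossy
# compression of floating-point simulation data: uniform quantization within an
# absolute error bound, followed by a standard lossless codec (zlib).
import zlib
import numpy as np

def lossy_compress(data: np.ndarray, abs_error_bound: float):
    """Quantize to integers so reconstruction error stays within the bound, then deflate."""
    step = 2.0 * abs_error_bound
    quantized = np.round(data / step).astype(np.int32)
    payload = zlib.compress(quantized.tobytes(), level=9)
    return payload, quantized.shape, step

def decompress(payload: bytes, shape, step: float) -> np.ndarray:
    ints = np.frombuffer(zlib.decompress(payload), dtype=np.int32).reshape(shape)
    return ints.astype(np.float64) * step

# Stand-in "simulation" field; real use cases would involve multi-dimensional mesh data.
field = np.sin(np.linspace(0.0, 20.0, 1_000_000))
payload, shape, step = lossy_compress(field, abs_error_bound=1e-3)
recon = decompress(payload, shape, step)

print("compression ratio:", field.nbytes / len(payload))
print("max abs error:", np.abs(field - recon).max())  # stays within the requested bound
```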

Inaugural industry forum inspires ML community

Sept. 16, 2021 - 
LLNL held its first-ever Machine Learning for Industry Forum (ML4I) on August 10–12. Co-hosted by the Lab’s High-Performance Computing Innovation Center (HPCIC) and Data Science Institute (DSI), the virtual event brought together more than 500 enrollees from the Department of Energy (DOE) complex, commercial companies, professional societies, and academia. Industry sponsors included...

Brian Gallagher combines science with service

June 20, 2021 - 
Brian Gallagher works on applications of machine learning for a variety of science and national security questions. He’s also a group leader, student mentor, and the new director of LLNL’s Data Science Challenge. The Lab has enabled Gallagher to combine scientific pursuits with leadership positions and people-focused responsibilities. “For a long time, my primary motivation was learning new...

Laser-driven ion acceleration with deep learning

May 25, 2021 - 
While advances in machine learning over the past decade have made significant impacts in applications such as image classification, natural language processing and pattern recognition, scientific endeavors have only just begun to leverage this technology. This is most notable in processing large quantities of data from experiments. Research conducted at LLNL is the first to apply neural...

Conference papers highlight importance of data security to machine learning

May 12, 2021 - 
The 2021 Conference on Computer Vision and Pattern Recognition, the premier conference of its kind, will feature two papers co-authored by an LLNL researcher that aim to improve the understanding of robust machine learning models. Both papers include contributions from LLNL computer scientist Bhavya Kailkhura and examine the importance of data in building models, part of a Lab effort to...

Advanced Data Analytics for Proliferation Detection shares technical advances during two-day meeting

May 7, 2021 - 
The Advanced Data Analytics for Proliferation Detection (ADAPD) program recently held a two-day virtual technical exchange meeting. The goal of the meeting was to highlight ADAPD’s science-based, data-driven analysis work, which advances the state of the art to accelerate artificial intelligence (AI) innovation and develop AI-enabled systems that enhance the United States’...

A winning strategy for deep neural networks

April 29, 2021 - 
LLNL continues to make an impact at top machine learning conferences, even as much of the research staff works remotely during the COVID-19 pandemic. Postdoctoral researcher James Diffenderfer and computer scientist Bhavya Kailkhura, both from LLNL’s Center for Applied Scientific Computing, are co-authors on a paper—“Multi-Prize Lottery Ticket Hypothesis: Finding Accurate Binary Neural...

Virtual seminar series explores data-driven physical simulations

April 6, 2021 - 
The rapidly growing fields of artificial intelligence (AI) and machine learning (ML) have become cornerstones of LLNL’s data science research activities. The Lab’s scientific community regularly publishes advancements in both AI/ML applications and theory, contributing to international discourse on the possibilities of these compelling technologies. The large volume of AI/ML scientific...

Winter hackathon highlights data science talks and tutorial

March 24, 2021 - 
The Data Science Institute (DSI) sponsored LLNL’s 27th hackathon on February 11–12. Held four times a year, these seasonal events bring the computing community together for a 24-hour period where anything goes: Participants can focus on special projects, learn new programming languages, develop skills, dig into challenging tasks, and more. The winter hackathon was the DSI’s second such...

Novel deep learning framework for symbolic regression

March 18, 2021 - 
LLNL computer scientists have developed a new framework and an accompanying visualization tool that leverages deep reinforcement learning for symbolic regression problems, outperforming baseline methods on benchmark problems. The paper was recently accepted as an oral presentation at the International Conference on Learning Representations (ICLR 2021), one of the top machine learning...

‘Self-trained’ deep learning to improve disease diagnosis

March 4, 2021 - 
New work by computer scientists at LLNL and IBM Research on deep learning models to accurately diagnose diseases from X-ray images with less labeled data won the Best Paper award for Computer-Aided Diagnosis at the SPIE Medical Imaging Conference on February 19. The technique, which includes novel regularization and “self-training” strategies, addresses some well-known challenges in the...
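The article names “self-training” as one ingredient of the award-winning technique. As a rough illustration of that general idea only (not the LLNL/IBM method or its regularization), the sketch below shows a basic pseudo-labeling loop: a model trained on a small labeled set labels the unlabeled examples it is confident about, then retrains on the combined data. The data, confidence threshold, and LogisticRegression stand-in model are all placeholders.

```python
# Rough illustration of generic self-training (pseudo-labeling), not the
# LLNL/IBM method or its regularization: a model trained on a small labeled set
# assigns labels to confident unlabeled examples, then retrains on the union.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_labeled = rng.normal(size=(50, 10))                  # small labeled set (placeholder data)
y_labeled = (X_labeled[:, 0] > 0).astype(int)
X_unlabeled = rng.normal(size=(500, 10))               # larger unlabeled pool

model = LogisticRegression().fit(X_labeled, y_labeled)
for _ in range(3):                                     # a few self-training rounds
    confidence = model.predict_proba(X_unlabeled).max(axis=1)
    keep = confidence > 0.9                            # only trust confident pseudo-labels
    pseudo_y = model.predict(X_unlabeled[keep])
    X_train = np.vstack([X_labeled, X_unlabeled[keep]])
    y_train = np.concatenate([y_labeled, pseudo_y])
    model = LogisticRegression().fit(X_train, y_train)
```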

Lab researchers explore ‘learn-by-calibration’ approach to deep learning to accurately emulate scientific processes

Feb. 10, 2021 - 
An LLNL team has developed a “Learn-by-Calibrating” method for creating powerful scientific emulators that could be used as proxies for far more computationally intensive simulators. Researchers found the approach results in high-quality predictive models that are closer to real-world data and better calibrated than previous state-of-the-art methods. The LbC approach is based on interval...
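The teaser notes that LbC is built on interval-based calibration. As a loose, hedged sketch of that general idea (not the team's actual loss or architecture), the example below trains a small PyTorch emulator that predicts an interval per output and penalizes both missed targets and overly wide intervals; the IntervalEmulator class, the alpha weight, and the toy sine data are assumptions for illustration.

```python
# Loose sketch of interval-style calibrated regression for an emulator, assuming
# a generic coverage-plus-width loss; this is NOT the published LbC objective.
import torch
import torch.nn as nn

class IntervalEmulator(nn.Module):
    """Predicts a lower and upper bound for each output instead of a point value."""
    def __init__(self, in_dim: int, hidden: int = 64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.lo = nn.Linear(hidden, 1)
        self.hi = nn.Linear(hidden, 1)

    def forward(self, x):
        h = self.body(x)
        return self.lo(h), self.hi(h)

def interval_loss(lo, hi, y, alpha=0.1):
    """Penalize targets falling outside [lo, hi], plus a width term for sharpness."""
    miss = torch.relu(lo - y) + torch.relu(y - hi)   # zero when lo <= y <= hi
    width = (hi - lo).abs()
    return (miss + alpha * width).mean()

# Toy usage: emulate y = sin(x) from noisy samples (placeholder "simulator" data).
x = torch.rand(1024, 1) * 6.0
y = torch.sin(x) + 0.05 * torch.randn_like(x)
model = IntervalEmulator(in_dim=1)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):
    lo, hi = model(x)
    loss = interval_loss(lo, hi, y)
    opt.zero_grad()
    loss.backward()
    opt.step()
with torch.no_grad():
    lo, hi = model(x)
    point_estimate = (lo + hi) / 2   # interval midpoint serves as the emulator's prediction
```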

CASC research in machine learning robustness debuts at AAAI conference

Feb. 10, 2021 - 
LLNL’s Center for Applied Scientific Computing (CASC) has steadily grown its reputation in the artificial intelligence (AI)/machine learning (ML) community—a trend continued by three papers accepted at the 35th AAAI Conference on Artificial Intelligence, held virtually on February 2–9, 2021. Computer scientists Jayaraman Thiagarajan, Rushil Anirudh, Bhavya Kailkhura, and Peer-Timo Bremer led...

Lawrence Livermore computer scientist heads award-winning computer vision research

Jan. 8, 2021 - 
The 2021 IEEE Winter Conference on Applications of Computer Vision (WACV 2021) on Wednesday announced that a paper co-authored by LLNL computer scientist Rushil Anirudh received the conference’s Best Paper Honorable Mention award based on its potential impact on the field. The paper, titled “Generative Patch Priors for Practical Compressive Image Recovery,” introduces a new kind of prior—a...