Reports

DSI produces reports on critical topics to keep stakeholders informed of the latest developments in data science research. We often collaborate with strategic partners to deliver in-depth analyses of our findings and their implications for national security and the nation as a whole. Read below to learn more. 

AI in Eight Pages: Bridging Technology to Policy Through Science (2026)

Executive Summary

What is AI? 

Artificial Intelligence (AI) describes computational tools that process data to identify patterns, solve specific problems, and generate outputs that resemble human-created work. 

Why does it matter now? 

AI has crossed from research labs into daily life. Rapid adoption and deployment across the economy introduce new capabilities as well as new risks for safety, governance, and society.

Opportunities:  

  • Jobs and Workforce: AI shifts human roles to more creative and strategic tasks, creating opportunities for new skills and career paths.

  • Productivity: Automation frees employees from repetitive work, driving productivity, innovation, and cost savings.

  • Economic Growth: California AI startups received $71 billion in funding in 2024, representing 80% of the U.S. total.

  • Personalization and User Experiences: AI enables tailored services and products, improving individual outcomes and opening new business opportunities. 

Challenges: 

  • Economic and Job Disruption: Rapid AI adoption may cause job losses and wage polarization, challenging economic stability and workforce adaptation.

  • Technical Failures: AI systems are vulnerable to bias, limited explainability, and outages, creating real-world risks in sensitive domains.

  • Misinformation and Security: AI can generate false or misleading information and is susceptible to attacks and data breaches, raising concerns about safety, privacy, and trust.

Governance:

  • Balanced Approach: Collaboration between technical experts and regulators ensures that innovation is secure, ethical, and sustainable without stifling progress.

  • Responsible Development: Aligning AI with the public interest and fostering responsible use will require improving AI literacy, model transparency, and accountability.

  • Risk Mitigation and Targeted Safeguards: Safeguards should focus on safety, transparency, and accountability across the four AI pillars: data, compute, model design, and deployment.

Download Full Report (5.3 MB)

Download Executive Summary (21.4 MB)

Author List

Brian Giera, Cindy Gonzales, and Caspar L. Donnison 
Lawrence Livermore National Laboratory 

Safety in Artificial Intelligence: Challenges and Opportunities for the U.S. National Labs and Beyond (2024)

Executive Summary

This report addresses the critical and underexplored topic of artificial intelligence (AI) safety, as highlighted during the “Strategy Alignment on AI Safety” workshop convened by Lawrence Livermore National Laboratory (LLNL) and the University of California (UC) at the UC Livermore Collaboration Center (UCLCC) in April 2024.

Through a summary of keynote talks, panel discussions, and breakout sessions, world-leading AI safety experts from academia, industry, national labs, and government agencies addressed the importance of large-scale investment in AI safety research and capabilities.

With the field innovating at an unprecedented rate, there is increasing urgency to develop novel evaluation methodologies that allow full consideration of the risks and threats AI technologies pose in different domains. Quantitative metrics and effective methodologies for evaluating and auditing the “safeness” of how a given AI technology is trained, deployed, or regulated rely mainly on deep domain knowledge of specific applications and remain nascent for certain scenarios. This maturation gap could inadvertently create vulnerabilities that groups posing a threat to national security could exploit.

Additionally, the gap between the public’s and the research community’s perceptions of AI risks and rewards is significant. While numerous voices from the AI community have expressed concern that the risks are very high (the most pessimistic warning that future AI systems, if deployed incorrectly, could inflict extinction-level damage on humanity), the public is largely aware only of risks in low-impact scenarios. This discrepancy highlights the crucial need for researchers to articulate what, why, and when various AI risks matter as part of motivating funding requests. Thus, the call to action for this community is to pursue AI safety as a “Big Science” project on a scale comparable to the Manhattan Project. High risks and high payoffs are on the table, but safe AI is a fast-moving target, and large-scale investments are needed to guide the development of this technology in a responsible way.

The authors highlight the need for a multilayered solution that combines the development of new methods and algorithmic approaches to mitigate threats with active government participation in setting high industry standards and regulations based on state-of-the-art technology. The group agreed that, looking to the future, national labs are well positioned to play a role in the development of safeguarded AI technologies.

Download Full Report (3.7 MB)

Author List

Felipe Leno da Silva, Ruben Glatt, Brian Giera, Cindy Gonzales, Peer-Timo Bremer 
Lawrence Livermore National Laboratory 

Jessica Newman 
University of California, Berkeley 

Courtney Corley 
Pacific Northwest National Laboratory 

David Stracuzzi, Philip Kegelmeyer 
Sandia National Laboratories 

Francis Joseph Alexander 
Argonne National Laboratory 

Yarin Gal 
UK AI Safety Institute 

Mark Greaves 
Schmidt Sciences 

Adam Gleave 
FAR AI 

Timothy Lillicrap 
DeepMind & University College London 

Jean-Pierre Falet, Yoshua Bengio 
Mila & Université de Montréal