Executive Summary
This report discusses the critical and underexplored topic of artificial intelligence (AI) safety, as highlighted during the “Strategy Alignment on AI Safety” workshop convened by Lawrence Livermore National Laboratory (LLNL) and the University of California (UC) at the UC Livermore Collaboration Center (UCLCC) in April 2024.
Through a summary of keynote talks, panel discussions, and breakout sessions, the report captures how world-leading AI safety experts from academia, industry, national labs, and government agencies addressed the importance of large-scale investments in AI safety research and capabilities.
With the field innovating at an unprecedented rate, there is increasing urgency to develop novel evaluation methodologies that allow full consideration of the risks and threats posed by AI technologies in different domains. Quantitative metrics and effective methodologies for evaluating and auditing the “safeness” of how a given AI technology is trained, deployed, or regulated rely heavily on deep domain knowledge of specific applications and remain nascent for certain scenarios. This maturation gap could inadvertently create vulnerabilities exploitable by groups that pose a threat to national security.
Additionally, there is a significant gap between the public’s and the research community’s perceptions of AI risks and rewards. While numerous voices from the AI community have expressed concern that the risks are very high (the most pessimistic warning that future AI systems, if deployed incorrectly, could inflict extinction-level damage on humanity), the public is largely aware only of risks in low-impact scenarios. This discrepancy highlights the crucial need for researchers to articulate what AI risks matter, why, and when, as part of motivating funding requests. Thus, the call to action for this community is to pursue AI safety as a “Big Science” project on a scale comparable to the Manhattan Project. High risks and high payoffs are on the table, but safe AI is a fast-moving target, and large-scale investments are needed to guide the development of this technology responsibly.
The authors highlight the need for a multilayered solution that combines the development of new methods and algorithmic approaches to mitigate threats with active participation by government(s) in setting high industry standards and regulations grounded in state-of-the-art technology. The group agreed that, looking to the future, national labs are well positioned to play a role in the development of safeguarded AI technologies.
Link to Download Full Report (3.7MB)
Author List
Felipe Leno da Silva, Ruben Glatt, Brian Giera, Cindy Gonzales, Peer-Timo Bremer
Lawrence Livermore National Laboratory
Jessica Newman
University of California, Berkeley
Courtney Corley
Pacific Northwest National Laboratory
David Stracuzzi, Philip Kegelmeyer
Sandia National Laboratories
Francis Joseph Alexander
Argonne National Laboratory
Yarin Gal
UK AI Safety Institute
Mark Greaves
Schmidt Sciences
Adam Gleave
FAR AI
Timothy Lillicrap
DeepMind & University College London
Jean-Pierre Falet, Yoshua Bengio
Mila & Université de Montréal