May 2, 2024

UC/LLNL joint workshop sparks crucial dialogue on AI safety

Jeremy Thomas/LLNL

Representatives from DOE national laboratories, academia and industry convened recently at the University of California Livermore Collaboration Center (UCLCC) for a workshop aimed at aligning strategies for ensuring safe AI. The daylong event, attended by dozens of AI researchers, included keynote speeches by thought leaders, panels featuring technical researchers and policymakers, and breakout discussions that addressed the urgent need for responsible AI development.

Workshop organizers from LLNL's Data Science Institute (DSI) and Center for Advanced Signal and Image Sciences (CASIS) said the event’s goals included fostering collaboration between UC and the national labs, strategizing investments in AI and providing a platform for interdisciplinary dialogue focused on AI’s societal impact.

Throughout the workshop, speakers, panelists and attendees focused on algorithm development, the potential dangers of superhuman AI systems and the importance of understanding and mitigating risks to humans, as well as the urgent measures needed to address those risks both scientifically and politically. They also stressed the importance of engaging with policymakers to ensure AI technologies are deployed responsibly and kept out of the hands of bad actors seeking to misuse them. Others proposed developing a "Doomsday Clock"-style metric for quantifying and communicating the risk of human extinction due to AI.

Read more at LLNL News.