What's New
Frontiers in Artificial Intelligence for Science, Security and Technology (FASST)
The Department of Energy proposes the FASST initiative to advance national security, attract and build a talented workforce, harness AI for scientific discovery, address energy challenges, and develop technical expertise necessary for AI governance. Visit the FASST web page, download the fact sheet, and watch the AI at DOE video.
Monthly Newsletter
Don't miss out on DSI news. Subscribe to our newsletter and read the latest volume.
Upcoming Seminar
Oct. 3: Stuart Russell, UC Berkeley. Contact DSI-Seminars [at] llnl.gov for a WebEx invite.
Data Scientist Spotlight
Paige Jones
Software Developer
Paige Jones has been a software developer in LLNL’s Enterprise Application Services division for three years. She is responsible for integrating commercial off-the-shelf tools and software into LLNL’s internal systems, developing and enhancing web applications, and exploring cutting-edge technologies for potential use at Livermore. With a B.S. in computer information systems from California State University, Chico, Jones is currently advancing her expertise with an M.S. in computer science at Georgia Tech.

Jones is an avid advocate for outreach and STEM education and participates in recruitment, Girls Who Code, and Science Accelerating Girls’ Engagement (SAGE), striving to inspire the next generation in science, technology, engineering, and mathematics. In pursuit of this goal, she recently served on the organizing committee for the Lab’s 2024 Women in Data Science (WiDS) datathon. “WiDS plays a critical role in building a supportive data science community, helps ensure that resources reach underrepresented groups, and empowers women in their technical endeavors,” Jones says. “I am grateful for the opportunity to participate in WiDS and plan the WiDS datathon, and I am excited for what the future holds for women in data science!”
Recent Research
Evaluating Trust and Safety of LLMs
Amid the skyrocketing popularity of large language models (LLMs), Livermore researchers are taking a closer look at how these AI systems hold up under measurable scrutiny. LLMs are generative AI tools trained on massive amounts of data to produce text-based responses to queries. The technology has the potential to accelerate scientific research in numerous ways, from cybersecurity applications to autonomous experiments. But even if a billion-parameter model has been trained on trillions of data points, can we rely on its answers? Two LLNL co-authored papers examining LLM trustworthiness (how a model uses data and makes decisions) were accepted to the 2024 International Conference on Machine Learning, one of the world’s most prominent AI/ML conferences. “This technology has a lot of momentum, and we can make it better and safer,” says Bhavya Kailkhura, who co-wrote both papers. Read more via LLNL Computing.