
What's New

Monthly Newsletter

Don't miss the latest DSI news. Subscribe to our newsletter & read the latest volume.

Newsletter archive

Upcoming Seminar

Oct. 3: Stuart Russell, UC Berkeley. Contact DSI-Seminars [at] llnl [dot] gov for a WebEx invite.

Seminar archive

Data Scientist Spotlight

Paige Jones

Software Developer

Paige Jones has been a software developer in LLNL’s Enterprise Application Services division for three years. She is responsible for the integration of commercial off-the-shelf tools and software into LLNL’s internal systems, the development and enhancement of web applications, and the exploration of cutting-edge technologies for potential use at Livermore. With a B.S. in Computer Information Systems from California State University, Chico, Jones is currently advancing her expertise with an M.S. in computer science at Georgia Tech. She is an avid advocate for outreach and STEM education and participates in recruitment, Girls Who Code, and Science Accelerating Girls Engagement. She strives to inspire the next generation in the diverse realms of science, technology, engineering, and mathematics. In pursuit of this goal, Jones recently served on the organizing committee for the Lab’s 2024 Women in Data Science (WiDS) datathon. “WiDS plays a critical role in building a supportive data science community, helps ensure that resources reach underrepresented groups, and empowers women in their technical endeavors,” Jones says. “I am grateful for the opportunity to participate in WiDS and plan the WiDS datathon, and I am excited for what the future holds for women in data science!”

Recent Research

Evaluating Trust and Safety of LLMs


Amid the skyrocketing popularity of large language models (LLMs), Livermore researchers are taking a closer look at how these AI systems hold up under measurable scrutiny. LLMs are generative AI tools trained on massive amounts of data to produce text-based responses to queries. This technology has the potential to accelerate scientific research in numerous ways, from cybersecurity applications to autonomous experiments. But even if a billion-parameter model has been trained on trillions of data points, can we rely on its answers? Two LLNL co-authored papers examining LLM trustworthiness—how a model uses data and makes decisions—were accepted to the 2024 International Conference on Machine Learning, one of the world’s most prominent AI/ML conferences. “This technology has a lot of momentum, and we can make it better and safer,” says Bhavya Kailkhura, who co-wrote both papers. Read more via LLNL Computing.

More news | Selected publications

Opportunities


Open Data Initiative


Careers & Internships


Upcoming Events

Featured Research