In upholding its mission of centralizing data science activity at LLNL, the DSI launched an informal event series on July 25, 2018. The new “bull sessions” are intended to facilitate networking, brainstorming, and problem-solving among data scientists and engineers across the Lab.
Dan Faissol, a member of the DSI’s Data Science Council, says, “We hope these bull sessions will also serve as a venue for team-forming for writing proposals and as a platform for disseminating new tools or advances in the field. The format has been specifically engineered to accomplish these goals.”
For example, the format is flexible. Presentations are accepted with or without slides and for any duration from 1 to 15 minutes. Presenters may use their time for a range of purposes—from reviewing a paper or showing project results to explaining a technical challenge or describing a new approach. Building awareness is key: Participants can share ideas, solicit feedback, and expose their work to a larger audience.
Faissol co-organizes the bull sessions with Computational Engineering Division colleague Alan Kaplan. They emphasize that all data science “enthusiasts” are welcome to attend, even if they are not presenting. Each session takes place in the afternoon, allowing for an easy transition offsite where employees can continue to get to know one another—in other words, happy hour at a local taproom.
The first bull session featured eight presenters and drew approximately 40 attendees. Returning summer intern Jiachen Yang presented his work on mean field games (MFGs), a class of models, formulated as coupled differential equations, that describe large-scale collective behavior. “Almost all work in this area has been analytical, so there is great opportunity for machine learning to make an impact by bridging the mathematics with real phenomena via data-driven methods,” he says. Yang joined the bull session to see how other researchers view the potential of MFGs. In doing so, he met a fellow scientist with a background in game theory and MFGs.
Machine learning researcher Dave Widemann used his presentation time to describe an inverse solution for the wave equation. “I gave an example using a compute graph framework that is normally used for neural networks. The goal was to show that optimization and reinforcement learning can be done for more complex physics simulations such as computational fluid dynamics,” he explains. Like Yang, Widemann looks forward to future bull sessions. “There’s a lot more research happening here than we had time to cover in one afternoon,” he says.
According to Faissol, many participants expressed excitement about the new series. He notes, “People approached the speakers afterward to continue technical discussions beyond the Q&A, and in at least one case, they were meeting each other for the first time and exchanged contact information after realizing they work on similar topics.”