Sept. 24, 2025

ParaView-MCP levels the playing field for complex scientific visualization

Elliot Jaffe/LLNL

Visualizing the structures, forces, and processes involved in scientific research projects is essential for clearly conveying complex aspects of experiments. However, outside of designing, running, and analyzing their experiments, few scientists have enough time to commit to learning the software tools used to create scientific visualizations.

Researchers Shusen Liu, Haichao Miao, and Peer-Timo Bremer from LLNL’s Center for Applied Scientific Computing (CASC) set out to enable scientists to use a visualization tool without extensive training and to allow direct, expert input into the visualization design process. They focused on ParaView, a premier, open-source application for scientific visualization used across the National Laboratories. The outcome of their study, called ParaView-MCP, lets users interact with the application through natural-language and visual inputs instead of the typical graphical user interface (GUI), which can appear daunting to novice users. Their work lowers the barrier to entry for ParaView’s capabilities while enabling autonomous analysis of complex datasets via AI-driven decision support.

The team’s approach was to pair ParaView with a multimodal large language model (MLLM). In addition to text, MLLMs are trained on additional modalities such as images or audio, making them well suited to ParaView’s visually rich functionality. The team leveraged the reasoning, command-execution, and vision capabilities of MLLMs so that users can operate ParaView directly through text prompts and images to achieve a desired output.

Language models and software applications are not compatible by default, however. To let a user drive ParaView with text or image input, the team adopted the Model Context Protocol (MCP). Introduced by the AI company Anthropic in late 2024, MCP is an open-source standard for integrating existing data sources and programs with AI tools—as Anthropic phrases it, “like a USB-C port for AI applications.” The LLNL team used MCP to bridge the gap between user input and the ParaView application so that instructions could be properly interpreted and carried out in the software.
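To make the bridging idea concrete, the sketch below illustrates the general pattern in simplified form: a server exposes named “tools” that a language model can invoke with structured arguments, and each tool wraps an application action. This is not the actual ParaView-MCP implementation—a real server would use Anthropic’s MCP SDK and ParaView’s Python API, both of which are stubbed out here so the example is self-contained.

```python
# Simplified illustration of the MCP bridging pattern (an assumption for
# illustration, not ParaView-MCP's actual code): tools are registered by
# name, and an LLM's tool call arrives as JSON and is dispatched to the
# matching function. In a real server, each tool body would call into
# ParaView's Python API (e.g., paraview.simple).
import json

TOOLS = {}

def tool(fn):
    """Register a function as an LLM-callable tool, keyed by its name."""
    TOOLS[fn.__name__] = fn
    return fn

@tool
def load_dataset(path: str) -> str:
    # A real server would open the file in ParaView here.
    return f"loaded {path}"

@tool
def apply_contour(value: float) -> str:
    # A real server would add a contour filter to the active source here.
    return f"contour at isovalue {value}"

def handle_call(request_json: str) -> str:
    """Dispatch one tool call, e.g. '{"tool": "apply_contour", ...}'."""
    req = json.loads(request_json)
    result = TOOLS[req["tool"]](**req.get("arguments", {}))
    return json.dumps({"result": result})

# The model asks to contour the data; the server executes and replies.
print(handle_call('{"tool": "apply_contour", "arguments": {"value": 0.5}}'))
```

The key design point is that the model never touches the GUI: it only sees the tool names and argument schemas, while the server translates each call into application commands and returns the result (including rendered images, in the multimodal case) for the model to interpret.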

“[Using an MCP], we can allow any LLM to control complex software, such as ParaView. Imagine this tool as [a useful agent] that sits with a user, executing natural language instructions in ParaView and also explaining the results. [MCPs lessen] the trade-off between the time investments and the benefits of using such software. By minimizing time investment, we can accelerate discovery and insights,” says Haichao Miao.

This approach to user–application interaction is poised to amplify the utility and accessibility of visualization tools like ParaView—and potentially many other application types as well. By lowering the experience needed to begin using such programs, it empowers scientists whose training and expertise lie elsewhere to weigh in more directly on how their research ought to be accurately communicated.
