Computer modeling has been essential to scientific research for more than half a century, since the advent of computers powerful enough to handle modeling's computational load. Models simulate natural phenomena to help scientists understand their underlying principles. Yet even the most complex models running on supercomputers, which may contain millions of lines of code and generate billions of data points, never simulate reality perfectly.

Experiments, in contrast, have been fundamental to the study of natural phenomena since science's earliest days. However, some of today's complex experiments generate too much data for the human mind to interpret, or too much of some data types and not enough of others.

To improve the fidelity of complex computer models and to wrangle the growing volume of data, Livermore researchers are developing an array of hardware, software codes, and artificial intelligence techniques, such as machine learning, that they call cognitive simulation (CogSim). Researchers will use CogSim to find large-scale structures in big data sets, teach existing models to better mirror experimental results, and create a feedback loop between experiments and models that accelerates research advances. CogSim's goal is ambitious: to become a fourth pillar of scientific discovery, joining theory, experiment, and computer modeling.

Read more in Science & Technology Review.