“Nature Machine Intelligence” Study: Language models from artificial intelligence can predict how the human brain responds to visual stimuli
Large language models (LLMs) from the field of artificial intelligence can predict how the human brain responds to visual stimuli. This is shown in a new study, “High-Level Visual Representations in the Human Brain Are Aligned with Large Language Models,” published in Nature Machine Intelligence by Professor Adrien Doerig (Freie Universität Berlin) together with colleagues from Osnabrück University, the University of Minnesota, and Université de Montréal. For the study, the team used LLMs similar to those behind ChatGPT.

Cognitive neuroscientist Adrien Doerig is a guest professor at the Cognitive Computational Neuroscience Lab, Freie Universität Berlin. Image credit: Joëlle Schwitguébel
Bernstein member involved: Adrien Doerig
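The study’s central claim is that representations derived from language models can predict measured brain activity. As a rough illustration of what such an analysis can look like, the Python sketch below fits a linear encoding model from hypothetical LLM caption embeddings (standing in for the study’s “semantic fingerprints” of scene descriptions) to synthetic voxel responses. All data, dimensions, and parameters here are placeholder assumptions, not the published study’s materials or pipeline.

```python
# Illustrative sketch only: a linear "encoding model" that predicts per-voxel
# brain responses from LLM-derived embeddings of scene descriptions.
# All arrays below are synthetic placeholders (random numbers); the dimensions,
# regularization, and scoring are assumptions for illustration.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_images, embed_dim, n_voxels = 1000, 768, 500

# Stand-in for LLM sentence embeddings of each image's caption ("semantic fingerprints").
caption_embeddings = rng.standard_normal((n_images, embed_dim))

# Stand-in for measured fMRI voxel responses; generated from a hidden linear
# mapping plus noise so the example contains recoverable signal.
hidden_weights = 0.1 * rng.standard_normal((embed_dim, n_voxels))
voxel_responses = caption_embeddings @ hidden_weights + rng.standard_normal((n_images, n_voxels))

X_train, X_test, y_train, y_test = train_test_split(
    caption_embeddings, voxel_responses, test_size=0.2, random_state=0
)

# Ridge regression from embeddings to voxels: one linear readout per voxel.
encoder = Ridge(alpha=10.0).fit(X_train, y_train)
predicted = encoder.predict(X_test)

# Score each voxel by correlating predicted and held-out responses.
voxel_corr = np.array(
    [np.corrcoef(predicted[:, v], y_test[:, v])[0, 1] for v in range(n_voxels)]
)
print(f"median held-out correlation across voxels: {np.median(voxel_corr):.3f}")
```

In this kind of setup, a voxel counts as well predicted when the correlation between predicted and held-out responses is reliably above chance; the study’s actual analyses with real fMRI data are considerably more involved.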
The researchers also trained computer vision models to predict these semantic fingerprints directly from the images. Guided in this way by linguistic representations, the vision models aligned with human brain responses better than state-of-the-art image classification systems did.
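As with the previous sketch, the snippet below only illustrates the general recipe under stated assumptions: one readout on placeholder image features is trained to regress the caption-embedding “fingerprints” (a stand-in for a language-guided vision model), another is trained as a plain category classifier, and each is compared with synthetic brain data via representational similarity analysis. The data are random and constructed so that the semantic readout wins; none of the names or numbers come from the study, and the study’s own alignment analyses may differ.

```python
# Illustrative sketch only: compare a "language-guided" readout with a
# category-trained baseline in terms of alignment with brain data, using
# representational similarity analysis (RSA). Everything is synthetic, and the
# brain data are built from the caption embeddings, so the semantic readout
# aligns better by construction.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr
from sklearn.linear_model import LogisticRegression, Ridge

rng = np.random.default_rng(1)
n_images, feat_dim, embed_dim, n_voxels, n_classes = 600, 512, 768, 400, 10

image_features = rng.standard_normal((n_images, feat_dim))  # placeholder backbone features
caption_embeddings = (
    0.05 * image_features @ rng.standard_normal((feat_dim, embed_dim))
    + rng.standard_normal((n_images, embed_dim))             # noisy "semantic fingerprints"
)
category_labels = rng.integers(0, n_classes, n_images)       # placeholder class labels
brain_responses = (
    0.05 * caption_embeddings @ rng.standard_normal((embed_dim, n_voxels))
    + rng.standard_normal((n_images, n_voxels))               # synthetic voxel data
)

# Language-guided readout: regress the semantic fingerprints from image features.
semantic_repr = Ridge(alpha=1.0).fit(image_features, caption_embeddings).predict(image_features)

# Category-trained baseline: class probabilities serve as the model's representation.
classifier = LogisticRegression(max_iter=1000).fit(image_features, category_labels)
category_repr = classifier.predict_proba(image_features)

def rsa_alignment(model_repr, brain_repr):
    """Spearman correlation between the two representational dissimilarity matrices."""
    rho, _ = spearmanr(pdist(model_repr, "correlation"), pdist(brain_repr, "correlation"))
    return rho

print("language-guided readout alignment:", round(rsa_alignment(semantic_repr, brain_responses), 3))
print("category baseline alignment:", round(rsa_alignment(category_repr, brain_responses), 3))
```

The point of the sketch is only the shape of the comparison: a vision model optimized to reproduce language-derived representations is scored against a category-trained one on how closely each matches brain activity.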
“Our results suggest that human visual representations mirror how modern language models represent meaning – which opens new doors for both neuroscience and AI,” says Doerig.
