A symphony of electrical signals and a dynamic tangle of connections between brain cells help us to make new memories. Using AI-powered models of groups of neurons, FMI researchers are working towards unlocking how the brain orchestrates this dance. Their latest study has achieved a major advance in accurately simulating the changes in the connections between neurons that sense the external environment, opening the door to a greater understanding of how countless brain cells transform sensations into perceptions and thoughts. Eventually, AI-powered tools may help to illuminate some of the workings of real brains.
Illustration: Friedemann Zenke/FMI
Bernstein members involved: Friedemann Zenke
Over the past few years, artificial intelligence — or AI — has started to revolutionize the world as we know it: some people now ask AI-based chatbots to write essays and summarize documents, others use AI-powered virtual assistants to send messages and control smart-home devices, others leverage the technology for drug discovery and development. Computational neuroscientist Friedemann Zenke uses AI to interrogate how the brain works.
In a study published today in Nature Neuroscience, researchers led by Friedemann Zenke investigated how specific groups of neurons adjust their connections in response to external stimuli. The work could help neuroscientists to understand how sensory neurons, which carry information about changes in the environment, make sense of the external world.
Zenke and his colleagues use mathematical tools and theories to study how networks of neurons in the brain work together to learn and store memories. By developing approaches to deal with the complexity of the human brain, Zenke’s team creates AI-based models of networks of neurons that can tell us useful things about the real organ.
“Everybody has a brain, yet we don’t really understand how it functions,” Zenke says. “Ultimately, our goal is to acquire some form of understanding — that’s because before we can get to disease, we have to understand how the healthy system works.”
Models of the mind
Zenke had an early glimpse of what a career in science would be like. His father, a cell biologist, introduced him early on to the environment of a biomedical lab. During weekends and school holidays, Zenke used to join his dad at work. “I fondly remember putting my finger into the vortex mixer as a child,” Zenke says. “But even at the time, I was most fascinated by the computers in the lab.”
Zenke eventually set out to study physics, and in the late 2000s he went on to work in the branch of physical science that investigates the fundamental building blocks of matter. Although Zenke found the field fascinating, the timescales of the experiments were too long. He hoped that his research could have a more immediate impact. The blossoming field of computational neuroscience, he realized, was prompting warp-speed advances in our understanding of the brain. “That’s what made me switch,” he says. Another aspect that drew Zenke to neuroscience is its intricacy. “It’s probably one of the most complex research topics at the moment, and it requires a diverse approach.”
After setting up his own group at the FMI, Zenke set out to study how individual neurons contribute to the formation of memories — a process that plays a vital role in learning, problem-solving and personal identity. When we see someone for the first time, for example, the brain activates specific groups of neurons, resulting in a unique pattern of neuronal activity that helps create a memory. But the only information that an individual neuron has about the external world is in the form of electrical spikes that it receives from — and then transmits to — other neurons. “How does a single neuron contribute to this computation, to the memory and recognition of, for instance, someone you met?” Zenke says.
Researchers in his group address this question using diverse approaches from mathematics, computer science and physics. Memories are made by changes in groups of neurons and the connections, or synapses, between them. So, the researchers simulate these groups of neurons, or neural networks, in the computer. Then, they use approaches from physics to get a theoretical understanding of what’s happening in the networks. “Physics brings the power of abstraction — trying to boil down a problem to the bare minimum, the simplest parts that you can understand,” Zenke says.
But in the brain, hundreds or thousands of neurons interact to form memories, and sometimes purely analytic approaches are not enough to understand how these cells compute information. That’s when the researchers turn to machine learning methods to generate large-scale simulations. One such technology is deep learning, which has been used in many recent artificial intelligence advances, including autonomous driving.
Deep learning is based on neural networks that mimic the information processing of the human brain, allowing it to “learn” from large amounts of data. “A neural network per se doesn’t do anything useful, it only starts doing something useful when you train it with an algorithm,” Zenke says.
As the algorithm “feeds” data to the neural network, the connections within the network change, leading to a more complex model. Such neural network models allow computational neuroscientists to explore questions about how the brain works, similar to what biologists do with living animals.
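This training loop can be sketched in a few lines of Python. Everything below is a toy illustration, not anything from Zenke's work: a tiny two-layer network starts with random connections, and plain backpropagation on the classic XOR task repeatedly nudges those connections to shrink the prediction error.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: XOR, a classic task that needs a hidden layer.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])

# A small two-layer network: 2 inputs -> 8 hidden units -> 1 output.
W1 = rng.normal(0.0, 1.0, (2, 8))
W2 = rng.normal(0.0, 1.0, (8, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

losses = []
for step in range(5000):
    h = sigmoid(X @ W1)        # hidden-layer activity
    out = sigmoid(h @ W2)      # the network's prediction
    err = out - y              # prediction error
    losses.append(float(np.mean(err ** 2)))
    # Backpropagation: pass the error back through the connections
    # and nudge each weight to shrink it. This nudging is the "learning".
    grad_out = err * out * (1.0 - out)
    grad_h = (grad_out @ W2.T) * h * (1.0 - h)
    W2 -= 0.5 * h.T @ grad_out
    W1 -= 0.5 * X.T @ grad_h

print(f"loss before training: {losses[0]:.3f}, after: {losses[-1]:.3f}")
```

Untrained, the network's answers are essentially random; after training, its connections encode the task, which is the sense in which an algorithm makes a network "useful".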
In-silico neuronal circuits
If researchers can design neural network models that perform similarly to the brain, that may offer an explanation for how the real organ computes information and stores memories, Zenke says. Over the past few years, his team has developed mathematical descriptions of how synapses change through experience. The researchers trained a spiking neural network, which mimics the electrical spikes that neurons use to communicate with each other, and found that this network has some remarkable similarities to the workings of real brains.
For example, experiments in animal models have shown that the proper balance of excitatory and inhibitory electric signals enables neurons to be active in some circumstances and muted in others. When Zenke’s team trained the spiking neural network to perform a specific task — for example, recognize spoken words from a sentence — the artificial neurons in the network developed a balance between excitatory and inhibitory inputs, without being told to do so. “That’s where the circle closes: the model reaches a balance that we can find in biology,” he says.
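A heavily simplified simulation conveys what this balance does. The leaky integrate-and-fire neuron below uses made-up constants unrelated to the study: driven by unopposed excitation it fires relentlessly, but once inhibition roughly cancels excitation its spiking becomes sparse and fluctuation-driven, as observed in cortical neurons.

```python
import numpy as np

rng = np.random.default_rng(1)

# A minimal leaky integrate-and-fire neuron receiving random excitatory
# and inhibitory input spikes. All constants are illustrative.
dt = 0.1                                  # ms, simulation step
tau = 20.0                                # ms, membrane time constant
v_rest, v_thresh, v_reset = 0.0, 1.0, 0.0

def simulate(w_exc, w_inh, steps=20000, rate=0.02):
    """Count output spikes for given excitatory/inhibitory weights."""
    v, spikes = v_rest, 0
    for _ in range(steps):
        exc = w_exc * (rng.random() < rate)   # excitatory input spike?
        inh = w_inh * (rng.random() < rate)   # inhibitory input spike?
        v += dt / tau * (v_rest - v) + exc - inh  # leaky integration
        if v >= v_thresh:                     # threshold crossed: spike
            v = v_reset
            spikes += 1
    return spikes

unbalanced = simulate(w_exc=0.3, w_inh=0.0)    # excitation unopposed
balanced = simulate(w_exc=0.3, w_inh=0.28)     # inhibition nearly cancels it
print("spikes, unbalanced:", unbalanced, "  balanced:", balanced)
```

In the balanced regime the mean input sits well below threshold, so only chance fluctuations trigger spikes — the irregular, sparse firing that the trained spiking network rediscovered on its own.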
In their latest study, Zenke and his colleagues asked how sensory networks represent the external world in neuronal activity. Sensory networks in the brain continually update their connections in response to external stimuli, but artificial neural networks typically don't: their connections only change when an algorithm trains them on data to predict specific outcomes. The researchers found a simple solution to this problem by tweaking some of the learning rules that govern how the artificial network learns from its inputs.
Previous learning rules were derived from experimental data, but they were missing one fundamental aspect: prediction. So, Zenke’s team developed learning rules that try to predict future sensory inputs for each neuron. “That’s the key ingredient that seems to change everything about what these networks can do,” Zenke says. The findings could help neuroscientists to make sense of many experimental results obtained in animal models.
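The flavor of such a rule can be illustrated with a generic delta rule. To be clear, this sketch is not the rule derived in the paper, and every parameter in it is arbitrary: a model neuron tries to predict each new sample of a noisy sinusoidal "stimulus" from its recent past, and adjusts its weights in proportion to the prediction error.

```python
import numpy as np

rng = np.random.default_rng(2)

# The "stimulus": a slowly varying sinusoid corrupted by noise. The
# model neuron sees recent samples and tries to predict the next one.
t = np.arange(2000)
signal = np.sin(0.05 * t) + 0.1 * rng.standard_normal(t.size)

window = 3          # how many past samples the neuron sees
w = np.zeros(window)
lr = 0.01           # learning rate (arbitrary)
errors = []

for i in range(window, t.size - 1):
    x = signal[i - window:i]          # recent sensory history
    prediction = w @ x                # the neuron's guess at the next input
    error = signal[i] - prediction    # prediction error
    w += lr * error * x               # delta rule: adjust to predict better
    errors.append(error ** 2)

print(f"mean squared error, first 100 steps: {np.mean(errors[:100]):.3f}")
print(f"mean squared error, last 100 steps:  {np.mean(errors[-100:]):.3f}")
```

After learning, the weights encode the temporal structure of the stimulus, so the prediction error collapses toward the irreducible noise — a simple version of the idea that predicting future input shapes what a sensory network represents.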
In the future, Zenke plans to create larger networks from connected neural circuits — a design principle used by the brain — to investigate how the real organ models the outside world, for example to make decisions or to evaluate other people's actions.
Combining neural circuits into large networks will give artificial models a rudimentary form of behavior, which would allow Zenke to compare artificial models to experimental findings by other researchers. The predictions generated by AI-powered models could also be tested in living animals, providing neuroscientists with extra tools for exploring how the brain works and encouraging breakthroughs that would otherwise take decades.
“At the FMI and in Basel, we have an excellent circuit neuroscience community that provides a vibrant, collaborative atmosphere,” Zenke says. “It’s a fantastic place to do this type of research.”