Bernstein Conference 2022

Location

TU Berlin

Date

September 13–16, 2022

Abstracts

Keynote Lecture

Sonja Hofer | University College London, UK
Making sense of what you see: cortical and thalamic circuits for vision

Invited Talks

Bing Brunton | University of Washington, USA
Tracking turbulent plumes with deep reinforcement learning

Christine Constantinople | New York University, USA
Distinct controllers for motivation and deliberation

Carina Curto | Pennsylvania State University, USA
Sequences and modularity of dynamic attractors in inhibition-dominated neural networks

Liset M de la Prida | Instituto Cajal, Spain
Understanding hippocampal activities using machine learning and data science tools

Juan Alvaro Gallego | Imperial College London, UK
Understanding the emergence of neural population dynamics underlying behaviour

Mehrdad Jazayeri | Massachusetts Institute of Technology, USA
Timing via counting using attractor networks in the entorhinal cortex

Gaby Maimon | The Rockefeller University, New York, USA
How brains add vectors

Andrew Saxe | University College London, UK
Why learn representations? Abstraction and generalization in a nonlinear deep network

Henning Sprekeler | Technische Universität Berlin, Germany
Top-down models of inhibitory circuits

Carsen Stringer | Janelia Research Campus, USA
Uncovering features of high-dimensional neural and behavioral data

Brains for Brains Young Researcher Award Winner

Simone Azeglio | Institut de l’Audition, Institut Pasteur, France
Activity-driven deep models for learning sound transformations across the auditory pathway

Contributed Talks

David Dahmen | Forschungszentrum Jülich, Germany
Strong recurrency of cortical networks constrains activity in low-dimensional subspaces

Paul Haider | University of Bern, Switzerland
Latent Equilibrium: A unified learning theory for arbitrarily fast computation with arbitrarily slow neurons

Kanghoon Jung | Johns Hopkins University, USA
Dopamine-mediated cellular programming of heuristic decisions

Ioannis Pisokas | University of Edinburgh, UK
How ants remember their way home

Aviv Ratzon | Technion, Israel Institute of Technology, Israel
Representational Drift As a Result of Implicit Regularization

Hazem Toutounji | University of Nottingham, UK
Selective Attention Aids Rapid Learning in Complex Environments

Sigrid Trägenap | Frankfurt Institute of Advanced Studies, Germany
Experience drives the development of novel, reliable cortical sensory representations from endogenous networks

Ivan Voitov | Sainsbury Wellcome Centre, UK
Cortical feedback loops bind distributed high-dimensional representations of working memory

Katharina Wilmes | University of Bern, Switzerland
Uncertainty-modulated prediction errors in cortical microcircuits

Satellite Workshops

Abstract:

In vitro cultures of neurons are one of the classic model systems in neuroscience, offering a good balance between complexity and experimental convenience. Thanks to this balance, in vitro cultures have supported the discovery of fundamental properties of neuronal networks, such as synaptic plasticity, E/I balance, and various homeostatic mechanisms. In recent years, major progress in experimental techniques and stem cell technologies has enabled greater control over the genetic and phenotypic heterogeneity of plated cells. Simultaneously, neuro-engineering and organoid technology have allowed for unprecedented control over the topological structure of networks.

The goal of this workshop is to bring together experts in experiments, theory, and data analysis to explore the insights into collective dynamics, network organization, and the pathological pathways of complex neuropsychiatric disorders that recent advances in in vitro neural models have enabled. We will also address ongoing experimental obstacles, such as the variability of network dynamics between preparations; discuss recently discovered ways to enrich the complexity of network activity; and offer a perspective on future research that further closes the gap with in vivo systems.

Confirmed speakers:

  • Martina Brofiga
  • Michela Chiappalone
  • Richard Gao
  • Barbara Genocchi
  • Pascal Monceau
  • Elisha Moses
  • Samora Okujeni
  • Liset M de la Prida
  • Jordi Soriano
  • Paul Spitzner
  • Shani Stern
  • Oleg Vinogradov
  • Michael Ziller

Abstract:

Current recording techniques make it possible to simultaneously record from thousands of neurons in multiple brain regions. Recent work exploiting these techniques shows a broadly distributed representation of task variables across the brain (e.g., Steinmetz et al., 2019; Musall et al., 2019). Furthermore, representations within regions appear to be mixed, even at the single-cell level (Fusi et al., 2016). These observations challenge the classical assignment of different computations to specialised brain regions. Instead of a series of compartmentalised computations, neuroscience is developing a new understanding of integrated computations that emerge from brain-wide interactions of different regions within nested feedback loops (Cisek, 2019). Yet our theoretical understanding of how neural dynamics enable behaviourally relevant computations is still largely limited to individual networks in isolation, although exciting new approaches that take multi-area interactions into account are emerging (see Abbott & Svoboda, 2020, for a recent volume of reviews).

This workshop will bring together experts on the emerging experimental, statistical, and network-modelling methods that will allow neuroscientists in the coming decade to quantify and analyse interactions between brain regions. It will be divided into six thematic sections:

  • Neural subspaces
  • Recurrent interactions
  • Multi-area interactions for flexible behavior
  • Latent variable modeling
  • Feedforward interactions
  • Probing interactions through perturbations

With the help of the audience and the speakers, we would like to address the following questions: What are the computational advantages of engaging multiple areas? What can architecture/structure tell us about a region’s role in distributed computations? What is the basic unit of computation? When are feedforward interactions enough, and what can we learn by studying recurrent interactions? Is there hope for properly estimating causal multi-region interactions statistically (e.g., distinguishing feedforward from feedback)?
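One way to make the notion of a shared "neural subspace" between two areas concrete is to measure canonical correlations between their population activities. The sketch below is purely illustrative and not from the workshop: the toy data, variable names, and dimensionality choices are our own assumptions. It builds two simulated areas driven by three shared latent factors and recovers the number of shared dimensions.

```python
# Illustrative sketch (not from the workshop): quantifying a shared
# subspace between two simultaneously recorded areas via canonical
# correlations. Toy data and all names are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n_trials = 500

# Two areas whose activity is driven by 3 shared latent factors plus noise.
latent = rng.standard_normal((n_trials, 3))
area_a = latent @ rng.standard_normal((3, 40)) + 0.1 * rng.standard_normal((n_trials, 40))
area_b = latent @ rng.standard_normal((3, 60)) + 0.1 * rng.standard_normal((n_trials, 60))

def canonical_correlations(x, y, dim=5):
    """Canonical correlations between the top-`dim` PC scores of x and y."""
    def pc_scores(z, d):
        # Left singular vectors = whitened principal-component scores.
        z = z - z.mean(axis=0)
        u, _, _ = np.linalg.svd(z, full_matrices=False)
        return u[:, :d]
    qx, qy = pc_scores(x, dim), pc_scores(y, dim)
    # Singular values of the cross-product of orthonormal bases
    # are the canonical correlations (all between 0 and 1).
    return np.linalg.svd(qx.T @ qy, compute_uv=False)

cc = canonical_correlations(area_a, area_b)
print(cc)  # first 3 values near 1 (shared latents), the rest much smaller
```

With three genuinely shared latents, the first three canonical correlations approach 1 while the remaining ones stay near chance level, giving a simple estimate of the shared-subspace dimensionality.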

Confirmed speakers:

  • Sophie Bagur
  • João Barbosa
  • Guillaume Bellec
  • Olivia Gozel
  • Mehrdad Jazayeri
  • Stephen Keeley
  • Christian Machens
  • Samuel Muscinelli
  • Cristina Savin
  • Maryam Shanechi
  • Joana Soldado-Magraner
  • Heike Stein
  • Ivan Voitov

Abstract:

Information processing in the brain is believed to arise from the coordinated response of neural activities in large populations across multiple areas. Such coordinated responses give rise to correlations in local circuits that reflect not only their internal state, but also the relevant information about the input to be processed. Understanding how such correlations are shaped by the interplay between network states and input properties is therefore essential for understanding how sensory information is transformed across the multiple processing stages of the brain. Recent experimental and numerical studies have addressed the question of how neural responses are shaped by both network states and input features. This includes work on network models that link computational performance to different dynamical regimes controlled by multiple mechanisms: the overall strength, heterogeneity, and spatial structure of connections; excitation-inhibition balance; and specific low-rank connectivity structures or local synaptic motifs.

Likewise, experimental and theoretical work has analyzed how neural dynamics are shaped by different features of the input, such as its spectrum or intrinsic dimensionality, identifying unique representations in neural activity. In this workshop, we bring together experts on correlated and coordinated neural dynamics with the goal of engaging in a joint discussion of how inputs (re)shape collective neural responses. How does the observed neural activity differ across cortical regions? How is this affected by input? And what does this mean for information processing? We want to address these and other questions, identify ways to test theoretical predictions experimentally, and discuss functional implications.

Confirmed speakers:

  • Guillermo Barrios Morales
  • Alex Cayco-Gajic
  • Juan Gallego
  • Matthieu Gilson
  • Chengcheng Huang
  • Matthias Loidolt
  • Alessandro Sanzeni
  • Friedrich Schuessler
  • Carsen Stringer
  • Roxana Zeraati

Abstract:

In this workshop, we focus on the dynamics of global cortical networks. Such a comprehensive perspective on cortical dynamics has become increasingly crucial for understanding brain function. It is now abundantly clear that various aspects of cognition rely on the coordinated (spontaneous and evoked) activation of many structures. The study of the coherent interaction of multiple distributed networks poses novel challenges, both experimental and conceptual, which call for the development of innovative approaches and analysis methods.

In this series of talks, we will discuss how recent advances are enabling us to understand how these functional networks form and disband, how they relate to anatomical connectivity, and how they process information. By bringing together various types of expertise and heterogeneous scientific backgrounds, we aim to present an overview of the state of the field and to identify the key open questions lying ahead. In particular, we will focus on bridging the gap between available experimental measures of local and distributed cortical activity and theoretical frameworks that offer models for evaluating how the state of these networks contributes to neural computations.

Confirmed speakers:

  • Sacha van Albada
  • Joana Cabral
  • Victor Jirsa
  • Majid Mohajerani
  • Federico Stella
  • Mark Woolrich

Abstract:

Neuronal circuits of the cerebral cortex underlie our abilities to perceive and recognize objects, form an internal model of dynamic environments, make decisions, and plan and execute complex actions. Although these functions and abilities appear universal, the design of even simple sensory cortical circuits was, surprisingly, repeatedly disrupted by major evolutionary transformations. Recent work has uncovered that the ancestral state of primary visual cortex exhibited neither localized receptive fields nor retinotopic order (1), and that mammalian V1 underwent an all-or-nothing transition at the origin of primates (2). Exciting discoveries in the paleontology and paleobiology of the mammalian brain indicate that cortical circuit evolution has been punctuated by coordinated encephalization bursts (3) in distinct lineages, potentially reflecting key innovations in visual processing strategy (4). The challenge of understanding and reconstructing such transitions adds a novel dimension to the quest to decode the principles of cortical processing and design.

The workshop “Major transitions in cortical circuit evolution” will bring together leading researchers who have pioneered next-generation comparative studies and who use predictive and data-driven modelling to identify mode shifts in visual cortical processing and to reconstruct the evolutionary trajectory of cortical circuits. The workshop is designed to make recent advances and open challenges accessible to a wide audience in computational and systems neuroscience and to foster a new wave of theoretical and comparative work on cortical circuit evolution.

(1) Fournier et al., Neuron, 2018; (2) Schmidt & Wolf, Curr Opin Neurobiol, 2021; (3) Bertrand et al., Science, 2022; (4) Luongo et al., eLife, 2021

Confirmed speakers:

  • Ornella Bertrand
  • David Hansel
  • Suzana Herculano-Houzel
  • Michael Ibbotson
  • Agostina Palmigiano
  • Jonas Rose
  • Kerstin Schmidt
  • Madineh Sedigh-Sarvestani
  • Mark Shein-Idelson
  • Fabian Sinz

Abstract:

The biological machinery that sustains life operates under constantly changing conditions. Organisms must react to these changes to maintain homeostasis, or favorable operational conditions. Such ideal conditions, or set-points, are the valleys to which life gravitates. In this workshop, we will delve into the cellular, circuit, and system mechanisms that facilitate ‘living in the valley’ for the nervous system. In the nervous system, changes can occur across a wide range of spatial and temporal scales. For example, fluctuations in nutrient availability can rapidly alter neural responses. Changes in the turnover rate of ion channels can alter firing patterns within hours. Similarly, behavioral states such as hunger can prompt feeding, and sleep can modify network connectivity. At a higher level of abstraction, animals must also learn, retain memories, and plan across their lifespan. We will address how neurons, networks, and the organism as a whole cope with change and build resilience in anticipation of it. We will emphasize that neurons are more than computational units – they are living machines.

Confirmed speakers:

  • Chaitanya Chintaluri
  • Kristine Heiney
  • Daniel Levenstein
  • Astrid Prinz
  • Carolina Rezaval
  • Alon Rubin
  • Michael Rule
  • Gina G Turrigiano
  • Inna Slutsky
  • Lee Susman
  • Friedemann Zenke

Abstract:

The Arbor simulator lets users design neuronal cells and networks in a user-friendly way, and because it simulates morphologically detailed cells with arbitrary dynamics, it is an interesting new choice for studying various kinds of plasticity. This workshop is meant to gather researchers and their research questions about plasticity, and to discover which features a simulator must provide to answer some of these shared questions. Arbor will be introduced, and some ongoing plasticity-related studies will be showcased. Attendees are then invited to share and discuss their research questions and to distil them into a roadmap that will drive Arbor development in the near future, enabling the study of those questions. We hope the workshop will be the starting point of a plasticity user group around the Arbor simulator, bringing focus to its development and, ultimately, answers to these research questions.

Confirmed speakers:

  • Hermann Cuntz
  • Sandra Diaz
  • Thorsten Hater
  • Brent Huisman
  • Han Lu
  • Stefan Rotter
  • Sebastian Schmitt

Abstract:

Surprise and related signals, such as novelty and prediction error, are believed to play important roles in the brain: They affect physiological indicators such as pupil diameter and EEG amplitude, trigger the release of neuromodulators such as norepinephrine, and influence behavior through shifts of attention, memory segmentation, and modulation of learning speed. How and why these signals are involved in all these different phenomena has been the subject of a long-standing debate in neuroscience and psychology. A systematic challenge in addressing the functional role of surprise-related signals is that different studies often refer to different quantities by “surprise” or use it interchangeably with “novelty”, “prediction error”, and “information gain”. In this workshop, we gather experimental and computational researchers from different sub-fields of neuroscience to discuss and review different concepts of surprise and their influence on physiological and behavioral measurements. Our goal is to take a step towards a consensus on the definitions of different surprise-related quantities and their functional roles in the brain.
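The distinction between these quantities can be made concrete with a toy Bayesian observer. The sketch below is illustrative only and not drawn from the workshop; the Bernoulli setting, grid-based belief, and all names are our own assumptions. It separates Shannon surprise (how improbable an observation was), signed prediction error, and Bayesian surprise (how much the observation changes the belief).

```python
# Illustrative sketch (not from the workshop): three distinct quantities
# often all called "surprise", for an observer tracking a coin's bias
# p(x=1) with a discrete belief over a grid of candidate probabilities.
import math

def bernoulli_observer(grid_size=101):
    """Uniform prior belief over candidate values of p(x=1)."""
    ps = [i / (grid_size - 1) for i in range(grid_size)]
    prior = [1.0 / grid_size] * grid_size
    return ps, prior

def observe(x, ps, belief):
    """Observe x in {0, 1}; return surprise measures and the posterior."""
    # Predicted probability of x=1 under the current belief.
    p1 = sum(p * w for p, w in zip(ps, belief))
    p_x = p1 if x == 1 else 1.0 - p1
    shannon = -math.log(p_x)           # Shannon surprise: rarity of x
    pred_err = x - p1                  # signed prediction error
    # Bayes' rule on the grid.
    like = [p if x == 1 else 1.0 - p for p in ps]
    z = sum(l * w for l, w in zip(like, belief))
    post = [l * w / z for l, w in zip(like, belief)]
    # Bayesian surprise: KL(posterior || prior) -- belief change, not rarity.
    bayes = sum(q * math.log(q / w) for q, w in zip(post, belief) if q > 0)
    return shannon, pred_err, bayes, post

ps, prior = bernoulli_observer()
s0, e0, b0, belief = observe(1, ps, prior)   # first observation: all high
for _ in range(50):                          # a long run of identical outcomes
    *_, belief = observe(1, ps, belief)
s1, _, b1, _ = observe(1, ps, belief)        # now the same outcome is boring
```

After many identical observations, both Shannon and Bayesian surprise for a further identical outcome shrink, but for different reasons: the outcome is no longer rare, and it no longer moves the belief.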

Confirmed speakers:

  • Franziska Brändle
  • Irene Cogliati Dezza
  • Maria Eckstein
  • Alireza Modirshanechi
  • Sean O’Toole
  • Ilya Monosov

Abstract:

Why does symmetry encoding matter for neuroscience? There exists an intimate relationship between how natural phenomena evolve in time and how we represent the world by measuring it with our senses. On the one hand, the best mathematical model of the world we possess, the physical laws, can be characterised on the basis of invariants and conserved quantities (Noether’s theorem). On the other hand, in order to perceive and thus interact with the external environment, we need to create robust neural representations from the “data” collected through sensory processing.

It is therefore inevitable that neural representations, as well as the structure and dynamics of neuronal circuits, are affected by the organisational properties dictated by physics. Awareness of this fact, and incorporating it into the definition of computational and deep learning models of brain function, can allow for more robust learning and better generalisation properties. This workshop aims to unify, under a common framework, theories and models that consider invariant representations in vision, audition, olfaction, touch, motor control, spatial navigation, and memory.
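A minimal, self-contained example of an invariant representation, offered here only as an illustration and not as part of any workshop contribution: the magnitude of the discrete Fourier transform is unchanged by circular translation of a signal, so it encodes the pattern while discarding its position.

```python
# Illustrative sketch: the DFT magnitude spectrum is invariant under the
# group of circular shifts -- a toy instance of a symmetry-respecting
# representation. Signal and names are hypothetical.
import numpy as np

def shift_invariant_rep(signal):
    """Representation unchanged by circular translation of the input."""
    # A circular shift only multiplies each DFT coefficient by a phase
    # factor, so taking the magnitude removes the shift entirely.
    return np.abs(np.fft.fft(signal))

rng = np.random.default_rng(0)
x = rng.standard_normal(64)
x_shifted = np.roll(x, 17)          # same pattern, translated

r1 = shift_invariant_rep(x)
r2 = shift_invariant_rep(x_shifted)
print(np.allclose(r1, r2))          # True: the representation is invariant
```

The same idea, generalised from the translation group to other symmetry groups, underlies much of the equivariant-network literature the workshop theme touches on.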

Confirmed speakers:

  • Michael A. Casey
  • Tamar Flash
  • Joram Keijser
  • David Klindt
  • Justin Lieber
  • Alexandra Libby
  • Sophia Sanborn
  • Thomas Serre
  • Tatyana Sharpee
  • Christian Shewmake
  • Hiba Sheheitli