Bernstein SmartSteps 2021 | Fall Series

Thursday, November 4, 4 pm CET

  • Miriam Henning (Silies lab) | University of Mainz, Germany
    An optimal population code for global motion estimation in local direction-selective cells
  • Barna Zajzon (Morrison lab) | Forschungszentrum Jülich, Germany
    Representation transfer and signal denoising through topographic modularity

Thursday, November 25, 4 pm CET

  • Han Lu (Vlachos lab) | University of Freiburg, Germany
    Homeostatic structural plasticity of neuronal connectivity triggered by optogenetic stimulation
  • Golan Karvat (Diester lab) | University of Freiburg, Germany
    Spontaneous activity competes with externally evoked responses in sensory cortex

Thursday, December 9, 4 pm CET

  • Bin Wang (Aljadeff lab) | University of California San Diego, USA
    A nonlinear shot noise model for calcium-based synaptic plasticity
  • Helene Schreyer (Gollisch lab) | University Medical Center Göttingen, Germany
    Nonlinear spatial integration in retinal bipolar cells shapes the encoding of artificial and natural stimuli


An optimal population code for global motion estimation in local direction-selective cells

Global motion processing, convergent evolution, population code, direction-selectivity, Drosophila

Neuronal computations are matched to optimally encode the sensory information that is available and relevant for the animal. However, the physical distribution of sensory information is often shaped by the animal’s own behavior. One prominent example is the encoding of optic flow fields that are generated during self-motion of the animal and therefore depend on the type of locomotion. How evolution has matched computational resources to the behavioral constraints of an animal is not known. Here we use in vivo two-photon imaging to record from a population of >3,500 local direction-selective cells. Our data show that the local direction-selective T4/T5 neurons in Drosophila form a population code that is matched to represent optic flow fields generated during translational and rotational self-motion of the fly. This coding principle for optic flow is reminiscent of the population code of local direction-selective ganglion cells in the mouse retina, where four direction-selective ganglion cell types encode four different axes of self-motion encountered during walking (Sabbah et al., 2017). However, in flies we find six different subtypes of T4 and T5 cells that, at the population level, represent six axes of self-motion of the fly. The four uniformly tuned T4/T5 subtypes described previously represent a local snapshot (Maisak et al., 2013). The encoding of six types of optic flow in the fly, as compared to four in the mouse, might be matched to the higher degrees of freedom encountered during flight. Thus, a population code for optic flow appears to be a general coding principle of visual systems, resulting from convergent evolution but matched to the individual ethological constraints of each animal.
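The matched-filter idea behind such a population code can be illustrated with a toy readout. This is an illustrative sketch only: the one-dimensional ring of sensors, the idealized flow fields, and all numbers are assumptions, not the recorded T4/T5 data. Each self-motion axis defines a template of local preferred directions, and the population response is the projection of an incoming flow field onto that template.

```python
import numpy as np

# Toy matched-filter readout (illustrative assumptions: a 1D ring of
# motion sensors and idealized flow fields, not the recorded data).
n = 360
phi = np.linspace(0, 2 * np.pi, n, endpoint=False)  # sensor azimuths

def rotation_flow(phi):
    """Yaw rotation: the same horizontal image motion at every azimuth."""
    return np.ones_like(phi)

def translation_flow(phi, heading=0.0):
    """Forward translation: horizontal flow varies as sin(azimuth - heading)."""
    return np.sin(phi - heading)

def template_response(flow, template):
    """Population readout: local flow projected onto preferred directions."""
    return float(np.dot(flow, template)) / len(template)

rot_template = rotation_flow(phi)
trans_template = translation_flow(phi)

r_rot = template_response(rotation_flow(phi), rot_template)        # ~1.0
r_cross = template_response(translation_flow(phi), rot_template)   # ~0.0
r_trans = template_response(translation_flow(phi), trans_template) # ~0.5
```

Each template responds strongly to its own flow field and is orthogonal to the other, which is the sense in which a set of subtypes can tile the animal's axes of self-motion.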

Representation transfer and signal denoising through topographic modularity

Information transfer, modular networks, topographic maps, denoising

To prevail in a dynamic and noisy environment, the brain must create reliable and meaningful representations from sensory inputs that are often ambiguous or corrupted. Since only information that permeates the cortical hierarchy can influence sensory perception and decision-making, it is critical that noisy external stimuli are encoded and propagated through different processing stages with minimal signal degradation. Here we hypothesize that stimulus-specific pathways akin to cortical topographic maps may provide the structural scaffold for such signal routing. We investigate whether the feature-specific pathways within such maps, characterized by the preservation of the relative organization of cells between distinct populations, can guide and route stimulus information throughout the system while retaining representational fidelity. We demonstrate that, in a large modular circuit of spiking neurons comprising multiple sub-networks, topographic projections are not only necessary for accurate propagation of stimulus representations, but can also help the system reduce sensory and intrinsic noise. Moreover, by regulating the effective connectivity and local E/I balance, modular topographic precision enables the system to gradually improve its internal representations and increase the signal-to-noise ratio as the input signal passes through the network. Such a denoising function arises beyond a critical transition point in the sharpness of the feed-forward projections, and is characterized by the emergence of inhibition-dominated regimes where population responses along stimulated maps are amplified and others are weakened. Our results indicate that this is a generalizable and robust structural effect, largely independent of the specifics of the underlying model.
Using mean-field approximations, we gain deeper insight into the mechanisms responsible for the qualitative changes in the system’s behavior and show that these depend only on the modular topographic connectivity and stimulus intensity. The general dynamical principle revealed by the theoretical predictions suggests that such a denoising property may be a universal, system-agnostic feature of topographic maps, and may lead to a wide range of behaviorally relevant regimes observed under various experimental conditions: maintaining stable representations of multiple stimuli across cortical circuits; amplifying certain features while suppressing others (winner-take-all circuits); and endowing circuits with metastable dynamics (winnerless competition), assumed to be fundamental in a variety of tasks.
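The critical-precision effect can be caricatured in a small rate model. This is a hedged sketch under stated assumptions: the module count, gain, and inhibition strength are arbitrary choices, not the parameters of the actual spiking network. A fraction p of feedforward projections stays within the matching module, the rest spreads evenly, and shared inhibition subtracts the mean rate; above a critical p the stimulated module is amplified while the others are suppressed, and below it the signal is washed out.

```python
import numpy as np

# Caricature of denoising via topographic precision (module count, gain,
# and inhibition strength are illustrative, not the spiking-model values).
M = 4        # modules per layer
L = 5        # feedforward layers
gain = 2.0   # excitatory gain
w_inh = 1.0  # strength of shared inhibition

def propagate(r, p):
    """One feedforward step: a fraction p of projections targets the
    matching module, the rest spreads evenly; shared inhibition subtracts
    the mean rate, and rates are rectified."""
    h = p * r + (1 - p) / (M - 1) * (r.sum() - r)
    return np.clip(gain * (h - w_inh * r.mean()), 0.0, None)

def contrast(r):
    """Stimulated module (index 0) versus the mean of the others."""
    return r[0] - r[1:].mean()

rng = np.random.default_rng(0)
r0 = np.array([1.2, 1.0, 1.0, 1.0]) + 0.05 * rng.standard_normal(M)

sharp, diffuse = r0.copy(), r0.copy()
for _ in range(L):
    sharp = propagate(sharp, p=0.9)        # precise topographic projections
    diffuse = propagate(diffuse, p=1 / M)  # unstructured projections
# contrast(sharp) grows across layers; contrast(diffuse) collapses to ~0
```

With precise projections the inhibition-dominated update acts like a soft winner-take-all, so the representation of the stimulated module sharpens layer by layer; with unstructured projections all modules receive the same input and the stimulus identity is lost after one stage.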

Homeostatic structural plasticity of neuronal connectivity triggered by optogenetic stimulation

ACC, dendritic morphology, depressive disorder, network remodeling, spiking network model

Ever since Bliss and Lømo discovered the phenomenon of long-term potentiation (LTP) in the rabbit dentate gyrus in the 1960s, Hebb’s rule—neurons that fire together wire together—has gained popularity as an explanation of learning and memory. Accumulating evidence, however, suggests that neural activity is homeostatically regulated. Homeostatic mechanisms are mostly interpreted as stabilizing network dynamics. However, recent theoretical work has shown that linking the activity of a neuron to its connectivity within the network provides a robust alternative implementation of Hebb’s rule, albeit one based entirely on negative feedback. In this setting, both natural and artificial stimulation of neurons can robustly trigger network rewiring. We used computational models of plastic networks to simulate the complex temporal dynamics of network rewiring in response to external stimuli. In parallel, we performed optogenetic stimulation experiments in the mouse anterior cingulate cortex (ACC) and subsequently analyzed the temporal profile of morphological changes in the stimulated tissue. Our results suggest that the new theoretical framework combining neural activity homeostasis and structural plasticity provides a consistent explanation of our experimental observations.
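The negative-feedback principle at the heart of this framework can be sketched in a few lines. This is an illustrative toy under assumed parameters, not the full model of synaptic elements and random recombination: each neuron gains input connections when its firing rate is below a homeostatic set point and retracts them when it is above.

```python
import numpy as np

# Minimal negative-feedback sketch of homeostatic structural plasticity
# (the linear rate model and all parameters are illustrative assumptions).
eps = 5.0   # homeostatic set point (Hz)
nu = 0.05   # speed of element growth/retraction
ext = np.array([3.0, 8.0, 5.5])  # external drive per neuron (Hz)

def rate(k, ext):
    """Toy activity model: rate rises with drive and input element count."""
    return np.clip(ext + 0.5 * k, 0.0, None)

k = np.zeros(3)  # input elements (synapses) per neuron
for _ in range(2000):
    # grow elements below the set point, retract them above it
    k = np.clip(k + nu * (eps - rate(k, ext)), 0.0, None)

# The weakly driven neuron grows elements until its rate reaches the set
# point; neurons driven above the set point (e.g. by optogenetic
# stimulation) retract elements instead.
```

In the full framework, rewiring then follows because elements freed or created in this way are recombined across the network, so co-stimulated neurons can end up preferentially wired together purely through negative feedback.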

Spontaneous activity competes with externally evoked responses in sensory cortex

LFP, beta-burst, resting-state-network, cortex, somatosensory

The interaction between spontaneous and externally evoked neuronal activity is fundamental for a functional brain. Increasing evidence suggests that bursts of high-power oscillations in the 15-30 Hz beta band represent activation of resting-state networks and can mask perception of external cues. Yet a real-time demonstration of the effect of beta-power modulation on perception has been missing, and little is known about the underlying mechanism. In this talk I will present the methods we developed to fill this gap, together with our recent results. We used a closed-loop stimulus-intensity adjustment system based on online burst-occupancy analysis in rats performing a forepaw vibrotactile detection task. We found that the masking influence of burst occupancy on perception can be counterbalanced in real time by adjusting the vibration amplitude. Offline analysis of firing rates and local field potentials across cortical layers and frequency bands confirmed that beta power in the somatosensory cortex anticorrelated with sensory-evoked responses. Mechanistically, bursts in all bands were accompanied by transient synchronization of cell assemblies, but only beta bursts were followed by a reduction of firing rate. Our closed-loop approach reveals that spontaneous beta bursts reflect a dynamic state that competes with external stimuli.
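An offline caricature of the closed-loop logic might look like the following. The burst threshold, gain, and FFT-based band-power estimate are stand-in assumptions for illustration; the actual system runs online on streaming LFP.

```python
import numpy as np

# Offline caricature of closed-loop amplitude adjustment (threshold, gain,
# and the FFT band-pass are stand-ins; the real system processes LFP online).
fs = 1000  # sampling rate (Hz)

def beta_power(lfp):
    """Beta-band (15-30 Hz) power via FFT masking."""
    f = np.fft.rfftfreq(len(lfp), 1 / fs)
    spec = np.fft.rfft(lfp)
    spec[(f < 15) | (f > 30)] = 0
    return np.abs(np.fft.irfft(spec, len(lfp))) ** 2

def burst_occupancy(power, threshold=0.1):
    """Fraction of the analysis window spent above the burst threshold."""
    return float(np.mean(power > threshold))

def adjusted_amplitude(base_amp, occupancy, gain=1.0):
    """Counterbalance masking: raise vibration amplitude with occupancy."""
    return base_amp * (1.0 + gain * occupancy)

t = np.arange(0, 1, 1 / fs)
quiet = 0.1 * np.sin(2 * np.pi * 5 * t)      # low-frequency activity, no beta
bursty = quiet + np.sin(2 * np.pi * 20 * t)  # strong 20 Hz beta burst

occ_quiet = burst_occupancy(beta_power(quiet))   # ~0
occ_burst = burst_occupancy(beta_power(bursty))  # well above 0.5
amp = adjusted_amplitude(1.0, occ_burst)         # stimulus scaled up
```

High burst occupancy in the window preceding the stimulus thus translates into a stronger vibration, which is the counterbalancing principle of the closed-loop design.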

A nonlinear shot noise model for calcium-based synaptic plasticity

Synaptic plasticity, network model, memory maintenance, synaptic weight distribution

Activity-dependent synaptic plasticity is considered a primary mechanism underlying learning and memory. Yet it is unclear whether plasticity rules such as STDP measured in vitro apply in vivo. Network models with STDP predict that activity patterns (e.g., place-cell spatial selectivity) should change much faster than observed experimentally. We address this gap by investigating a nonlinear calcium-based plasticity rule fit to experiments done under physiological conditions. In this model, LTP and LTD result from intracellular calcium transients arising almost exclusively from synchronous coactivation of pre- and postsynaptic neurons. We analytically approximate the full distribution of nonlinear calcium transients as a function of pre- and postsynaptic firing rates and temporal correlations. This analysis directly relates activity statistics that can be measured in vivo to the changes in synaptic efficacy they cause. Our results highlight that both high firing rates and temporal correlations can lead to significant changes in synaptic efficacy. Using mean-field theory, we show that the nonlinear plasticity rule, without any fine-tuning, gives a stable, unimodal synaptic weight distribution characterized by many strong synapses that remain stable over long periods of time, consistent with electrophysiological and behavioral studies. Moreover, our theory explains how memories encoded by strong synapses can be preferentially stabilized by the plasticity rule. We confirmed our analytical results in a spiking recurrent network. Interestingly, although most synapses are weak and undergo rapid turnover, the fraction of strong synapses is sufficient to support realistic spiking dynamics and maintain the network’s cluster structure. Our results provide a mechanistic understanding of how stable memories may emerge at the behavioral level from an STDP rule measured under physiological conditions.
Furthermore, the plasticity rule we investigate is mathematically equivalent to other learning rules which rely on the statistics of coincidences, so we expect that our formalism will be useful to study other learning processes beyond the calcium-based plasticity rule.
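A two-threshold calcium rule with a coincidence nonlinearity, in the spirit described above, can be sketched as follows. All parameters here are illustrative assumptions, not the values fitted to physiological data: calcium influx is supralinearly amplified when pre- and postsynaptic spikes coincide, and the weight potentiates or depresses while calcium sits above the corresponding threshold.

```python
import numpy as np

# Two-threshold calcium rule with a coincidence nonlinearity (all
# parameters are illustrative assumptions, not experimentally fitted values).
dt = 1e-3        # time step (s)
tau_ca = 0.02    # calcium decay constant (s)
c_pre, c_post = 0.4, 0.4     # calcium influx per pre/post spike
boost = 3.0                  # supralinear amplification for coincidences
theta_d, theta_p = 1.0, 1.6  # depression / potentiation thresholds
gamma_d, gamma_p = 0.1, 0.2  # learning rates

def run(pre, post, w=0.5):
    """Integrate calcium over paired spike trains and update the weight."""
    ca = 0.0
    for s_pre, s_post in zip(pre, post):
        ca -= dt / tau_ca * ca            # exponential decay
        jump = c_pre * s_pre + c_post * s_post
        if s_pre and s_post:              # synchronous coactivation
            jump *= boost
        ca += jump
        if ca > theta_p:                  # potentiation above theta_p
            w += gamma_p * dt
        elif ca > theta_d:                # depression above theta_d
            w -= gamma_d * dt
    return w

spikes = np.zeros(1000, dtype=int)
spikes[::100] = 1  # 10 Hz regular train over 1 s

w_coincident = run(spikes, spikes)  # coincidences cross theta_p
w_pre_only = run(spikes, np.zeros(1000, dtype=int))  # calcium stays subthreshold
```

The sketch makes the key property of the abstract concrete: isolated pre- or postsynaptic spikes leave the weight untouched because the resulting calcium transients never cross either threshold, while synchronous coactivation drives calcium above the potentiation threshold.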

Nonlinear spatial integration in retinal bipolar cells shapes the encoding of artificial and natural stimuli

Neural computation, retina, bipolar cells, spatial integration, natural stimuli

Vision begins in the eye, and what the “retina tells the brain” is a major interest in visual neuroscience. To deduce what the retina encodes (“tells”), computational models are essential. The most important models of the retina currently aim to understand the responses of the retinal output neurons – the ganglion cells. Typically, these models make simplifying assumptions about the neurons in the retinal network upstream of ganglion cells. One important assumption is linear spatial integration. In this talk, I first define what it means for a neuron to be spatially linear or nonlinear and how we can measure these phenomena experimentally. Next, I introduce the neurons upstream of retinal ganglion cells, with a focus on bipolar cells, which are the connecting elements between the photoreceptors (the input to the retinal network) and the ganglion cells (the output). This pivotal position makes bipolar cells an interesting target for testing the assumption of linear spatial integration, yet because they are buried in the middle of the retina, their neural activity is challenging to measure. Here, I present bipolar cell data in which I ask whether spatial linearity holds under artificial and natural visual stimuli. Through diverse analyses and computational models, I show that bipolar cells are more complex than previously thought and can already act as nonlinear processing elements at the level of their somatic membrane potential. Furthermore, through pharmacology and current measurements, I illustrate that the observed spatial nonlinearity arises at the excitatory inputs to bipolar cells. In the final part of my talk, I address the functional relevance of the nonlinearities in bipolar cells through combined recordings of bipolar and ganglion cells, and I show that the nonlinearities in bipolar cells provide high spatial sensitivity to downstream ganglion cells.
Overall, I demonstrate that simple linear assumptions do not always apply and that more complex models are needed to describe what the retina “tells” the brain.
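The classic way to define and measure spatial (non)linearity is a contrast-reversing grating, and the logic can be illustrated with a toy model. This is an assumed caricature, not the recorded bipolar cell data: a linear receptive field cancels the two grating half-fields at every phase, whereas rectified subunits produce a positive, frequency-doubled response.

```python
import numpy as np

# Toy contrast-reversing grating test of spatial integration
# (an assumed caricature, not the recorded bipolar cell data).
n = 100
grating = np.repeat([1.0, -1.0], n // 2)  # two half-fields of opposite sign

def linear_response(stim):
    """Linear spatial integration: uniform weighted sum over space."""
    return float(np.mean(stim))

def subunit_response(stim):
    """Nonlinear integration: rectify each subunit, then pool."""
    return float(np.mean(np.clip(stim, 0, None)))

phases = np.cos(2 * np.pi * np.arange(20) / 20)  # temporal reversal cycle
lin = [linear_response(p * grating) for p in phases]
sub = [subunit_response(p * grating) for p in phases]
# lin stays ~0 at every phase (the half-fields cancel exactly), while sub
# is positive at both reversal extremes, i.e. modulated at twice the
# reversal frequency -- the signature of nonlinear spatial integration
```

A measured frequency-doubled response to such a stimulus is therefore evidence that a cell pools rectified subunit inputs rather than integrating contrast linearly over space.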