Invited Lectures
James Di Carlo | McGovern Institute for Brain Research at MIT, USA
Reverse engineering human visual intelligence
Brent Doiron | University of Pittsburgh, USA
Old jobs for new inhibition: gain and stability in cortical networks with distinct inhibitory cell classes
Tatiana Engel | Cold Spring Harbor Laboratory, USA
Dynamics of cortical states during selective attention
Surya Ganguli | Stanford University, USA
Emergent elasticity in the neural code for space
Julijana Gjorgjieva | Max Planck Institute for Brain Research, Frankfurt, Germany
Shaping developing circuits by patterned spontaneous and early sensory activity
Vivek Jayaraman | Janelia Research Campus, Ashburn, USA
Towards a mechanistic understanding of navigational neural dynamics
Simon Laughlin | Cambridge University, UK
Pushing the limits
Sukbin Lim | NYU Shanghai, China
Inferring synaptic plasticity rules in cortical circuits from in vivo data
Timothy O’Leary | Cambridge University, UK
Bigger is better but too big is bad: how learning performance scales with neural circuit size
Eric Shea-Brown | University of Washington, Seattle, USA
What makes high-dimensional networks produce low-dimensional activity?
Tatjana Tchumatchenko | Max Planck Institute for Brain Research, Frankfurt, Germany
How to understand neural network dynamics via intracellular dynamics
Valentin Braitenberg Award Winner
Wulfram Gerstner | École Polytechnique Fédérale de Lausanne, Switzerland
Contributed Talks
Armin Bahl | Harvard University, USA
Neuronal mechanisms of evidence accumulation and decision making in the larval zebrafish
Alon Rubin | Weizmann Institute of Science, Rehovot, Israel
Revealing neural correlates of behavior without behavioral measurements
Louis Kang | University of California, Berkeley, USA
Replay arises naturally as a traveling wavefront in an entorhinal attractor network
Evelyn Tang | Max Planck Institute for Dynamics and Self-Organization, Göttingen, Germany
Effective learning is accompanied by high dimensional and efficient representations of neural activity
Johannes Zierenberg | Max Planck Institute for Dynamics and Self-Organization, Göttingen, Germany
Homeostatic plasticity and external input explain difference in neural spiking activity in vitro and in vivo
Giulio Bondanelli | École Normale Supérieure de Paris, France
Coding with transient trajectories in recurrent neural networks
Genis Prat-Ortega | Institut d'Investigacions Biomèdiques August Pi i Sunyer, Barcelona, Spain
Flexible categorization in perceptual decision making
Satellite Workshops
Sensory neurons: ‘predictive coding’ or ‘coding for predictions’?
Organizer: Matthew Chalk
Description:
The notion of sensory prediction has a long history in theories of neural coding. For example, the influential 'predictive coding' hypothesis posits that, to spare resources, neurons do not encode all sensory inputs but rather a prediction error: the difference between their received and expected sensory inputs. A more recent alternative is that sensory neurons instead 'code for predictions', preferentially encoding sensory signals that are informative about the future while discarding other, non-predictive signals.
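As a toy illustration of the two hypotheses (a hedged sketch with made-up parameters, not a model from any of the talks), consider an input composed of a slow, predictable component plus unpredictable noise: a 'predictive coding' unit transmits only the prediction error, from which a decoder that knows the predictive model can reconstruct the full input, while a unit that 'codes for predictions' transmits an estimate of the predictive component and discards the rest:

```python
import numpy as np

# Input = slow AR(1) component (informative about the future) + white noise
# (uninformative about the future). All numbers are illustrative assumptions.
rng = np.random.default_rng(0)
T, a = 5000, 0.9
slow = np.zeros(T)
for t in range(1, T):
    slow[t] = a * slow[t - 1] + rng.normal(scale=0.3)
noise = rng.normal(scale=0.5, size=T)
x = slow + noise                                   # full sensory input

# 'Predictive coding': encode only the prediction error e_t = x_t - a*x_{t-1}.
pred_err = x - a * np.concatenate(([0.0], x[:-1]))
recon = np.zeros(T)                                # decoder reconstruction
recon[0] = pred_err[0]
for t in range(1, T):
    recon[t] = a * recon[t - 1] + pred_err[t]      # recovers x exactly

# 'Coding for predictions': encode an estimate of the predictive (slow)
# component and discard the non-predictive noise, here via a leaky average.
alpha = 0.1
smooth = np.zeros(T)
for t in range(1, T):
    smooth[t] = (1 - alpha) * smooth[t - 1] + alpha * x[t]
```

The two schemes transmit different signals: the error code is lossless but not selective, whereas the predictive code keeps only what is informative about the future.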
Despite decades of research on predictive coding, many questions remain unanswered. For example, are sensory predictions important in determining what is encoded (i.e., do neurons selectively encode 'predictive' stimuli)? In addition, do sensory predictions play a role in determining how sensory signals are encoded (e.g., do neurons encode a 'prediction error' signal)? Finally, how, and over what timescales, do sensory circuits adjust their predictions based on experience and changes in the environment?
This workshop will bring together a broad group of theoretical and experimental researchers to discuss and debate the role of sensory predictions in neural coding. It will aim to build bridges between current theories of sensory prediction, clarifying what they have in common, and when they are opposed. Further, it will address how these different theories are supported by experiments. It is hoped that the resulting discussion will stimulate new ideas about how to test these theories, to uncover how sensory predictions are used by the brain to shape neural coding.
Speakers:
- Audrey Sederberg
- Bernhard Englitz
- Nicol Harper
- Wiktor Mlynarski
- Georg Keller
- Dirk Jancke
- Michael Berry
Offline hippocampal activity - Neural sequences and sharp-wave ripples
Organizers: Sen Cheng, José Donoso
Description:
The replay of neural sequences during hippocampal sharp wave-ripples (SWRs) has been implicated in several cognitive functions (e.g., memory consolidation, working memory, navigation, and planning), but the mechanisms by which the hippocampal network can acquire, sustain, and regenerate those sequences remain unclear.
With a few exceptions, both theoretical and experimental studies addressing this question focus on one of two aspects of the phenomenon: the local field potential (LFP) signature (i.e., the SWR complexes) or the underlying sequential activity. However, since different classes of LFP models impose different constraints on the mechanisms of replay, and vice versa, it is important to address these two aspects on common ground. The main purpose of this workshop is thus to bring together the dynamical and computational aspects of offline hippocampal activity, and thereby provide a common framework to study the phenomenon from a wider perspective. In particular, we aim to understand the relationship between the oscillatory behavior of the hippocampal network and the different types of sequential activity it can support.
Speakers:
- Ole Paulsen
- Attila Gulyás
- Paola Malerba
- David Foster
- Amir Azizi
- José R. Donoso
- Jozsef Csicsvari
The diversity of dynamical states in recurrent neural circuits
Organizers: David Dahmen, Viola Priesemann, Moritz Helias, Rainer Engelken
Description:
Experiments suggest a wealth of dynamical states in the brain. These range from asynchronous irregular activity to synchronizations, oscillations, activity waves or avalanches. Understanding the mechanisms that can give rise to such a diversity of network states in the healthy and diseased brain is a challenge both for experimentalists and theoreticians.
The implications of these different operating regimes for the network's ability to encode, process, and transmit information are also not well understood. In recurrent network models, the sensitivity of neural dynamics to small perturbations or noise can reveal features that govern the organization of the microscopic phase space. Optimal computational performance of neuronal networks has been hypothesized to lie close to phase transitions, where the dynamics exhibit universal behavior characterized by strong concerted fluctuations between neurons. The diversity of possible states and state transitions in a high-dimensional system such as cortex, however, permits a multitude of hypotheses on the "ground state" of different cortical regions.
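A minimal illustration of the near-critical regime mentioned above is a branching process (an illustrative toy with arbitrary parameters, not a model advocated by the organizers): avalanche sizes stay small in the subcritical regime and become large and heavy-tailed as the branching ratio approaches one:

```python
import numpy as np

def avalanche_sizes(m, n_trials, rng, cap=100_000):
    """Total avalanche size in a branching process: each active unit
    triggers a Poisson number of units with mean m in the next step."""
    sizes = np.empty(n_trials)
    for i in range(n_trials):
        active, total = 1, 1
        while active > 0 and total < cap:      # cap guards against runaways
            active = rng.poisson(m * active)
            total += active
        sizes[i] = total
    return sizes

rng = np.random.default_rng(1)
sub = avalanche_sizes(0.5, 2000, rng)    # subcritical: mean size 1/(1-m) = 2
near = avalanche_sizes(0.95, 2000, rng)  # near-critical: large, heavy-tailed
```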
In this workshop, we bring together experts working on theories to characterize the different dynamical states of recurrent neural networks and identify synaptic, neuronal, and network properties that shape the collective dynamics. We want to relate dynamical states to features of observed neural activity in different cortical regions, work out possibilities to test theoretical predictions by experiments, and discuss functional implications of the dynamics.
Speakers:
- Doiron B.
- Logiaco L.
- Brinkman B.
- Mastrogiuseppe F.
- Kadmon J.
- Pereira Obilinovic U.
- Shriki O.
- Mante V.
- di Santo S.
- Kushnir L.
- Kriener B.
Practical approaches to research data management and reproducibility
Organizers: Michael Denker, Thomas Wachtler
Description:
Advances in neuroscience technology and methodology have dramatically increased our ability to generate data of unprecedented volume and complexity, and the increasing complexity of experimental paradigms and models poses growing demands on data management to ensure reproducibility. As we use ever more powerful experimental, analytical, and modeling techniques, we also require sophisticated methods to support data handling, reproducibility, and collaboration. Although various tools have started to emerge that address some of these challenges, we must ask how these tools are best combined synergistically into complete, digitized, and documented workflows for data acquisition and analysis.
This workshop will present practical examples of methods and tools that enable researchers to keep track of, and in control of, their data and analysis workflows. Lyuba Zehl and Hiroaki Wagatsuma will demonstrate how metadata and data of highly complex experiments can be organized and integrated to enable automated and reproducible data processing. Andrew Davison, Julia Sprenger, Johannes Koester, and Sharon Crook will present tools for reproducible analysis and modeling workflows. Solutions for data access, collaborative sharing, and data publication will be presented by Roman Moucek, Michael Hanke, and Christian Garbers. Finally, in a joint session, several of the presenters will give a hands-on tutorial on combining tools for reproducible workflows and efficient collaboration, and give workshop participants the opportunity to explore directly how the tools can benefit their own work.
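As a minimal illustration of the kind of bookkeeping such workflow tools automate (a hypothetical sketch using only the Python standard library, not any presenter's tool), one can fingerprint the raw data and record the exact analysis parameters and environment alongside the derived results:

```python
import hashlib
import json
import platform
import sys
from datetime import datetime, timezone

def provenance_record(data_bytes, params):
    """Build a minimal provenance record: a cryptographic fingerprint of
    the raw data plus the analysis parameters and compute environment."""
    return {
        "data_sha256": hashlib.sha256(data_bytes).hexdigest(),
        "parameters": params,
        "python": sys.version.split()[0],
        "platform": platform.platform(),
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }

# Hypothetical example data and parameters, for illustration only:
record = provenance_record(b"raw recording bytes", {"filter_hz": 300, "bin_ms": 1})
print(json.dumps(record, sort_keys=True)[:60])
```

Storing such a record next to each result file makes it possible to verify later that an analysis was run on exactly the claimed data with exactly the claimed parameters.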
Speakers:
- Zehl L.
- Wagatsuma H.
- Legouée E.
- Sprenger J.
- Crook S.
- Koester J.
- Moucek R.
- Hanke M.
- Garbers C.
Adaptivity and Inhomogeneity in Neuronal Networks
Organizers: Ulrich Egert, Stefan Rotter
Description:
We hereby propose a workshop to discuss recent experimental findings and new theoretical concepts concerning the interaction between activity-dependent growth processes and structural inhomogeneity in neuronal networks of the brain.
A prevailing property of neuronal networks in the brain is that they are inhomogeneous in structure and composition. However, what appears homogeneous on one scale of observation may appear inhomogeneous on another: parameters like neuron density, patchy connectivity patterns, the distribution of neuron types, and the synaptic connections between them can be organized on local or global scales (e.g. Schmidt et al. '18, Brain Struct Funct 223:1409; Okujeni et al. '17, J Neurosci 37:3972; Ocker et al. '17, PLoS Comput Biol 13:e1005583). Such inhomogeneities undoubtedly have an impact on network activity dynamics (Litwin-Kumar & Doiron '12, Nat Neurosci 15:1498; Pernice et al. '11, PLoS Comput Biol 7:e1002059; Pernice et al. '13, Front Comput Neurosci 7:72) and, consequently, on the interpretation of experimental data. In addition to unavoidable statistical variability, pathological conditions, be it after peripheral amputations, stroke, dysplasia, epilepsy, etc., induce further changes to network structure. These changes, in turn, provoke adaptive responses on many levels, from synaptic plasticity to neurogenesis (Janz et al. '17, Cereb Cortex 27:2348). Pathological changes inherently have local aspects, which may lead to another type of inhomogeneity in the form of gradients. Examples are the periphery of the infarct zone, glial scars, or borders between sclerotic and healthy areas around epileptic foci (Häussler et al. '12, Cereb Cortex 22:26).
As a result, the dynamics of internal and external interaction as well as overall function may diverge considerably across networks of a particular general type with the same average properties.
Such inhomogeneities would have significant consequences from several perspectives. The statistical distribution of neuron types, synapses, connectivity motifs, and recurrent connectivity would become highly variable across space. Synaptic plasticity and homeostatic processes would lead to inhomogeneous distributions of synaptic weights, excitability, excitation/inhibition balance, and structural dynamics. These would further affect information processing and network stability (Gallinaro & Rotter '18, Sci Rep 8:3754; Landau et al. '16, Neuron 92:1106; Ostojic '14, Nat Neurosci 17:594; Teller et al. '14, PLoS Comput Biol 10:e1003796; Pernice et al. '11, '13; Jarvis et al. '10, Neuroinform 4:11; Effenberger et al. '15, PLoS Comput Biol 11:e1004420; Nagler et al. '11, Nat Phys 7:265). On the other hand, adaptive processes may counteract inhomogeneity (Okujeni et al. '17). Whether this is the case, whether it would be advantageous, and under which boundary conditions it might succeed are currently not known.
Speakers:
- Okujeni S.
- Levina A.
- Gallinaro J.
- Soriano J.
- Safavieh E.
- Hoffmann F.
- Häussler U.
- Ostojic S.
- Litwin-Kumar A.
- Merkt B.
- van Albada S.
Learning and recalling sequences of actions at the neuronal level and beyond
Organizers: Daniel Miner, Christian Tetzlaff
Description:
Living organisms rely on repeated sequences of actions to execute many of the important functions of their day-to-day lives. Some are stereotyped over nearly the entire lifespan of an organism, while others are learned and recalled on a much more rapid timescale. Numerous studies exist of the ways in which basic neural network models can learn and recall simple sequences of stimuli or patterns of activity, as do behavioral and applied studies examining the psychophysical process of learning sequences of actions. However, the gap between these two levels has not yet been bridged. We aim to bring together experts on neural network modeling, neural plasticity phenomenology, and the analysis and implementation of this sort of learning in applied scenarios, to begin a discussion that will help close this gap in levels of abstraction.
Specifically, we aim first to examine the mechanisms of neural plasticity (and associated homeostatic processes) and the network structures that allow sequences of stimuli or actions to be learned. We will then examine the network dynamics that allow the recall and execution of such patterns of action, and consider larger-scale designs that can integrate these mechanisms beyond the simplest test cases. Subsequently, we will engage with scientists in neuro-inspired robotics who are beginning to integrate such systems into their platforms, potentially in a closed-loop fashion, in order to form a cohesive chain of how these concepts arise from the most fundamental to the fully applied levels.
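One classical toy version of the first step, learning and recalling a sequence of activity patterns, is a heteroassociative network with an asymmetric Hebbian rule (an illustrative sketch with arbitrary sizes, not any speaker's specific model): the weight matrix maps each stored pattern onto its successor, so recall unrolls the sequence step by step.

```python
import numpy as np

rng = np.random.default_rng(2)
N, K = 100, 5                                   # neurons, sequence length
patterns = rng.choice([-1.0, 1.0], size=(K, N))

# Asymmetric Hebbian storage: each outer product links p_k to p_{k+1}.
W = np.zeros((N, N))
for k in range(K - 1):
    W += np.outer(patterns[k + 1], patterns[k]) / N

# Recall: cue with the first pattern and iterate the sign dynamics.
state = patterns[0].copy()
recalled = [0]
for _ in range(K - 1):
    state = np.sign(W @ state)
    overlaps = patterns @ state / N             # match against stored patterns
    recalled.append(int(np.argmax(overlaps)))
```

With random, nearly orthogonal patterns the crosstalk terms are small, so the network steps through the stored sequence in order.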
Speakers:
- Clopath C.
- Gerstner W.
- Triesch J.
- Miner D.
- Herpich S.
- Kempter R.
- Larsen J.
- Del Papa B.
- Sandamirskaya Y.
- Tani J.
Internally generated network dynamics: experiment and theory
Organizers: Anton Sirota, Arne Meyer
Description:
Many brain functions rely upon coordinated activity in neural circuits and even across multiple brain areas. Yet, how neurons and brain areas communicate and interact to represent sensory input, perform computations, and guide behaviour remains poorly understood. Recent advances in experimental tools allow monitoring and manipulation of neural activity at large scale and with an unprecedented level of detail in animals performing complex behaviors. One of the major challenges in computational neuroscience is to unravel the dynamical computations performed by neural circuits manifested in the emergent collective dynamics.
The workshop seeks to bring together experimental and computational work to discuss recent advances and to promote interaction between the two fields. Specifically, the workshop will focus on (1) experimental work combining large-scale recordings and optogenetic perturbations to investigate internally generated activity patterns, (2) data-driven computational models of low-dimensional neural dynamics, and (3) biologically grounded models. We will discuss the utility and the challenges of modern approaches to internal dynamics associated with motor- and memory-related processes, assessed using large-scale neural population activity or field potential recordings. What are the biological insights that can be gained from data-driven models? How can the dynamics of a network-based model be compared to recorded brain activity? And how can we apply perturbations to uncover properties of the internal neural dynamics? These are some of the questions we aim to address during the workshop.
Speakers:
- Sirota A.
- Einevoll G.
- Schaefer A.
- Luczak A.
- Barry C.
- Stark E.
- Echeveste R.
- Durstewitz D.
- Duncker L.
- Jazayeri M.
Emergent function in non-random neural networks
Organizers: Friedemann Zenke, Guillaume Hennequin, Tim Vogels
Description:
Computation in the brain occurs through complex interactions in highly structured, non-random networks. Moving beyond traditional approaches based on statistical physics, engineering-based approaches are opening new vistas on circuit computation by providing novel ways of i) building artificial yet fully functional model circuits, ii) dissecting their dynamics to identify new circuit mechanisms, and iii) reasoning about population recordings made in diverse brain areas across a range of sensory, motor, and cognitive tasks. Thus, the same "science of real-world problems" that is behind the accumulation of increasingly rich neural datasets is now also being recognized as a vast and useful set of tools for their analysis.
This workshop aims at bringing together researchers who build and study structured network models, spiking or otherwise, that serve specific functions. Our speakers will present their neuroscientific work at the confluence of machine learning, optimization, control theory, dynamical systems, and other engineering fields, to help us understand these recent developments, critically evaluate their scope and limitations, and discuss their use for elucidating the neural basis of intelligent behaviour.
Speakers:
- Kraynyukova N.
- Stroud J.
- Mejias J.
- Ostojic S.
- Stock C.
- Hennequin G.
- Gilra A.
- Guetig R., Clopath C.
- Zenke F.
- Marton C.
Neural computation of behaviorally relevant stimuli
Organizers: Jan Benda, Rüdiger Krahe
Description:
One influential contribution of computational neuroscience has been to emphasize the connection between the statistics of sensory stimuli and the design of the respective sensory systems. Although the importance of using natural stimuli for probing neural responses is well recognized, the term is often used too broadly to include all stimuli that are not entirely artificial. On a closer look, the specific way an animal moves and interacts with its specific environment creates natural stimuli with statistics that are species-specific. Obviously, only a subset of stimuli of the full species-specific stimulus set is relevant for behavioral decisions of the animal. Determining what parts of the full stimulus set are behaviorally relevant is a highly non-trivial task that requires interrogating the animals under natural conditions. The results of this quest can then be used to probe the nervous system and try to understand mechanisms of sensory processing as well as their evolution.
Our symposium showcases a number of different approaches and sensory systems in which behaviorally relevant natural stimuli have been quantified, and their implications for neural processing have been investigated. The three speakers of the first block address behaviors related to object detection. Paul Szyszka introduces a rapid coding scheme used by insect olfactory systems for tracking odor plumes. Jacob Engelmann shows how sensory flow in the context of active electrolocation can be used to optimize stimulus detection. Active echolocation is the topic of the contribution by Yossi Yovel. He investigates how bats sample their environment prior to complex flight maneuvers. After the coffee break, we continue with communication signals. Julie Elie sheds light on the meaning and processing of the different song elements of zebra finches. Jörg Henninger obtained data on electrosensory scenes experienced by courting weakly electric fish in their natural habitats in the Central American rainforest that raise disturbing questions about our understanding of sensory processing, not only in this well-studied system.
Speakers:
- Szyszka P.
- Yovel Y.
- Stöckl A.
- Engelmann J.
- Elie J.
- Henninger J.
Representational dynamics - How can we understand the temporal evolution of distributed brain activity patterns?
Organizers: Tim Kietzmann, Niko Kriegeskorte
Description:
Brain information processing is inherently multivariate and highly dynamic. Perception, cognition, and motor control all rely on rapid recurrent computations, with representations emerging, transmuting, and waning according to the brain's own rhythm as information flows continually and bidirectionally between interacting areas. The field is increasingly acquiring multichannel measurements with high temporal resolution in humans (using MEG, EEG, and ECoG) and animals (using multi-electrode recordings and ECoG). Advances in technologies for measuring brain activity will further increase the spatial and temporal resolution at which we can observe brain activity.
The challenge is how to make sense of the detailed signatures of brain information processing that lie latent in such data. A number of studies have engaged the complexity of spatiotemporal brain-activity patterns by analysing patterns of activity as a function of time. Established methods include temporal-window pattern decoding and representational similarity analysis, as well as visualisations of dynamic representational trajectories using dimensionality-reduction methods.
Dynamic multivariate analysis is going to be a key element of systems neuroscience in human and nonhuman models. However, it is unclear how best to characterise and visualise dynamic representations, which features to focus on (evoked, induced, frequency-dependent), how to analyse causal interactions and information exchange between brain areas, and how to perform inference comparing alternative models of brain information processing. This workshop brings together leading researchers in the field for a set of highly interactive talks that communicate novel neuroscientific insights along with a detailed explanation of the more generally applicable analyses that enabled them.
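As a minimal sketch of temporal-window pattern decoding (synthetic data and a simple nearest-centroid classifier, both chosen only for illustration), one can decode the experimental condition at each time point and watch accuracy rise above chance exactly when condition information is present in the multichannel signal:

```python
import numpy as np

rng = np.random.default_rng(3)
n_trials, n_chan, n_time = 100, 20, 50
labels = np.repeat([0, 1], n_trials // 2)          # two conditions
X = rng.normal(size=(n_trials, n_chan, n_time))    # trials x channels x time

# Plant condition information only from time step 20 to 35:
signal = rng.normal(size=n_chan)
X[labels == 1, :, 20:35] += signal[:, None]

# Split trials, then decode each time point with a nearest-centroid classifier.
train = np.arange(0, n_trials, 2)
test = np.arange(1, n_trials, 2)
acc = np.empty(n_time)
for t in range(n_time):
    c0 = X[train][labels[train] == 0, :, t].mean(axis=0)
    c1 = X[train][labels[train] == 1, :, t].mean(axis=0)
    d0 = np.linalg.norm(X[test][:, :, t] - c0, axis=1)
    d1 = np.linalg.norm(X[test][:, :, t] - c1, axis=1)
    acc[t] = np.mean((d1 < d0) == (labels[test] == 1))
```

Plotting `acc` against time yields the familiar decoding time course: chance level outside the information window, high accuracy inside it.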
Speakers:
- Woolgar A.
- Cichy R.
- Isik L.
- Kietzmann T.
- DiCarlo J.
- Ganguli S.
Neural dynamics underlying cognitive processing in humans
Organizers: Johannes Sarnthein, Bryan Strange
Description:
Human brains perform remarkably well in complex tasks. Fine-grained electrophysiological correlates of cognitive performance have become available with the help of patients who have been implanted with electrodes. These electrodes record activity ranging from single-neuron action potentials to neuronal assembly activity in local field potentials. Compared to non-invasive methods, these recordings provide a much more detailed picture of neuronal computations.
In our workshop, Florian Mormann will first present how long-term memory of complex items is reflected in long-term recordings of concept cells in the hippocampus. Bryan Strange will present how neuronal assembly activity in the amygdala explains emotional processing. Johannes Sarnthein will show how verbal working memory is mediated by both hippocampal neuronal spiking as well as long-range synchrony in hippocampal-cortical oscillations. Finally, Leila Reddy will present how single neurons in the human hippocampus encode associations between related stimuli.
Speakers:
- Mormann F.
- Strange B.
- Sarnthein J.
- Reddy L.
Dimensions of Neural Coding, Computation and Communication
Organizers: Andreas Herz, Alon Rubin
Description:
Recent advances in multi-electrode and optical imaging technologies enable simultaneous recordings of hundreds or even thousands of neurons. Since neural coding, computation, and communication likely rely on coordinated activity patterns across large cell populations, such data facilitate the study of the global structure of neural function, which cannot be revealed by analyzing functional attributes at the single-neuron level.
While the activity of a population of N neurons can be pictured in an N-dimensional space, its behaviorally relevant dynamics under natural conditions may reside within much lower-dimensional manifolds. Methods for dimensionality estimation and dimensionality reduction can help identify these manifolds, study the collective neural dynamics, and understand population codes together with their intrinsic correlation structures and trial-to-trial variability. This approach is relevant both for exploratory research and for testing hypotheses, from neural connectivity to overall neural function.
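A common dimensionality estimate of this kind is the participation ratio of the covariance eigenvalues. The sketch below (synthetic data with a planted low-dimensional latent trajectory; all parameters are illustrative assumptions) recovers a dimensionality far below the number of neurons:

```python
import numpy as np

rng = np.random.default_rng(4)
N, T, d = 100, 5000, 3                      # neurons, time points, latent dim
latent = rng.normal(size=(T, d))            # low-dimensional latent trajectory
mixing = rng.normal(size=(d, N))            # embedding into N-dim neural space
rates = latent @ mixing + 0.1 * rng.normal(size=(T, N))   # plus private noise

# Participation ratio: (sum of eigenvalues)^2 / sum of squared eigenvalues.
cov = np.cov(rates.T)
eig = np.linalg.eigvalsh(cov)
pr = eig.sum() ** 2 / (eig ** 2).sum()      # close to d, far below N
```

The participation ratio is one of several possible estimators; it counts dimensions weighted by the variance they carry, so it is robust to the many near-zero noise eigenvalues.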
As concepts and methods related to this topic are rapidly developing, this workshop cannot (and should not try to) provide a single polished view but will rather present complementary views about how to define effective spaces of neural activity and how to interpret the experimentally observed phenomena. We will do so from various angles and for different (sensory, motor, and cognitive) neuronal systems – and hope to trigger discussions amongst and between theorists and experimentalists.
Speakers:
- Bittner S.
- Rubin A.
- Lewallen S.
- Low R.
- Engel T.
- Machens C.
Resonance in neurons and neural networks: theoretical and experimental approaches
Organizers: Antonio Carlos Roque, Rodrigo Felipe de Oliveira Pena
Description:
Neurons can exhibit subthreshold voltage resonance to oscillatory input current: for specific input frequencies, the membrane voltage response displays an enhanced amplitude of oscillation. This can have functional implications for neuronal selectivity when resonant neurons are embedded in networks that support oscillatory states of different frequencies. The resonance properties of a neuron depend on its intrinsic passive and active characteristics, e.g. morphology and ionic current properties. Several theoretical and experimental studies are currently underway to unveil the roles of these characteristics in resonance and their impact at the network level.
The objective of this workshop is to gather theoreticians and experimentalists working on this subject to present and discuss their recent work. Topics will range from dynamical systems-based mathematical analysis through biophysical neuron modeling to neurophysiological experiments. The workshop will cover the role of specific ionic currents, how to determine resonance from physiologically measurable parameters, stochastic resonance, the relation between subthreshold resonance and spiking characteristics, bifurcation diagrams, and the influence of single-neuron resonance on network behavior.
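As a minimal example of subthreshold resonance (a linear membrane with one slow, adaptation-like current; all parameter values are illustrative assumptions, not from any talk), the impedance magnitude peaks at a nonzero frequency:

```python
import numpy as np

# Linearized membrane with a slow resonant current w:
#   C dV/dt = -gL*V - w + I(t),   tau_w dw/dt = gw*V - w
# which gives the impedance
#   Z(omega) = 1 / (gL + i*omega*C + gw / (1 + i*omega*tau_w)).
C, gL = 1.0, 0.1          # capacitance (nF) and leak conductance (uS)
gw, tau_w = 0.3, 50.0     # strength (uS) and time constant (ms) of slow current

f = np.linspace(0.1, 50, 2000)            # input frequency in Hz
omega = 2 * np.pi * f / 1000.0            # rad/ms, matching ms time constants
Z = 1.0 / (gL + 1j * omega * C + gw / (1 + 1j * omega * tau_w))
f_res = f[np.argmax(np.abs(Z))]           # resonance frequency: peak of |Z|
```

Without the slow current (`gw = 0`) the impedance is a monotonically decaying low-pass curve; the slow negative feedback suppresses the low-frequency response and creates the band-pass peak.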
Speakers:
- Pena R.F.O.
- Lindner B.
- Roth A.
- Canavier C.
- Rotstein H.G.
- Nadim F.
- Palmigiano A.
Deciphering neural circuits for innate behaviors: experimental and theoretical approaches in model organisms
Organizer: Marina Wosniack
Description:
Model organisms, such as fruit flies and nematode worms, have been essential to our understanding of nervous system function. Their relatively small and simple nervous systems have allowed us to identify neurons and circuits across animals. Additionally, a wealth of genetic tools is available to label specific sets of neurons and manipulate them in intricate ways. In combination with cutting-edge imaging and electrophysiology techniques, we can examine the computations performed by individual neurons and neuronal populations. It is thus no surprise that work on model organisms has yielded milestone discoveries in nervous system development and function.
Understanding the neural circuit basis of behavior is a challenging goal in modern neuroscience. Such a complex problem requires a multidisciplinary approach involving genetics, molecular biology, optics, ethology, neurobiology, and mathematical modeling. This strategy is most efficient in model organisms, which produce complex motor behaviors while sophisticated imaging techniques can now record neuronal activity and individual behavior simultaneously.
Speakers:
- Berni J.
- Jayaraman V.
- Hermundstad, A.
- Straw A.
- Wosniack M.
- Silies M.
Neuronal Intelligence: Narrowing the gap between neuroscience and AI
Organizers: Fabian Sinz, Matthias Bethge
Description:
While advances in deep learning methods have enabled impressive strides in artificial intelligence (AI) and changed our everyday lives, these methods still lack fundamental features of biological intelligence: robustness and generalization beyond the immediate data they were trained on. Current models in AI derive their power from the ability to fit almost any arbitrary function. However, the capability for universal approximation is as much a blessing as it is a curse, since it is hard to control the behavior of such models outside the domain of training examples they were exposed to. In stark contrast, most vertebrate brains operate well under extreme changes in signal reliability (e.g., night vs. day) and statistics (e.g., rainforest vs. desert). What are the implicit assumptions and computational principles that the brain uses to achieve this level of robustness?
Speakers:
- Sinz, F.
- Reimer, J.
- Ecker, A.
- Funke, C.
- Vasudeva Raju R.
- Tolias, A.