Bridging RNNs and data: Hypothesis-testing of network dynamics against neural recordings

Organizers

Isabel M. Cornacchia | University of Edinburgh, UK
Arthur Pellegrino | University of Edinburgh, UK

Abstract

Recurrent neural networks (RNNs) have become an increasingly popular tool for modelling the neural computations underlying behavioural tasks. By varying the task they are trained on, their architecture, or their learning rule, a myriad of different models can be created, each corresponding to specific assumptions about the neural circuit solving the task. It is, however, not always clear how to test the hypotheses generated by different RNNs against experimental data. Recently, dimensionality reduction methods have emerged as candidate tools for comparing the neural activity of network models to large-scale neural recordings, opening a window onto bridging artificial and biological network dynamics. This workshop aims to foster discussion between theorists working on neural network models and experimentalists working on neural population recordings, taking a step towards systematic hypothesis-testing of the neural computations performed by RNN models against neural data.

Schedule (CEST)

Sunday, Sep 29

14:00

Opening remarks

14:15

Mark Churchland | Columbia University, USA
From Spikes to Factors: Understanding Large-Scale Neural Computations

14:45

Harsha Gurnani | University of Washington, USA
Feedback control of recurrent dynamics constrains motor adaptation

15:15

Srdjan Ostojic | École Normale Supérieure, France
Generalisation through neural dynamics on non-linear manifolds

16:00

Coffee break

16:30

Juan Gallego | Imperial College London, UK
Planning under uncertainty uncovers a functional gradient in frontal cortex

17:00

Arthur Pellegrino and Isabel Cornacchia | University of Edinburgh, UK
Probing the relationship between the curved manifolds of RNNs and neural data

17:30

Panel discussion

18:30

End of first day

Monday, Sep 30

08:30

Kaushik Lakshminarasimhan | Columbia University, USA
Specific connectivity optimizes learning in thalamocortical loops

09:00

Laura Driscoll | Allen Institute, USA
Attractor reuse in artificial and biological networks

09:30

Valerio Mante | UZH and ETH Zurich, Switzerland
Modular and distributed multi-area computations of decisions and actions

10:00

Coffee break

10:30

Alex Cayco-Gajic | École Normale Supérieure, France
Distributed learning across regions with feedforward-recurrent neural networks

11:00

Tatiana Engel | Princeton University, USA
The dynamics and geometry of choice in premotor cortex

11:30

Panel discussion

12:30

End