Biologically plausible learning in artificial neural networks
Organizers
Mattia Della Vecchia | Ecole Normale Supérieure, France
Leonardo Agueci | Ecole Normale Supérieure, France
Abstract
Artificial neural networks have been a crucial tool in neuroscience for understanding brain function, and major advances in artificial intelligence have produced methods that excel at a wide range of specific tasks, sometimes surpassing animal performance. Yet these systems lack characteristic elements of animal intelligence, such as adaptability to new situations, transfer of knowledge across different tasks, or generalization from limited observations. At the same time, our understanding of how the brain achieves these capabilities is still limited, which leaves unclear what direction should be taken moving forward. Some researchers argue that it is important to enrich artificial neural networks with biological components in order to study the underlying processes in the brain, while others are skeptical, highlighting the differences between the two. The debate revolves around questions such as: what levels of biological abstraction are feasible, why biological plausibility should be taken into consideration, and which biological elements are the most important to include.
The goal of the workshop is to bring together experts who work on expanding our understanding of neural computations and learning mechanisms, in order to foster a discussion on how biological constraints in computational models can provide neuroscientists with insights into brain function, and how these insights could influence future developments of artificial neural networks. The workshop will be divided into two mini-sessions, each followed by a panel discussion, whose themes will be:
- neural circuit models and plasticity mechanisms
- prediction errors and prediction error-related plasticity
- biologically plausible components in neural network modeling