Karel Svoboda
Allen Institute
October 30, 2024
VVTNS Fifth Season Opening Lecture
Illuminating synaptic learning
How do synapses in the middle of the brain know how to adjust their weights to advance a behavioral goal (i.e. learning)? This is referred to as the synaptic ‘credit assignment problem’. A large variety of synaptic learning rules have been proposed, mainly in the context of artificial neural networks. The most powerful learning rules (e.g. back-propagation of error) are thought to be biologically implausible, whereas the widely studied biological learning rules (Hebbian) are insufficient for goal-directed learning. I will describe ongoing work focused on understanding synaptic learning rules in the cortex in a brain-computer interface task.
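For a concrete sense of the gap the abstract describes, here is a minimal numpy sketch (illustrative only, not the speaker's model) contrasting a plain Hebbian update, which uses only locally available pre- and postsynaptic activity, with a reward-modulated variant in which the same local term is gated by a global scalar error signal that carries the behavioral goal.

# Illustrative sketch, not the speaker's model: plain Hebbian update versus
# a reward-modulated Hebbian update for a single linear layer.
import numpy as np

rng = np.random.default_rng(0)
n_pre, n_post, eta = 20, 5, 0.01
w = rng.normal(scale=0.1, size=(n_post, n_pre))

pre = rng.random(n_pre)                    # presynaptic rates
post = w @ pre                             # postsynaptic rates (linear neurons)

# Plain Hebbian rule: correlation of pre- and postsynaptic activity only;
# it knows nothing about whether the behavior succeeded.
dw_hebb = eta * np.outer(post, pre)

# Reward-modulated variant: the same local term gated by a global scalar
# reward-prediction error, one simple way to inject a behavioral goal.
reward, expected_reward = 1.0, 0.4
dw_rmod = eta * (reward - expected_reward) * np.outer(post, pre)

w += dw_rmod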
Hannah Choi
Georgia Tech
November 6, 2024
Unraveling information processing through functional networks
While anatomical connectivity changes slowly through synaptic learning, the functional connectivity of neurons changes rapidly with their ongoing activity and interactions. Functional networks of neurons and neural populations reflect how their interactions change with behaviors, stimulus types, and internal states. Information propagation across a network can therefore be analyzed through the varying topological properties of the functional networks. Our study investigates the functional networks of the visual cortex at both the single-cell and population levels. Our analyses of functional connectivity of single neurons, constructed from spiking activity in neural populations of the visual cortex, reveal local and global network structures shaped by stimulus complexity. In addition, we propose a new method for inferring functional interactions between neural populations that preserves biologically constrained anatomical connectivity and signs. Applying our method to 2-photon data from the mouse visual cortex, we uncover functional interactions between cell types and cortical layers, suggesting distinct pathways for processing expected and unexpected visual information.
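As a concrete and deliberately generic illustration of building a functional network from spiking activity, the sketch below bins spike counts, thresholds pairwise correlations into an adjacency matrix, and reads off a simple topological property; the sign- and anatomy-constrained inference method described in the talk is more involved.

# Generic sketch: functional network from binned spike counts via thresholded
# pairwise correlations; not the constrained inference method of the talk.
import numpy as np

rng = np.random.default_rng(1)
n_neurons, n_bins = 50, 1000
spike_counts = rng.poisson(lam=2.0, size=(n_neurons, n_bins))   # stand-in data

corr = np.corrcoef(spike_counts)            # pairwise Pearson correlations
np.fill_diagonal(corr, 0.0)

threshold = 0.1
adjacency = (np.abs(corr) > threshold).astype(int)   # functional edges

degrees = adjacency.sum(axis=1)             # a simple topological property
print("mean functional degree:", degrees.mean())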
Friedemann Zenke
University of Basel
November 13, 2024
Learning invariant representations through prediction
Discriminating distinct objects and concepts from sensory stimuli is essential for survival. Our brains perform this processing in deep sensory networks shaped through plasticity. However, our understanding of the underlying plasticity mechanisms remains rudimentary. I will introduce Latent Predictive Learning (LPL), a plasticity model prescribing a local learning rule that combines Hebbian elements with predictive plasticity. I will show that deep neural networks equipped with LPL develop disentangled object representations without supervision. The same rule accurately captures neuronal selectivity changes observed in the primate inferotemporal cortex in response to altered visual experience. Finally, our model generalizes to spiking neural networks and naturally accounts for several experimentally observed properties of synaptic plasticity, including metaplasticity and spike-timing-dependent plasticity (STDP). LPL thus constitutes a plausible normative theory of representation learning in the brain while making concrete testable predictions.
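The sketch below is only a caricature of the idea, with all details assumed rather than taken from the paper: a local update that combines a predictive (temporal slowness) term, pulling the current output toward the previous one, with a Hebbian variance term that keeps the representation from collapsing.

# Caricature of a predictive-plus-Hebbian local rule in the spirit of LPL;
# the specific terms and constants here are illustrative, not the published rule.
import numpy as np

rng = np.random.default_rng(2)
n_in, n_out, eta, lam = 30, 10, 1e-3, 1.0
w = rng.normal(scale=0.1, size=(n_out, n_in))

x_prev = rng.random(n_in)                        # input at time t-1
x_curr = x_prev + 0.05 * rng.normal(size=n_in)   # slowly varying input at time t
z_prev, z_curr = w @ x_prev, w @ x_curr

# Predictive term: make the current output close to (predictable from) the
# previous one, encouraging temporally stable features.
dw_pred = -np.outer(z_curr - z_prev, x_curr)

# Hebbian term: push outputs away from their mean (here, the two-sample mean
# stands in for a running average) so the representation keeps its variance.
z_mean = 0.5 * (z_curr + z_prev)
dw_hebb = np.outer(z_curr - z_mean, x_curr)

w += eta * (dw_pred + lam * dw_hebb)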
Chris Eliasmith
University of Waterloo
November 20, 2024
The algebra of cognition
In recent years, my lab and others have demonstrated the value of vector symbolic algebras (VSAs) for capturing a wide variety of neural and behavioural results. In this talk I discuss the surprising and compelling variety of tasks and styles of reasoning that are well-suited to descriptions using a specific VSA. These tasks include path integration, navigation, Bayesian reasoning, sampling, memorization, and logical inference. The resulting spiking neural network models capture various hippocampal cell types (grid, place, border, etc.), behavioural errors, and a variety of observed neural dynamics.
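For readers unfamiliar with VSAs, the sketch below shows the core operations of one common algebra, holographic reduced representations: binding role and filler vectors by circular convolution, bundling bindings by addition, and approximately unbinding by convolving with an involution. It is a generic illustration only; the talk may rely on a different VSA.

# Core operations of a holographic-reduced-representation style VSA (generic
# illustration, not necessarily the algebra used in the talk).
import numpy as np

rng = np.random.default_rng(3)
d = 1024

def random_vec():
    return rng.normal(scale=1.0 / np.sqrt(d), size=d)   # roughly unit norm

def bind(a, b):
    # Circular convolution, computed in the Fourier domain.
    return np.fft.irfft(np.fft.rfft(a) * np.fft.rfft(b), n=d)

def unbind(s, a):
    # Convolve with the involution of a, an approximate inverse under binding.
    a_inv = np.concatenate(([a[0]], a[1:][::-1]))
    return bind(s, a_inv)

color, shape = random_vec(), random_vec()
red, circle = random_vec(), random_vec()

memory = bind(color, red) + bind(shape, circle)   # bundle two role-filler pairs

guess = unbind(memory, color)                     # query the "color" role
print(np.dot(guess, red), np.dot(guess, circle))  # first similarity is larger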
Memming Park
Champalimaud Foundation
November 27, 2024
Back to the Continuous Attractor
Continuous attractors offer a unique class of solutions for storing continuous-valued variables in recurrent system states for indefinitely long time intervals. Unfortunately, continuous attractors suffer from severe structural instability in general: they are destroyed by most infinitesimal changes of the dynamical law that defines them. This fragility limits their utility, especially in biological systems, as their recurrent dynamics are subject to constant perturbations. We observe that the bifurcations from continuous attractors in theoretical neuroscience models display various structurally stable forms. Although their asymptotic behaviors to maintain memory are categorically distinct, their finite-time behaviors are similar. We build on persistent manifold theory to explain the commonalities between bifurcations from and approximations of continuous attractors. Fast-slow decomposition analysis uncovers the existence of a persistent slow manifold that survives the seemingly destructive bifurcation, relating the flow within the manifold to the size of the perturbation. Moreover, this allows the memory error of these approximations of continuous attractors to be bounded. Finally, we train recurrent neural networks on analog memory tasks to show that these systems appear as solutions and to assess their generalization capabilities. Therefore, we conclude that continuous attractors are functionally robust and remain useful as a universal analogy for understanding analog memory.
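A two-neuron linear example makes the fragility concrete: a line attractor stores an analog value indefinitely, but an infinitesimal change to the connectivity turns the line of fixed points into a slow decay. This is only a toy illustration of the phenomenon, not one of the models analyzed in the talk.

# Toy illustration of structural instability: a linear line attractor versus
# an infinitesimally perturbed copy whose stored value slowly decays.
import numpy as np

dt, T = 0.01, 2000
A_ideal = np.array([[-1.0, 1.0],
                    [ 0.0, 0.0]])     # eigenvalues -1 and 0: fixed points along x1 = x2
A_perturbed = A_ideal + np.array([[0.0, 0.0],
                                  [0.0, -0.01]])   # tiny leak removes the zero eigenvalue

def simulate(A, x0):
    x = np.array(x0, dtype=float)
    for _ in range(T):
        x = x + dt * (A @ x)
    return x

x0 = [0.2, 0.5]                        # the second component encodes the stored value
print(simulate(A_ideal, x0))           # settles onto the line; the value 0.5 is kept
print(simulate(A_perturbed, x0))       # the stored value slowly decays toward zero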
Laura Driscoll
Allen Institute
December 4, 2024
Task structure shapes underlying dynamical systems that implement computation
The talk will be in two parts. 1) First, I will present published work on multitasking artificial recurrent neural networks that revealed "dynamical motifs". Dynamical motifs are recurring patterns of network activity that implement specific computations through dynamics, such as attractors, decision boundaries, and rotations. Motifs are reused across tasks and reflect the modular subtask structure of commonly studied cognitive tasks. We believe this compositional structure to be a feature of most complex cognitive tasks. This work establishes dynamical motifs as a fundamental unit of compositional computation, intermediate between neuron and network. 2) Building on these insights, we are now investigating whether shared dynamical motifs can explain the effectiveness of curriculum learning in animal behavior. Also known as animal shaping, curriculum learning is a training approach in which complex tasks are learned in stages through a series of subtasks. Our work provides a novel framework for the analysis of neural and behavioral data and has the potential to guide the design of optimal training protocols for both artificial systems and animals.
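A standard way to expose such motifs in a trained network is to search for approximate fixed points of the recurrent dynamics and classify them by their local linearization; the sketch below shows the basic numerical recipe on a random, untrained vanilla RNN, purely as an assumed illustration.

# Basic fixed-point search for exposing dynamical motifs in an RNN
# (Sussillo & Barak style); weights are random placeholders, not a trained
# multitask network.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
n = 50
W = rng.normal(scale=1.0 / np.sqrt(n), size=(n, n))
b = rng.normal(scale=0.1, size=n)

def velocity(x):
    # Continuous-time vanilla RNN: dx/dt = -x + tanh(W x + b)
    return -x + np.tanh(W @ x + b)

def speed(x):
    v = velocity(x)
    return 0.5 * np.dot(v, v)

# Seed the search from a random state; in practice one seeds many searches from
# states visited during task trials and inspects the Jacobian at each solution
# to classify the motif (attractor, saddle, rotation, decision boundary, ...).
result = minimize(speed, rng.normal(size=n), method="L-BFGS-B")
print("residual speed at candidate fixed point:", result.fun)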
Noam Sadon-Grosman
ELSC, The Hebrew University of Jerusalem
December 11, 2024
Somatomotor to Higher Order Cognition: The Detailed Organization of the Human Cerebellum
The cerebellum has long been known for its somatomotor functions. Recent evidence has converged to suggest that major portions of the human cerebellum are linked to cognitive and affective functions. In this talk, I will present new insights into the functional organization of the human cerebellum. Our findings reveal three distinct somatomotor representations, including a newly identified third map that is spatially dissociated from the two well-established body representations. Between these body representations, a large megacluster extending across Crus I/II was consistently found, with subregions linked to higher-order cerebral association networks. Within this megacluster, specific regions responded to domain-flexible cognitive control, while juxtaposed regions differentially responded to language, social, and spatial/episodic task demands. Similarly organized clusters also exist in the caudate, consistent with the presence of multiple basal ganglia–cerebellar–cerebral cortical circuits that maintain functional specialization across their entire distributed extents.
Claire Meissner-Bernard
Friedrich Miescher Institute for Biomedical Research, Basel
December 18, 2024
Properties of memory networks with excitatory-inhibitory assemblies
Classical views suggest that memories are stored in assemblies of excitatory neurons that become strongly interconnected during learning. However, recent experimental and theoretical results have challenged this view, leading to the hypothesis that memories are encoded in assemblies containing both excitatory (E) and inhibitory (I) neurons. Understanding the effects of these E-I assemblies on memory function is therefore essential. Using a biologically constrained model of an olfactory memory network, I will first describe how introducing E-I assemblies reorganizes odor-evoked activity patterns in neural state space. Indeed, the “geometry” of neural activity provides valuable insights into the computational properties of neural networks. I will then describe the behavior of networks with E-I assemblies upon partial manipulation of inhibitory neurons. Finally, I will discuss recent experimental data supporting predictions of the model.
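As a rough picture of what an E-I assembly means in network terms, the sketch below wires a small rate model in which assembly E cells excite each other and a matched group of I cells, which in turn inhibit the same E cells; all sizes and weights are assumed for illustration and do not reproduce the biologically constrained olfactory model of the talk.

# Rough sketch of a co-tuned excitatory-inhibitory assembly in a rate network;
# all parameters are illustrative, not those of the talk's constrained model.
import numpy as np

n_e, n_i, dt, T = 20, 5, 0.01, 1000
W = np.zeros((n_e + n_i, n_e + n_i))
W[:n_e, :n_e] = 0.08          # E -> E within the assembly (excitatory)
W[n_e:, :n_e] = 0.15          # assembly E -> assembly I (excitatory)
W[:n_e, n_e:] = -0.30         # assembly I -> assembly E (inhibitory)
np.fill_diagonal(W, 0.0)

def relu(x):
    return np.maximum(x, 0.0)

inp = np.zeros(n_e + n_i)
inp[: n_e // 2] = 1.0          # cue only half of the assembly's E cells

r = np.zeros(n_e + n_i)
for _ in range(T):
    r = r + dt * (-r + relu(W @ r + inp))

print("cued E rates:    ", r[: n_e // 2].mean())
print("uncued E rates:  ", r[n_e // 2 : n_e].mean())
print("assembly I rates:", r[n_e:].mean())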
December 25, 2024
Merry Christmas
January 1, 2025
Happy New Year