
On Wednesdays at 11 am ET

 

Organized by David Hansel, Ran Darshan & Carl van Vreeswijk (1962-2022)


About Us

About the Seminar

VVTNS is a weekly digital seminar, held on Zoom, for the theoretical neuroscience community. Created as the World Wide Theoretical Neuroscience Seminar (WWTNS) in November 2020 and renamed in memory of Carl van Vreeswijk after his passing (April 20, 2022), it aims to be a platform for exchanging ideas among theoreticians. Speakers have the opportunity to discuss theoretical aspects of their work that cannot be covered in a setting where the majority of the audience consists of experimentalists. Seminars are 45 minutes long, followed by a discussion, and are held on Wednesdays at 11 am ET. Talks are recorded with the speaker's authorization and are available to everyone on our YouTube channel.

 

To participate in the seminar, you need to fill out a registration form, after which you will receive an email explaining how to connect.


  • Twitter
  • YouTube

Lior Fox

Gatsby Computational Neuroscience Unit

March 4, 2026

Unsupervised representation learning by amortised neural message-passing

Useful internal representations should explain the patterns of regularities and dependencies among observations. Probabilistic graphical models promise a principled way to uncover such latent factors, but they are hard to scale to high-dimensional sensory observations and complicated dependency structures. Neural networks, on the other hand, excel at approximating complicated high-dimensional functions, but their internal representations do not easily lend themselves to a probabilistic interpretation. Despite some successes, a general, unified way of integrating the two approaches is still missing. I will describe a novel approach to merging adaptive neural-network components into a probabilistic framework, based on three core ideas. The first is to train a set of networks to collectively perform inference, leveraging the ability of pattern-recognition methods to amortise complicated transformations. The second is to constrain the way in which the outputs of these networks are interpreted, transformed, and combined. These constraints, together with the learning objective itself, are derived directly from probabilistic considerations encoded in a graphical model. Finally, the third core idea is that of recognition-parametrisation, which allows the inference ("recognition") procedure to directly define the model itself, without requiring an explicit "generative" decoder.
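For readers unfamiliar with amortised inference, the sketch below is a minimal, generic toy illustration (a hypothetical example, not the speaker's method and not the decoder-free recognition-parametrised approach of the talk): a single recognition network maps each observation to the parameters of an approximate Gaussian posterior over a latent variable and is trained on an evidence lower bound for a fixed linear-Gaussian model, so that inference for any new observation becomes a single forward pass. All names and dimensions are illustrative assumptions.

```python
# Minimal, hypothetical sketch of amortised inference (not the speaker's method):
# one recognition network is shared across all observations.
import torch
import torch.nn as nn

latent_dim, obs_dim = 2, 10

# Fixed linear-Gaussian model: p(z) = N(0, I), p(x | z) = N(W z, I)
W = torch.randn(obs_dim, latent_dim)

# Recognition network: maps x to the mean and log-variance of q(z | x)
recog = nn.Sequential(
    nn.Linear(obs_dim, 64), nn.Tanh(),
    nn.Linear(64, 2 * latent_dim),
)
opt = torch.optim.Adam(recog.parameters(), lr=1e-3)

def elbo(x):
    stats = recog(x)
    mu, logvar = stats[..., :latent_dim], stats[..., latent_dim:]
    z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)    # reparameterised sample
    log_lik = -0.5 * ((x - z @ W.T) ** 2).sum(-1)              # log p(x | z), up to a constant
    kl = 0.5 * (mu**2 + logvar.exp() - logvar - 1).sum(-1)     # KL[q(z|x) || N(0, I)]
    return (log_lik - kl).mean()

for step in range(1000):
    x = torch.randn(128, latent_dim) @ W.T + torch.randn(128, obs_dim)  # synthetic data
    loss = -elbo(x)
    opt.zero_grad(); loss.backward(); opt.step()
```

After training, the recognition network performs approximate posterior inference for any new observation without per-example optimisation; this is the sense in which inference is "amortised" across the data.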

Organizers


David Hansel

I am a theoretical neuroscientist at the National Center for Scientific Research in Paris, France, and a visiting professor at The Hebrew University in Jerusalem, Israel. I am mainly interested in recurrent dynamics in the cortex and basal ganglia.

Carl van Vreeswijk *

I am a theoretical neuroscientist working at the National Center for Scientific Research in Paris, France. My main interest is the dynamics of recurrent networks of neurons in the sensory system.

*deceased

Ran Darshan

I am a theoretical neuroscientist working at the Faculty of Medicine, the Sagol School of Neuroscience and the School of Physics and Astronomy at Tel Aviv University, Israel. I am interested in learning and dynamics of neural networks. My main goal is to achieve a mechanistic understanding of brain function.


©2020 by WWTNS
