
Raoul-Martin Memmesheimer 

University of Bonn

June 29, 2022

Drifting assemblies for persistent memory:

Neuron transitions and unsupervised compensation

Change is ubiquitous in living beings. In particular, the connectome and neural representations can change. Nevertheless, behaviors and memories often persist over long times. In a standard model, associative memories are represented by assemblies of strongly interconnected neurons. For faithful storage these assemblies are assumed to consist of the same neurons over time. We propose a contrasting memory model with complete temporal remodeling of assemblies, based on experimentally observed changes of synapses and neural representations. The assemblies drift freely as noisy autonomous network activity or spontaneous synaptic turnover induce neuron exchange. The exchange can be described analytically by reduced, random walk models derived from spiking neural network dynamics or from first principles. The gradual exchange allows activity-dependent and homeostatic plasticity to conserve the representational structure and keep inputs, outputs and assemblies consistent. This leads to persistent memory. Our findings explain recent experimental results on the temporal evolution of fear memory representations and suggest that memory systems need to be understood as a whole, as individual parts may constantly change.
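
A minimal illustration of the kind of gradual neuron exchange described above (a toy swap rule with invented parameters, not the reduced random-walk models derived in the talk):

```python
# Toy drift of an assembly: each session a small fraction of members is
# swapped for outside neurons, so overlap with the initial assembly decays
# gradually while assembly size stays fixed (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
n_neurons, assembly_size = 1000, 100
n_sessions, p_swap = 50, 0.05            # per-session replacement probability

assembly = set(rng.choice(n_neurons, assembly_size, replace=False))
initial = assembly.copy()
overlap = []

for s in range(n_sessions):
    leavers = {i for i in assembly if rng.random() < p_swap}
    outside = list(set(range(n_neurons)) - assembly)
    joiners = rng.choice(outside, len(leavers), replace=False)
    assembly = (assembly - leavers) | set(joiners)
    overlap.append(len(assembly & initial) / assembly_size)

print([round(o, 2) for o in overlap[::10]])   # roughly exponential decay of overlap
```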


Maté Lengyel

University of Cambridge

June 22, 2022

Optimal information loading into working memory

in prefrontal cortex

Working memory involves the short-term maintenance of information and is critical in many tasks. The neural circuit dynamics underlying working memory remain poorly understood, with different aspects of prefrontal cortical (PFC) responses explained by different putative mechanisms. Using mathematical analysis, numerical simulations, and recordings from monkey PFC, we investigate a critical but hitherto ignored aspect of working memory dynamics: information loading. We find that, contrary to common assumptions, optimal information loading involves inputs that are largely orthogonal, rather than similar, to the persistent activities observed during memory maintenance. Using a novel, theoretically principled metric, we show that PFC exhibits the hallmarks of optimal information loading, and we find that such dynamics emerge naturally as a dynamical strategy in task-optimized recurrent neural networks. Our theory unifies previous, seemingly conflicting theories of memory maintenance based on attractor or purely sequential dynamics, and reveals a normative principle underlying the widely observed phenomenon of dynamic coding in PFC.

Work led by Jake Stroud, in collaboration with Kei Watanabe, Takafumi Suzuki, and Mark G. Stokes


Rava Azeredo da Silveira

CNRS, Paris

June 15, 2022

  • YouTube

Efficient Random Codes in a Shallow Neural Network

Efficient coding has served as a guiding principle in understanding the neural code. To date, however, it has been explored mainly in the context of peripheral sensory cells with simple tuning curves. By contrast, ‘deeper’ neurons such as grid cells come with more complex tuning properties which imply a different, yet highly efficient, strategy for representing information. I will show that a highly efficient code is not specific to a population of neurons with finely tuned response properties: it emerges robustly in a shallow network with random synapses. Here, the geometry of population responses implies that optimality arises from a tradeoff between two qualitatively different types of error: ‘local’ errors (common to classical neural population codes) and ‘global’ (or ‘catastrophic’) errors. This tradeoff leads to efficient compression of information from a high-dimensional representation to a low-dimensional one. After describing the theoretical framework, I will use it to re-interpret recordings of motor cortex in behaving monkeys. Our framework addresses the encoding of (sensory) information; if time allows, I will comment on ongoing work that focuses on decoding from the perspective of efficient coding.


Nestor Parga

Univ. Autonoma de Madrid

June 8, 2022

Not recorded

An investigation of perceptual biases in spiking recurrent neural networks trained to discriminate time intervals

Magnitude estimation and stimulus discrimination tasks are affected by perceptual biases that cause the stimulus parameter to be perceived as shifted toward the mean of its distribution. These biases have been extensively studied in psychophysics and, more recently and to a lesser extent, with neural activity recordings. New computational techniques allow us to train spiking recurrent neural networks on the tasks used in the experiments. This provides us with another valuable tool with which to investigate the network mechanisms responsible for the biases and how behavior could be modeled. As an example, in this talk I will consider networks trained to discriminate the durations of temporal intervals. The trained networks exhibited the contraction bias, even though they were trained with a stimulus sequence without temporal correlations. The neural activity during the delay period carried information about the stimuli of the current and previous trials, one of the mechanisms that gave rise to the contraction bias. The population activity described trajectories in a low-dimensional space, and their relative locations depended on the prior distribution. The results can be modeled as an ideal observer that during the delay period sees a combination of the current and the previous stimuli. Finally, I will describe how the neural trajectories in state space encode an estimate of the interval duration. The approach could be applied to other cognitive tasks.
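
For orientation, a toy version of the ideal-observer account sketched at the end of the abstract, with invented numbers: the remembered duration is a weighted mix of the noisy current stimulus and the mean of past stimuli, so short intervals are overestimated and long ones underestimated.

```python
# Toy contraction-bias illustration (illustrative values, not the fitted
# model from the talk): noisy memory of the current interval is combined
# with the mean of the stimulus distribution.
import numpy as np

rng = np.random.default_rng(1)
durations = rng.uniform(200, 800, 5000)      # presented intervals (ms)
sigma_mem = 100.0                            # memory noise (ms)
w = 0.7                                      # weight on the current, noisy observation

prior_mean = durations.mean()
noisy = durations + rng.normal(0, sigma_mem, durations.size)
estimate = w * noisy + (1 - w) * prior_mean  # shrinkage toward the mean

short = durations < 400
print("bias for short intervals: %+.1f ms" % (estimate[short] - durations[short]).mean())
print("bias for long  intervals: %+.1f ms" % (estimate[~short] - durations[~short]).mean())
# short intervals are overestimated, long ones underestimated: contraction bias
```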


Abigail Morrison

Jülich Research Center

& RWTH Aachen

June 1, 2022

  • YouTube

Heterogeneity and non-random connectivity in reservoir computing 


Reservoir computing is a promising framework to study cortical computation, as it is based on continuous, online processing and the requirements and operating principles are compatible with cortical circuit dynamics. However, the framework has issues that limit its scope as a generic model for cortical processing. The most obvious of these is that, in traditional models, learning is restricted to the output projections and takes place in a fully supervised manner. If such an output layer is interpreted at face value as downstream computation, this is biologically questionable. If it is interpreted merely as a demonstration that the network can accurately represent the information, this immediately raises the question of what would be biologically plausible mechanisms for transmitting the information represented by a reservoir and incorporating it in downstream computations. Another major issue is that we have as yet only modest insight into how the structural and dynamical features of a network influence its computational capacity, which is necessary not only for gaining an understanding of those features in biological brains, but also for exploiting reservoir computing as a neuromorphic application. In this talk, I will first demonstrate a method for quantifying the representational capacity of reservoirs without training them on tasks. Based on this technique, which allows systematic comparison of systems, I then present our recent work towards understanding the roles of heterogeneity and connectivity patterns in enhancing both the computational properties of a network and its ability to reliably transmit to downstream networks. Finally, I will give a brief taster of our current efforts to apply the reservoir computing framework to magnetic systems as an approach to neuromorphic computing.
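
For reference, a minimal echo-state reservoir with a supervised, readout-only linear decoder, i.e., the traditional setup whose biological plausibility the talk questions (all parameters are illustrative):

```python
# Minimal echo-state reservoir: fixed random recurrent weights, learning
# restricted to a least-squares readout trained on a short memory task.
import numpy as np

rng = np.random.default_rng(2)
N, T = 300, 2000
u = rng.uniform(-1, 1, T)                        # scalar input stream
W = rng.normal(0, 1 / np.sqrt(N), (N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius below 1
w_in = rng.normal(0, 1, N)

x = np.zeros(N)
states = np.zeros((T, N))
for t in range(T):
    x = np.tanh(W @ x + w_in * u[t])
    states[t] = x

target = np.roll(u, 3)                           # task: recall the input from 3 steps ago
X, y = states[10:], target[10:]                  # drop the initial (wrapped) samples
w_out = np.linalg.lstsq(X, y, rcond=None)[0]     # supervised readout, nothing else is trained
print("readout correlation:", np.corrcoef(X @ w_out, y)[0, 1])
```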


Jonathan Rubin

University of Pittsburgh

May 25, 2022

  • YouTube

Learning in/about/from the basal ganglia

The basal ganglia are a collection of brain areas that are connected by a variety of synaptic pathways and are a site of significant reward-related dopamine release. These properties suggest a possible role for the basal ganglia in action selection, guided by reinforcement learning. In this talk, I will discuss a framework for how this function might be performed and computational results using an upward mapping to identify putative low-dimensional control ensembles that may be involved in tuning decision policy. I will also present some recent experimental results and theory, related to effects of extracellular ion dynamics, that run counter to the classical view of basal ganglia pathways and suggest a new interpretation of certain aspects of this framework. For those not so interested in the basal ganglia, I hope that the upward mapping approach and impact of extracellular ion dynamics will nonetheless be of interest!

Yulia Timofeeva

University of Warwick

May 18, 2022

  • YouTube

Computational modelling of neurotransmitter release

Synaptic transmission provides the basis for neuronal communication. When an action potential propagates through the axonal arbour, it activates voltage-gated Ca2+ channels located in the vicinity of release-ready synaptic vesicles docked at the presynaptic active zone. Ca2+ ions enter the presynaptic terminal and activate the vesicular Ca2+ sensor, thereby triggering neurotransmitter release. This whole process occurs on a timescale of a few milliseconds. In addition to fast, synchronous release, which keeps pace with action potentials, many synapses also exhibit delayed asynchronous release that persists for tens to hundreds of milliseconds. In this talk I will demonstrate how experimentally constrained computational modelling of underlying biological processes can complement laboratory studies (using electrophysiology and imaging techniques) and provide insights into the mechanisms of synaptic transmission.


Albert Compte

IDIBAPS

Barcelona

May 11, 2022

  • YouTube

Neural circuits of visuospatial working memory

One elementary brain function that underlies many of our cognitive behaviors is the ability to maintain parametric information briefly in mind, on the time scale of seconds, to span delays between sensory information and actions. This component of working memory is fragile and quickly degrades with delay length. Under the assumption that behavioral delay-dependencies mark core functions of the working memory system, our goal is to find a neural circuit model that represents their neural mechanisms and apply it to research on working memory deficits in neuropsychiatric disorders. We have constrained computational models of spatial working memory with delay-dependent behavioral effects and with neural recordings in the prefrontal cortex during visuospatial working memory. I will show that a simple bump attractor model with weak inhomogeneities and short-term plasticity mechanisms can link neural data with fine-grained behavioral output on a trial-by-trial basis and account for the main delay-dependent limitations of working memory: precision, cardinal repulsion biases, and serial dependence. I will finally present data from participants with neuropsychiatric disorders that suggest that serial dependence in working memory is specifically altered, and I will use the model to infer the possible neural mechanisms affected.
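
A generic ring (bump) attractor rate model of the kind referred to above, reduced to its bare bones (illustrative parameters; the talk's model additionally includes inhomogeneities and short-term plasticity):

```python
# Bare-bones bump attractor: a transient cue creates a localized bump of
# activity that persists through the delay and encodes the cued location.
import numpy as np

N = 180
theta = np.linspace(-np.pi, np.pi, N, endpoint=False)
J0, J1, r_max = -2.0, 8.0, 40.0                       # uniform inhibition, tuned excitation, rate ceiling
W = (J0 + J1 * np.cos(theta[:, None] - theta[None, :])) / N

dt, tau = 1.0, 20.0                                   # ms
r = np.zeros(N)
cue = 5.0 * np.exp(-theta ** 2 / (2 * 0.3 ** 2))      # cue centered at 0 degrees

for t in np.arange(0.0, 3000.0, dt):
    inp = cue if t < 500 else 0.0                     # 0.5 s cue, then a 2.5 s delay
    drive = W @ r + inp
    r += dt / tau * (-r + np.clip(drive, 0.0, r_max))

decoded = np.degrees(np.angle(np.sum(r * np.exp(1j * theta))))
print("peak rate %.1f Hz, decoded cue position %.1f deg" % (r.max(), decoded))
```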


Anna Levina

Universität Tübingen

May 4, 2022

  • YouTube

Timescales of neural activity: their inference, control,

and relevance.

Timescales characterize how fast observables change in time. In neuroscience, they can be estimated from the measured activity and can be used, for example, as a signature of the memory trace in the network. I will first discuss the inference of timescales from neuroscience data consisting of short trials and introduce a new unbiased method. Then, I will apply the method to data recorded from a local population of cortical neurons in visual area V4. I will demonstrate that the ongoing spiking activity unfolds across at least two distinct timescales, fast and slow, and that the slow timescale increases when monkeys attend to the location of the receptive field. Which models can give rise to such behavior? Random balanced networks are known for their fast timescales; thus, a change in the neurons or network properties is required to mimic the data. I will propose a set of models that can control effective timescales and demonstrate that only the model with strong recurrent interactions fits the neural data. Finally, I will discuss the timescales' relevance for behavior and cortical computations.
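
For context, the conventional estimate that motivates an unbiased method: fit an exponential to the autocorrelation computed from short, mean-subtracted trials of a process with a known timescale; the direct fit is systematically biased (a toy example, not the method introduced in the talk):

```python
# Naive timescale estimation from short trials of an Ornstein-Uhlenbeck
# process: per-trial mean subtraction and short windows bias the estimate.
import numpy as np

rng = np.random.default_rng(3)
tau_true, dt, T, n_trials = 100.0, 1.0, 500, 200      # ms; trials short relative to tau

def ou_trial():
    x = np.zeros(T)
    for t in range(1, T):
        x[t] = x[t - 1] - dt / tau_true * x[t - 1] + np.sqrt(2 * dt / tau_true) * rng.normal()
    return x

lags = np.arange(1, 100)
ac = np.zeros(lags.size)
for _ in range(n_trials):
    x = ou_trial()
    x = x - x.mean()                                   # per-trial mean subtraction (a bias source)
    ac += np.array([np.mean(x[:-l] * x[l:]) for l in lags])
ac /= n_trials
ac /= ac[0]

pos = ac > 0                                           # exponential fit on early positive lags
slope = np.polyfit(lags[pos][:50], np.log(ac[pos][:50]), 1)[0]
print("true tau: %.0f ms, naive estimate: %.0f ms" % (tau_true, -1.0 / slope))
```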


Yashar Ahmadian 

Cambridge, UK

April 27, 2022

  • YouTube

The balance of excitation and inhibition

and a canonical cortical computation

Excitatory and inhibitory (E & I) inputs to cortical neurons remain balanced across different conditions. The balanced network model provides a self-consistent account of this observation: population rates dynamically adjust to yield a state in which all neurons are active at biological levels, with their E & I inputs tightly balanced. But global tight E/I balance predicts population responses with linear stimulus-dependence and does not account for systematic cortical response nonlinearities such as divisive normalization, a canonical brain computation. However, when necessary connectivity conditions for global balance fail, states arise in which only a localized subset of neurons are active and have balanced inputs. We analytically show that in networks of neurons with different stimulus selectivities, the emergence of such localized balance states robustly leads to normalization, including sublinear integration and winner-take-all behavior. An alternative model that exhibits normalization is the Stabilized Supralinear Network (SSN), which predicts a regime of loose, rather than tight, E/I balance. However, an understanding of the causal relationship between E/I balance and normalization in SSN and conditions under which SSN yields significant sublinear integration are lacking. For weak inputs, SSN integrates inputs supralinearly, while for very strong inputs it approaches a regime of tight balance. We show that when this latter regime is globally balanced, SSN cannot exhibit strong normalization for any input strength; thus, in SSN too, significant normalization requires localized balance. In summary, we causally and quantitatively connect a fundamental feature of cortical dynamics with a canonical brain computation. Time allowing I will also cover our work extending a normative theoretical account of normalization which explains it as an example of efficient coding of natural stimuli. We show that when biological noise is accounted for, this theory makes the same prediction as the SSN: a transition to supralinear integration for weak stimuli.
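
A two-population stabilized supralinear network (SSN) in its textbook form, to make the supralinear-to-balanced transition mentioned above concrete (illustrative parameters, not those analyzed in the talk); in the full model, normalization appears roughly as sublinear summation of multiple stimuli:

```python
# Textbook two-unit SSN: power-law transfer function with E and I populations.
# Responses grow supralinearly for weak inputs and approach linear, loosely
# balanced growth for strong inputs (parameters chosen only for illustration).
import numpy as np

k, n = 0.04, 2.0                                  # r = k [z]_+^n
W = np.array([[1.0, -0.8],                        # [[EE, EI],
              [1.5, -1.0]])                       #  [IE, II]]
tau = np.array([20.0, 10.0])                      # ms
dt = 0.05

def steady_rate(h):
    r = np.zeros(2)
    for _ in range(100000):                       # 5 s of simulated time
        z = W @ r + h
        r += dt / tau * (-r + k * np.maximum(z, 0.0) ** n)
    return r

for c in [1.0, 2.0, 4.0, 8.0, 16.0, 32.0, 64.0, 128.0]:
    rE, rI = steady_rate(np.array([c, c]))
    print("input %6.1f  ->  rE %8.3f   rI %8.3f" % (c, rE, rI))
# rE(2c)/rE(c) is ~4 at weak input (supralinear) and decreases toward 2 at strong input
```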

April 20, 2022

Seminar Cancelled


Homage to Carl van Vreeswijk (1962-2022)

April 27, 2022

Carl van Vreeswijk passed away on the 13th of April, 2022, at his home in Paris, as he was getting ready to go to the lab for the WWTNS of the week.

 

Carl was an exceptionally gifted theoretical neuroscientist. With his deep understanding of theoretical tools, his curiosity and collaborations with experimentalists, he introduced and developed many pioneering concepts and techniques which have shaped our current understanding of recurrent neuronal networks and cortical dynamics.

 

Beyond his prolific scientific wisdom and creativity, Carl was an inspiration to many, a generous and beloved friend and collaborator, and an extraordinarily caring mentor. He was keen on teaching. He spent ample time bringing his exceptional knowledge to many young researchers at summer schools, workshops, and conferences.

 

The World Wide Theoretical Neuroscience Seminar (WWTNS) series, which Carl and David Hansel founded as a space where theoreticians can present their work in depth, including equations and mathematical tools, will be renamed the "van Vreeswijk Theoretical Neuroscience Seminar" (VVTNS) series to honour his memory.

  • YouTube

Horacio Rotstein

New Jersey Institute of Technology

April 13, 2022

  • YouTube

Network resonance: a framework for dissecting feedback and frequency filtering mechanisms in neuronal systems

Resonance is defined as a maximal amplification of the response of a system to periodic inputs in a limited, intermediate input frequency band. Resonance may serve to optimize inter-neuronal communication, and has been observed at multiple levels of neuronal organization including membrane potential fluctuations, single neuron spiking, postsynaptic potentials, and neuronal networks. However, it is unknown how resonance observed at one level of neuronal organization (e.g., network) depends on the properties of the constituting building blocks, and whether, and if so how, it affects the resonant and oscillatory properties upstream. One difficulty is the absence of a conceptual framework that facilitates the interrogation of resonant neuronal circuits and organizes the mechanistic investigation of network resonance in terms of the circuit components, across levels of organization. We address these issues by discussing a number of representative case studies. The dynamic mechanisms responsible for the generation of resonance involve disparate processes, including negative feedback effects, history-dependence, spiking discretization combined with subthreshold passive dynamics, combinations of these, and resonance inheritance from lower levels of organization. The band-pass filters associated with the observed resonances are generated by primarily nonlinear interactions of low- and high-pass filters. We identify these filters (and interactions) and we argue that these are the constitutive building blocks of a resonance framework. Finally, we discuss alternative frameworks and we show that different types of models (e.g., spiking neural networks and rate models) can show the same type of resonance by qualitatively different mechanisms.
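
A generic linearized membrane with one slow restorative current, showing the band-pass impedance that defines subthreshold resonance (an illustration of the definition only, not a model from the talk):

```python
# Band-pass impedance of a linearized membrane: leak + capacitance + one slow
# restorative current give |Z(f)| a peak at a nonzero frequency (resonance).
import numpy as np

C, gL = 1.0, 0.1            # uF/cm^2, mS/cm^2
g1, tau1 = 0.3, 100.0       # slow restorative current: conductance, time constant (ms)

f = np.linspace(0.1, 30, 300)                    # Hz
w = 2 * np.pi * f / 1000.0                       # rad/ms
Z = 1.0 / (gL + 1j * w * C + g1 / (1.0 + 1j * w * tau1))

print("resonance frequency ~ %.1f Hz" % f[np.argmax(np.abs(Z))])
```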


Rodica Curtu

University of Iowa

April 6, 2022

Not recorded

Unravelling bistable perception from human intracranial recordings

Discovering dynamical patterns from high-fidelity time series is typically a challenging task. In this talk, the time series data consist of neural recordings taken from the auditory cortex of human subjects who listened to sequences of repeated triplets of tones and reported their perception by pressing a button. Subjects reported spontaneous alternations between two auditory perceptual states (1-stream and 2-streams). We discuss a data-driven method, which leverages time-delayed coordinates, diffusion maps, and dynamic mode decomposition, to identify neural features that correlated with subject-reported switching between perceptual states.


Ruben Coen-Cagli 

Albert Einstein College of Medicine

March 30, 2022

  • YouTube

Probabilistic computation in natural vision

A central goal of vision science is to understand the principles underlying the perception and neural coding of the complex visual environment of our everyday experience. In the visual cortex, foundational work with artificial stimuli, and more recent work combining natural images and deep convolutional neural networks, have revealed much about the tuning of cortical neurons to specific image features. However, a major limitation of this existing work is its focus on single-neuron response strength to isolated images. First, during natural vision, the inputs to cortical neurons are not isolated but rather embedded in a rich spatial and temporal context. Second, the full structure of population activity—including the substantial trial-to-trial variability that is shared among neurons—determines encoded information and, ultimately, perception. In the first part of this talk, I will argue for a normative approach to study encoding of natural images in primary visual cortex (V1), which combines a detailed understanding of the sensory inputs with a theory of how those inputs should be represented. Specifically, we hypothesize that V1 response structure serves to approximate a probabilistic representation optimized to the statistics of natural visual inputs, and that contextual modulation is an integral aspect of achieving this goal. I will present a concrete computational framework that instantiates this hypothesis, and data recorded using multielectrode arrays in macaque V1 to test its predictions. In the second part, I will discuss how we are leveraging this framework to develop deep probabilistic algorithms for natural image and video segmentation. 

March 16 & March 23

 

Eve and following day of Cosyne 2022


Robert Guetig

Charité – Universitätsmedizin Berlin & BIH

March 9, 2022

  • YouTube

Turning spikes to space: The storage capacity of tempotrons

with plastic synaptic dynamics

Neurons in the brain communicate through action potentials (spikes) that are transmitted through chemical synapses. Throughout the last decades, the question of how networks of spiking neurons represent and process information has remained an important challenge. Some progress has resulted from a recent family of supervised learning rules (tempotrons) for models of spiking neurons. However, these studies have viewed synaptic transmission as static and characterized synaptic efficacies as scalar quantities that change only on slow time scales of learning across trials but remain fixed on the fast time scales of information processing within a trial. By contrast, signal transduction at chemical synapses in the brain results from complex molecular interactions between multiple biochemical processes whose dynamics result in substantial short-term plasticity of most connections. Here we study the computational capabilities of spiking neurons whose synapses are dynamic and plastic, such that each individual synapse can learn its own dynamics. We derive tempotron learning rules for current-based leaky-integrate-and-fire neurons with different types of dynamic synapses. Introducing ordinal synapses, whose efficacies depend only on the order of input spikes, we establish an upper capacity bound for spiking neurons with dynamic synapses. We compare this bound to independent synapses, static synapses, and to the well-established phenomenological Tsodyks-Markram model. We show that synaptic dynamics in principle allow the storage capacity of spiking neurons to scale with the number of input spikes and that this increase in capacity can be traded for greater robustness to input noise, such as spike time jitter. Our work highlights the feasibility of a novel computational paradigm for spiking neural circuits with plastic synaptic dynamics: rather than being determined by the fixed number of afferents, the dimensionality of a neuron's decision space can be scaled flexibly through the number of input spikes emitted by its input layer.
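
For orientation, the voltage of a current-based leaky integrate-and-fire "tempotron" with static synapses, i.e., the baseline that the talk generalizes by letting each synapse learn its own short-term dynamics (illustrative parameters):

```python
# Static-synapse tempotron voltage: each input spike contributes a fixed
# postsynaptic potential kernel scaled by its synaptic efficacy.
import numpy as np

rng = np.random.default_rng(4)
n_syn, T, dt = 50, 500.0, 0.1                     # ms
tau_m, tau_s = 15.0, 3.0                          # membrane and synaptic time constants
v_thr = 1.0                                       # illustrative firing threshold
w = rng.normal(0, 0.1, n_syn)                     # fixed (non-dynamic) efficacies
spikes = [np.sort(rng.uniform(0, T, rng.poisson(5))) for _ in range(n_syn)]

t = np.arange(0, T, dt)
K = lambda s: (np.exp(-s / tau_m) - np.exp(-s / tau_s)) * (s > 0)   # PSP kernel

V = np.zeros(t.size)
for i in range(n_syn):
    for ts in spikes[i]:
        V += w[i] * K(t - ts)

print("peak voltage: %.2f, crosses threshold: %s" % (V.max(), (V > v_thr).any()))
```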


Mark Goldman

UC Davis

March 2, 2022

Integrators in short- and long-term memory

The accumulation and storage of information in memory is a fundamental computation underlying animal behavior. In many brain regions and task paradigms, ranging from motor control to navigation to decision-making, such accumulation is accomplished through neural integrator circuits that enable external inputs to move a system’s population-wide patterns of neural activity along a continuous attractor.  In the first portion of the talk, I will discuss our efforts to dissect the circuit mechanisms underlying a neural integrator from a rich array of anatomical, physiological, and perturbation experiments. In the second portion of the talk, I will show how the accumulation and storage of information in long-term memory may also be described by attractor dynamics, but now within the space of synaptic weights rather than neural activity. Altogether, this work suggests a conceptual unification of seemingly distinct short- and long-term memory processes.
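
A minimal linear neural-integrator sketch of the kind alluded to above: feedback tuned to exactly cancel the leak along one mode, so transient inputs are accumulated and then held (illustrative, not the biological circuit dissected in the talk):

```python
# Linear neural integrator: recurrent feedback of exactly 1 along one mode
# turns that mode into a line attractor that accumulates and stores input.
import numpy as np

rng = np.random.default_rng(5)
N, tau, dt = 50, 10.0, 1.0                        # ms
u = rng.normal(0, 1, N); u /= np.linalg.norm(u)   # integrating mode
W = np.outer(u, u)                                # feedback eigenvalue 1 along u

r = np.zeros(N)
readout = []
for t in range(1000):
    inp = 0.05 * u if 100 <= t < 300 else 0.0     # brief input pulse along the mode
    r += dt / tau * (-r + W @ r + inp)
    readout.append(u @ r)

print("readout at end of pulse: %.2f, at end of trial: %.2f" % (readout[299], readout[-1]))
# the readout ramps during the pulse and then holds; mistuned feedback
# (e.g., 0.95 * W) would make the stored value leak away instead
```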

  • YouTube

Rainer Engelken

Columbia University

February 23, 2022

  • YouTube

Taming chaos in neural circuits

Neural circuits exhibit complex activity patterns, both spontaneously and in response to external stimuli. Information encoding and learning in neural circuits depend on the ability of time-varying stimuli to control spontaneous network activity. In particular, variability arising from the sensitivity to initial conditions of recurrent cortical circuits can limit the information conveyed about the sensory input. Spiking and firing rate network models can exhibit such sensitivity to initial conditions that are reflected in their dynamic entropy rate and attractor dimensionality computed from their full Lyapunov spectrum. I will show how chaos in both spiking and rate networks depends on biophysical properties of neurons and the statistics of time-varying stimuli. In spiking networks, increasing the input rate or coupling strength aids in controlling the driven target circuit, which is reflected in both a reduced trial-to-trial variability and a decreased dynamic entropy rate. With sufficiently strong input, a transition towards complete network state control occurs. Surprisingly, this transition does not coincide with the transition from chaos to stability but occurs at even larger values of external input strength. Controllability of spiking activity is facilitated when neurons in the target circuit have a sharp spike onset, thus a high speed by which neurons launch into the action potential. I will also discuss chaos and controllability in firing-rate networks in the balanced state. For these, external control of recurrent dynamics strongly depends on correlations in the input. This phenomenon was studied with a non-stationary dynamic mean-field theory that determines how the activity statistics and the largest Lyapunov exponent depend on frequency and amplitude of the input, recurrent coupling strength, and network size. This shows that uncorrelated inputs facilitate learning in balanced networks. The results highlight the potential of Lyapunov spectrum analysis as a diagnostic for machine learning applications of recurrent networks. They are also relevant in light of recent advances in optogenetics that allow for time-dependent stimulation of a select population of neurons.
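
A sketch of how the largest Lyapunov exponent of a driven random rate network can be estimated from the tangent (variational) dynamics; the talk computes full Lyapunov spectra and entropy rates, and all parameters here are illustrative:

```python
# Largest Lyapunov exponent of a driven random rate network, estimated by
# evolving a tangent vector alongside the trajectory and renormalizing it.
import numpy as np

rng = np.random.default_rng(6)
N, g, dt, tau = 200, 1.5, 0.1, 1.0
J = g * rng.normal(0, 1 / np.sqrt(N), (N, N))
phases = rng.uniform(0, 2 * np.pi, N)

def lambda_max(amp, T=20000):
    x = rng.normal(0, 1, N)
    v = rng.normal(0, 1, N); v /= np.linalg.norm(v)
    growth = 0.0
    for t in range(T):
        drive = amp * np.sin(2 * np.pi * 0.2 * t * dt + phases)
        phi_prime = 1.0 - np.tanh(x) ** 2                  # Jacobian of tanh at current state
        x += dt / tau * (-x + J @ np.tanh(x) + drive)
        v += dt / tau * (-v + J @ (phi_prime * v))         # tangent dynamics
        if (t + 1) % 100 == 0:
            nrm = np.linalg.norm(v)
            growth += np.log(nrm)
            v /= nrm
    return growth / (T * dt)

for amp in [0.0, 0.5, 1.5]:
    print("input amplitude %.1f -> lambda_max ~ %.3f per tau" % (amp, lambda_max(amp)))
# stronger time-varying drive typically pushes lambda_max down, i.e., tames the chaos
```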


Christian Machens

Champalimaud Center, Lisboa

February 16, 2022

Robustness in spiking networks: a geometric perspective

Neural systems are remarkably robust against various perturbations, a phenomenon that still requires a clear explanation. Here, we graphically illustrate how neural networks can become robust. We study spiking networks that generate low-dimensional representations, and we show that the neurons’ subthreshold voltages are confined to a convex region in a lower-dimensional voltage subspace, which we call a ‘bounding box.’ Any changes in network parameters (such as number of neurons, dimensionality of inputs, firing thresholds, synaptic weights, or transmission delays) can all be understood as deformations of this bounding box. Using these insights, we show that functionality is preserved as long as perturbations do not destroy the integrity of the bounding box. We suggest that the principles underlying robustness in these networks—low-dimensional representations, heterogeneity of tuning, and precise negative feedback—may be key to understanding the robustness of neural systems at the circuit level.

  • YouTube

Carmen Canavier

LSU Health Sciences Center, New Orleans

February 9, 2022

  • YouTube

NaV Long-term Inactivation Regulates Adaptation in Place Cells and Depolarization Block in Dopamine Neurons

In behaving rodents, a CA1 pyramidal neuron receives spatially tuned depolarizing synaptic input as the animal traverses a specific location within the environment, the cell's place field. Midbrain dopamine neurons participate in reinforcement learning, and bursts of action potentials riding a depolarizing wave of synaptic input signal rewards and reward expectation. Interestingly, slice electrophysiology in vitro shows that both types of cells exhibit a pronounced reduction in firing rate (adaptation) and even cessation of firing during sustained depolarization. We included a five-state Markov model of NaV1.6 (for CA1 neurons) or NaV1.2 (for dopamine neurons), respectively, in computational models of these two types of neurons. Our simulations suggest that long-term inactivation of this channel is responsible for the adaptation in CA1 pyramidal neurons in response to triangular depolarizing current ramps. We also show that the differential contribution of slow inactivation in two subpopulations of midbrain dopamine neurons can account for their different dynamic ranges, as assessed by their responses to similar depolarizing ramps. These results suggest that long-term inactivation of the sodium channel is a general mechanism for adaptation.


Alex Roxin

CRM, Barcelona

February 2, 2022

  • YouTube

Network mechanisms underlying representational drift

in area CA1 of hippocampus.

Recent chronic imaging experiments in mice have revealed that the hippocampal code exhibits non-trivial turnover dynamics over long time scales. Specifically, the subset of cells which are active on any given session in a familiar environment changes over the course of days and weeks. While some cells transition into or out of the code after a few sessions, others are stable over the entire experiment. The mechanisms underlying this turnover are unknown. Here we show that the statistics of turnover are consistent with a model in which non-spatial inputs to CA1 pyramidal cells readily undergo plasticity, while spatially tuned inputs are largely stable over time. The heterogeneity in stability across the cell assembly, as well as the decrease in correlation of the population vector of activity over time, are both quantitatively fit by a simple model with Gaussian input statistics. In fact, such input statistics emerge naturally in a network of spiking neurons operating in the fluctuation-driven regime. This correspondence allows one to map the parameters of a large-scale spiking network model of CA1 onto the simple statistical model, and thereby fit the experimental data quantitatively. Importantly, we show that the observed drift is entirely consistent with random, ongoing synaptic turnover. This synaptic turnover is, in turn, consistent with Hebbian plasticity related to continuous learning in a fast memory system.


SueYeon Chung

Flatiron Institute/NYU

January 26, 2022

  • YouTube

Structure, Function, and Learning in Distributed Neuronal Networks

A central goal in neuroscience is to understand how orchestrated computations in the brain arise from the properties of single neurons and networks of such neurons. Answering this question requires theoretical advances that shine light into the ‘black box’ of neuronal networks. In this talk, I will demonstrate theoretical approaches that help describe how cognitive and behavioral task implementations emerge from structure in neural populations and from biologically plausible learning rules.

First, I will introduce an analytic theory that connects geometric structures that arise from neural responses (i.e., neural manifolds) to the neural population’s efficiency in implementing a task. In particular, this theory describes how easy or hard it is to discriminate between object categories based on the underlying neural manifolds’ structural properties.

Next, I will describe how such methods can, in fact, open the ‘black box’ of neuronal networks, by showing how we can understand a) the role of network motifs in task implementation in neural networks and b) the role of neural noise in adversarial robustness in vision and audition. Finally, I will discuss my recent efforts to develop biologically plausible learning rules for neuronal networks, inspired by recent experimental findings in synaptic plasticity. By extending our mathematical toolkit for analyzing representations and learning rules underlying complex neuronal networks, I hope to contribute toward the long-term challenge of understanding the neuronal basis of behaviors.


Nicolas Brunel

Duke University

January 19, 2022

Response of cortical networks to optogenetic stimulation:

Experiment vs. theory

Optogenetics is a powerful tool that allows experimentalists to perturb neural circuits. What can we learn about a network from observing its response to perturbations? I will first describe the results of optogenetic activation of inhibitory neurons in mouse cortex, and show that the results are consistent with inhibition stabilization. I will then move to experiments in which excitatory neurons are activated optogenetically, with or without visual inputs, in mice and monkeys. In some conditions, these experiments show the surprising result that the distribution of firing rates is not significantly changed by stimulation, even though the firing rates of individual neurons are strongly modified. I will show in which conditions a network model of excitatory and inhibitory neurons can reproduce this feature.
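
For orientation, a minimal rate model in the inhibition-stabilized regime, showing the paradoxical effect used to interpret such experiments: extra input to the inhibitory population lowers the inhibitory rate (illustrative parameters, not a fit to the data in the talk):

```python
# Two-population rate model in the inhibition-stabilized regime (wEE > 1):
# increasing the input to I paradoxically lowers both rE and rI.
import numpy as np

wEE, wEI, wIE, wII = 2.0, 2.5, 2.0, 1.0
tauE, tauI, dt = 20.0, 10.0, 0.1                  # ms

def steady_state(hE, hI, steps=50000):
    rE = rI = 0.0
    for _ in range(steps):
        rE += dt / tauE * (-rE + max(wEE * rE - wEI * rI + hE, 0.0))
        rI += dt / tauI * (-rI + max(wIE * rE - wII * rI + hI, 0.0))
    return rE, rI

for hI in [0.5, 0.7]:
    rE, rI = steady_state(1.0, hI)
    print("input to I = %.1f  ->  rE %.3f, rI %.3f" % (hI, rE, rI))
# rI decreases when its own input increases: the paradoxical ISN signature
```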

  • YouTube

Naama Brenner

Technion, Haifa

January 12, 2022

Exploratory learning outside the brain

Learning entails self-modification of a system under closed-loop dynamics with its environment. Not only may the system's components change, but also the way they interact with one another, like synapses in the brain, which modify interactions between neurons during learning. Such processes, however, are not limited to the brain but can be found also in other areas of biology. I will describe a framework for a primitive form of learning that takes place within the single cell. This type of learning is composed of random modifications guided by global feedback. The capacity to utilize exploratory dynamics, improvisational in nature, provides cells with the plasticity required to overcome extreme challenges and to develop novel phenotypes.

  • YouTube

Misha Tsodyks

Weizmann Institute

Institute for Advanced Study

January 5, 2022

Human memory: mathematical models and experiments

I will present my recent work on mathematical modeling of human memory. I will argue that memory recall of random lists of items is governed by a universal algorithm, resulting in an analytical relation between the number of items in memory and the number of items that can be successfully recalled. The retention of items in memory, on the other hand, is not universal and differs for different types of items being remembered; in particular, retention curves for words and sketches are different even when sketches are made to carry information only about the object being drawn. I will discuss the putative reasons for these observations and introduce a phenomenological model predicting retention curves.
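
A toy similarity-based recall walk in the spirit of the universal recall algorithm mentioned above (heavily simplified; details differ from the talk's model): items are random patterns, recall hops to the most similar item other than the one just visited, and stops once a transition repeats.

```python
# Toy recall walk on a random similarity matrix: the number of distinct items
# visited before the walk cycles grows sublinearly with list length.
import numpy as np

rng = np.random.default_rng(7)

def n_recalled(L, dim=10000, sparsity=0.05):
    patterns = (rng.random((L, dim)) < sparsity).astype(float)
    sim = patterns @ patterns.T
    np.fill_diagonal(sim, -np.inf)                    # never "recall" the current item again
    recalled, prev, cur = {0}, -1, 0
    seen_transitions = set()
    while True:
        order = np.argsort(sim[cur])[::-1]
        nxt = order[0] if order[0] != prev else order[1]
        if (cur, nxt) in seen_transitions:            # walk has entered a cycle
            break
        seen_transitions.add((cur, nxt))
        recalled.add(nxt)
        prev, cur = cur, nxt
    return len(recalled)

for L in [8, 16, 32, 64]:
    trials = [n_recalled(L) for _ in range(20)]
    print("list length %3d -> mean items recalled %.1f" % (L, np.mean(trials)))
```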

December 29, 2021

New Year Break 


Alex Hyafil

CRM, Barcelona

December 22, 2021

  • YouTube

Does human perception rely on probabilistic message passing?

The idea that perception in humans relies on some form of probabilistic computations has become very popular over the last decades. It has been extremely difficult however to characterize the extent and the nature of the probabilistic representations and operations that are manipulated by neural populations in the human cortex. Several theoretical works suggest that probabilistic representations are present from low-level sensory areas to high-level areas. According to this view, the neural dynamics implements some forms of probabilistic message passing (i.e. neural sampling, probabilistic population coding, etc.) which solves the problem of perceptual inference. Here I will present recent experimental evidence that human and non-human primate perception implements some form of message passing. I will first review findings showing probabilistic integration of sensory evidence across space and time in primate visual cortex. Second, I will show that the confidence reports in a hierarchical task reveal that uncertainty is represented both at lower and higher levels, in a way that is consistent with probabilistic message passing both from lower to higher and from higher to lower representations. Finally, I will present behavioral and neural evidence that human perception takes into account pairwise correlations in sequences of sensory samples in agreement with the message passing hypothesis, and against standard accounts such as accumulation of sensory evidence or predictive coding.


Dina Obeid

Harvard University

December 15, 2021

  • YouTube

Wiring Minimization of Deep Neural Networks Reveals Conditions in which Multiple Visuotopic Areas Emerge

The visual system is characterized by multiple mirrored visuotopic maps, with each repetition corresponding to a different visual area. In this work we explore whether such visuotopic organization can emerge as a result of minimizing the total wire length between neurons connected in a deep hierarchical network. Our results show that networks with purely feedforward connectivity typically result in a single visuotopic map, and in certain cases no visuotopic map emerges. However, when we modify the network by introducing lateral connections, with sufficient lateral connectivity among neurons within layers, multiple visuotopic maps emerge, where some connectivity motifs yield mirrored alternations of visuotopic maps, a signature of biological visual system areas. These results demonstrate that different connectivity profiles have different emergent organizations under the minimum total wire length hypothesis, and highlight that characterizing the large-scale spatial organization of tuning properties in a biological system might also provide insights into the underlying connectivity.


Taro Toyoizumi

RIKEN

December 8, 2021

  • YouTube

An economic decision-making model of anticipated surprise

with dynamic expectation

When making decisions under risk, people often exhibit behaviours that classical economic theories cannot explain. Newer models that attempt to account for these ‘irrational’ behaviours often lack a neuroscientific basis and require the introduction of subjective and problem-specific constructs. Here, we present a decision-making model inspired by the prediction error signals and introspective neuronal replay reported in the brain. In the model, decisions are chosen based on ‘anticipated surprise’, defined by a nonlinear average of the differences between individual outcomes and a reference point. The reference point is determined by the expected value of the possible outcomes, which can dynamically change during the mental simulation of decision-making problems involving sequential stages. Our model elucidates the contribution of each stage to the appeal of available options in a decision-making problem. This allows us to explain several economic paradoxes and gambling behaviours. Our work could help bridge the gap between decision-making theories in economics and neuroscience.


Tahra Eissa

University of Colorado Boulder

December 1, 2021

  • YouTube

Suboptimal human inference inverts the bias-variance trade-off for decisions with asymmetric evidence

Solutions to challenging inference problems are often subject to a fundamental trade-off between bias (being systematically wrong) that is minimized with complex inference strategies and variance (being oversensitive to uncertain observations) that is minimized with simple inference strategies. However, this trade-off is based on the assumption that the strategies being considered are optimal for their given complexity and thus has unclear relevance to the frequently suboptimal inference strategies used by humans. We examined inference problems involving rare, asymmetrically available evidence, which a large population of human subjects solved using a diverse set of strategies that were suboptimal relative to the Bayesian ideal observer. These suboptimal strategies reflected an inversion of the classic bias-variance trade-off: subjects who used more complex, but imperfect, Bayesian-like strategies tended to have lower variance but high bias because of incorrect tuning to latent task features, whereas subjects who used simpler heuristic strategies tended to have higher variance because they operated more directly on the observed samples but displayed weaker, near-normative bias. Our results yield new insights into the principles that govern individual differences in behavior that depends on rare-event inference, and, more generally, about the information-processing trade-offs that are sensitive to not just the complexity, but also the optimality of the inference process.


Ariel Furstenberg

The Hebrew University

November 24, 2021

  • YouTube

Change of mind in rapid free-choice picking scenarios

In a famous philosophical paradox, Buridan's ass perishes because he is equally hungry and thirsty and cannot make up his mind whether to drink or eat first. We are faced daily with the need to pick between alternatives that are equally attractive (or not) to us. What are the processes that allow us to avoid paralysis and to rapidly select between such equal options when there are no preferences or rational reasons to rely on? One solution that has been offered is that although on a higher cognitive level there is symmetry between the alternatives, on a neuronal level the symmetry does not hold. What is the nature of this asymmetry at the neuronal level? In this talk I will present experiments addressing this important phenomenon using measures of human behavior, EEG, EMG, and large-scale neural network modeling, and discuss mechanisms involved in the process of intention formation and execution in the face of alternatives to choose from. Specifically, I will show results revealing the temporal dynamics of rapid intention formation and, moreover, ‘change of intention’ in a free-choice picking scenario, in which the alternatives are on a par for the participant. The results suggest that even in arbitrary choices, endogenous or exogenous biases that are present in the neural system for selecting one or another option may be implicitly overruled, thus creating an implicit and non-conscious ‘change of mind’. Finally, the question is raised: in what way do such rapid implicit ‘changes of mind’ help retain one's self-control and free-will behavior?


Farzad Farkhooi

Humboldt University

Berlin

November 17, 2021

Noise-induced properties of active dendrites

Neuronal dendritic trees display a wide range of nonlinear input integrations due to their voltage-dependent active calcium channels. We reveal that in vivo-like fluctuating input enhances nonlinearity substantially in a single dendritic compartment and shifts the input-output relation to exhibit nonmonotonic or bistable dynamics. In particular, with the slow activation of calcium dynamics, we analyze noise-induced bistability and its timescales. We show that bistability induces long-timescale fluctuations that can account for dendritic plateau potentials observed under in vivo conditions. In a multicompartmental model neuron with realistic synaptic input, we show that noise-induced bistability persists in a wide range of parameters. Using Fredholm's theory to calculate the spiking rate of multivariable neurons, we discuss how dendritic bistability shifts the spiking dynamics of single neurons and its implications for network phenomena in the processing of in vivo-like fluctuating input.

  • YouTube


Tatjana Tchumatchenko

University of Bonn

November 10, 2021

Not recorded

Synaptic plasticity controls the emergence of population-wide invariant representations in balanced network models

The intensity and features of sensory stimuli are encoded in the activity of neurons in the cortex. In the visual and piriform cortices, the stimulus intensity re-scales the activity of the population without changing its selectivity for the stimulus features. The cortical representation of the stimulus is therefore intensity-invariant. This emergence of network invariant representations appears robust to local changes in synaptic strength induced by synaptic plasticity, even though: i) synaptic plasticity can potentiate or depress connections between neurons in a feature-dependent manner, and ii) in networks with balanced excitation and inhibition, synaptic plasticity determines the non-linear network behavior. In this study, we investigate the consistency of invariant representations with a variety of synaptic states in balanced networks. By using mean-field models and spiking network simulations, we show how the synaptic state controls the emergence of intensity-invariant or intensity-dependent selectivity by inducing changes in the network response to intensity. In particular, we demonstrate how facilitating synaptic states can sharpen the network selectivity while depressing states broaden it. We also show how power-law-type synapses permit the emergence of invariant network selectivity and how this plasticity can be generated by a mix of different plasticity rules. Our results explain how the physiology of individual synapses is linked to the emergence of invariant representations of sensory stimuli at the network level.


Vijay Balasubramanian

University of Pennsylvania

November 3, 2021

  • YouTube

Becoming what you smell: adaptive sensing in the olfactory system

I will argue that the circuit architecture of the early olfactory system provides an adaptive, efficient mechanism for compressing the vast space of odor mixtures into the responses of a small number of sensors. In this view, the olfactory sensory repertoire employs a disordered code to compress a high-dimensional olfactory space into a low-dimensional receptor response space while preserving distance relations between odors. The resulting representation is dynamically adapted to efficiently encode the changing environment of volatile molecules. I will show that this adaptive combinatorial code can be efficiently decoded by systematically eliminating candidate odorants that bind to silent receptors. The resulting algorithm for "estimation by elimination" can be implemented by a neural network that is remarkably similar to the early olfactory pathway in the brain. Finally, I will discuss how diffuse feedback from the central brain to the bulb, followed by unstructured projections back to the cortex, can produce the convergence and divergence of the cortical representation of odors presented in shared or different contexts. Our theory predicts a relation between the diversity of olfactory receptors and the sparsity of their responses that matches animals from flies to humans. It also predicts specific deficits in olfactory behavior that should result from optogenetic manipulation of the olfactory bulb and cortex, and in some disease states.
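
A toy sketch of the "estimation by elimination" idea with binary receptors: any candidate odorant that binds a receptor that stayed silent is discarded (a random affinity model with invented parameters, not the fitted olfactory model from the talk):

```python
# Estimation by elimination, toy version: silent receptors prune the list of
# candidate odorants; true mixture components are never eliminated.
import numpy as np

rng = np.random.default_rng(8)
n_receptors, n_odorants, mixture_size = 300, 5000, 10
binds = rng.random((n_receptors, n_odorants)) < 0.05      # which odorant activates which receptor

mixture = rng.choice(n_odorants, mixture_size, replace=False)
active = binds[:, mixture].any(axis=1)                    # a receptor fires if any component binds it

candidates = np.ones(n_odorants, dtype=bool)
for r in np.where(~active)[0]:                            # each silent receptor eliminates candidates
    candidates &= ~binds[r]

print("candidates left:", candidates.sum(), "of", n_odorants)
print("all true components retained:", candidates[mixture].all())
```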


Carsen Stringer

HHMI, Janelia Research Campus

October 27, 2021

 

  • YouTube

Rastermap: Extracting structure from high dimensional neural data

Large-scale neural recordings contain high-dimensional structure that cannot be easily captured by existing data visualization methods. We therefore developed an embedding algorithm called Rastermap, which captures highly nonlinear relationships between neurons and provides useful visualizations by assigning each neuron to a location in the embedding space. Compared to standard algorithms such as t-SNE and UMAP, Rastermap finds finer and higher-dimensional patterns of neural variability, as measured by quantitative benchmarks. We applied Rastermap to a variety of datasets, including spontaneous neural activity, neural activity during a virtual reality task, widefield neural imaging data during a 2AFC task, artificial neural activity from an agent playing Atari games, and neural responses to visual textures. We found within these datasets unique subpopulations of neurons encoding abstract properties of the environment.

October 20, 2021 - 10 am to 12:15 pm (EDT)

In Memoriam: Naftali Tishby (1952-2021)


Amir Globerson

Tel Aviv University

 On the implicit bias of SGD in deep learning.

Tali's work emphasized the tradeoff between compression and information preservation. In this talk I will explore this theme in the context of deep learning. Artificial neural networks have recently revolutionized the field of machine learning. However, we still do not have sufficient theoretical understanding of how such models can be successfully learned. Two specific questions in this context are: how can neural nets be learned despite the non-convexity of the learning problem, and how can they generalize well despite often having more  parameters than training data. I will describe our recent work showing that gradient-descent optimization indeed leads to "simpler" models, where simplicity is captured by lower weight norm and in some cases clustering of weight vectors. We demonstrate this for several teacher and student architectures, including learning linear teachers with ReLU networks, learning boolean functions and learning convolutional pattern detection architectures.
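
A standard small illustration of the phenomenon (not the specific ReLU or convolutional results in the talk): gradient descent on overparameterized linear regression, started from zero, converges to the minimum-norm solution that fits the training data.

```python
# Implicit bias of gradient descent: with more parameters than examples and a
# zero initialization, plain GD on squared error finds the minimum-norm
# interpolating solution (the pseudo-inverse solution).
import numpy as np

rng = np.random.default_rng(9)
n, d = 20, 100                                   # fewer examples than parameters
X = rng.normal(size=(n, d))
y = rng.normal(size=n)

w = np.zeros(d)
lr = 0.01
for _ in range(20000):
    w -= lr * X.T @ (X @ w - y) / n              # gradient of 0.5 * mean squared error

w_min_norm = X.T @ np.linalg.solve(X @ X.T, y)   # minimum-norm interpolator
print("training residual:", np.linalg.norm(X @ w - y))
print("distance to min-norm solution:", np.linalg.norm(w - w_min_norm))
```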

  • YouTube

Eli Nelken

The Hebrew University of Jerusalem

Through the bottleneck: my adventures with the 'Tishby program'

One of Tali's cherished goals was to transform biology into physics. In his view, biologists were far too enamored of the details of the specific models they studied, losing sight of the big principles that may govern the behavior of these models. One such big principle that he suggested was the 'information bottleneck (IB) principle'. The IB principle is an information-theoretic approach for extracting the relevant information that one random variable carries about another. Tali applied the IB principle to numerous problems in biology, gaining important insights in the process. Here I will describe two applications of the IB principle to neurobiological data. The first is the formalization of the notion of surprise, which allowed us to rigorously estimate the memory duration and content of neuronal responses in auditory cortex, and the second is an application to behavior, allowing us to estimate 'optimal policies under information constraints' that shed interesting light on rat behavior.
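
For reference, the IB objective min I(X;T) - beta * I(T;Y) solved by the standard self-consistent (Blahut-Arimoto-style) iteration on a small random joint distribution (textbook formulation, not either of the neurobiological applications described above):

```python
# Information bottleneck on a toy discrete problem: iterate the standard
# self-consistent equations for the encoder p(t|x), marginal p(t), and
# decoder p(y|t), then report the compression term I(X;T).
import numpy as np

rng = np.random.default_rng(10)
nx, ny, nt, beta = 8, 4, 4, 4.0
p_xy = rng.random((nx, ny)); p_xy /= p_xy.sum()             # joint distribution p(x, y)
p_x = p_xy.sum(axis=1)
p_y_given_x = p_xy / p_x[:, None]

q = rng.random((nx, nt)); q /= q.sum(axis=1, keepdims=True) # encoder p(t|x)

for _ in range(300):
    p_t = p_x @ q + 1e-12                                   # marginal p(t)
    p_y_given_t = (q * p_x[:, None]).T @ p_y_given_x / p_t[:, None]
    p_y_given_t = np.maximum(p_y_given_t, 1e-12)            # decoder p(y|t), kept positive
    kl = (p_y_given_x[:, None, :] *
          np.log(p_y_given_x[:, None, :] / p_y_given_t[None, :, :])).sum(axis=2)
    q = p_t[None, :] * np.exp(-beta * kl)                   # p(t|x) ~ p(t) exp(-beta * KL)
    q /= q.sum(axis=1, keepdims=True)

I_xt = np.sum(q * p_x[:, None] * np.log(q / p_t[None, :] + 1e-30))
print("I(X;T) = %.3f nats at beta = %.1f" % (I_xt, beta))
```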

  • YouTube

Ila Fiete

MIT

October 13, 2021

Pre-structured scaffolds for memory: an architecture for robust high-capacity associative memory that can trade off pattern number and richness

  • YouTube

André Longtin

University of Ottawa

October 6, 2021

  • YouTube

 Adaptation-driven sensory detection and sequence memory

Spike-driven adaptation involves intracellular mechanisms that are initiated by spiking and lead to a subsequent reduction of the spiking rate. One of its consequences is the temporal patterning of spike trains, as it imparts serial correlations between interspike intervals in baseline activity. Surprisingly, the hidden adaptation states that lead to these correlations themselves exhibit quasi-independence. This talk will first discuss recent findings about the role of such adaptation in suppressing noise and extending sensory detection to weak stimuli that leave the firing rate unchanged. Further, a matching of the post-synaptic responses to the pre-synaptic adaptation time scale enables a recovery of the quasi-independence property, and can explain observations of correlations between post-synaptic EPSPs and behavioural detection thresholds. We then consider the involvement of spike-driven adaptation in the representation of intervals between sensory events. We discuss the possible link of this time-stamping mechanism to the conversion of egocentric to allocentric coordinates. The heterogeneity of the population parameters enables the representation and Bayesian decoding of time sequences of events, which may be put to good use in path integration and in hilus neuron function in the hippocampus.
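
A minimal leaky integrate-and-fire neuron with a spike-triggered adaptation current, showing the negative serial correlations between adjacent interspike intervals mentioned above (illustrative parameters):

```python
# Adapting LIF neuron: each spike increments a slow adaptation current, which
# makes a long interval tend to be followed by a short one (negative lag-1
# correlation between interspike intervals).
import numpy as np

rng = np.random.default_rng(11)
dt, T = 0.1, 100000.0                    # ms
tau_m, tau_a = 10.0, 100.0
mu, sigma = 2.0, 0.3                     # suprathreshold mean drive plus noise
v_th, v_reset, delta_a = 1.0, 0.0, 0.3   # threshold, reset, adaptation increment

v, a, spikes = 0.0, 0.0, []
for t in np.arange(0.0, T, dt):
    v += dt / tau_m * (mu - a - v) + sigma * np.sqrt(dt / tau_m) * rng.normal()
    a -= dt / tau_a * a
    if v >= v_th:
        v = v_reset
        a += delta_a
        spikes.append(t)

isi = np.diff(spikes)
rho1 = np.corrcoef(isi[:-1], isi[1:])[0, 1]
print("spikes: %d, lag-1 ISI correlation: %.3f" % (len(spikes), rho1))
# with delta_a = 0 (no adaptation) the lag-1 correlation is near zero
```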
