TBA
November 25, 2020
TBA
November 18, 2020
Maneesh Sahani
UCL, London
July 7, 2021
Perceptual Inference, Uncertainty and Representation
To act effectively and flexibly in an imperfectly predictable environment with only incomplete and unreliable sensory information, animals must learn to form and compute with internal representations that reflect their necessarily uncertain beliefs about the state of the world. The optimal approach to handling uncertainty is rooted in Bayesian probability, and indeed humans and other animals often approach Bayes optimality with a degree of robustness and flexibility that continues to evade artificial systems. However, the question of how neural circuits organise to achieve this performance remains one of the fundamental mysteries of neuroscience.
I will discuss a series of models built around the idea that distributional information is naturally encoded in a distributed fashion by neural population firing rates that converge on the mean values of non-linear functions of state. We will see that such representations emerge naturally in task-optimised systems, and also provide a simple and effective substrate for unsupervised learning. Finally, I will sketch ongoing work that links the emergence of such representations to the architecture of recurrent neural circuits.
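To make the encoding idea concrete, here is a minimal sketch (all tuning functions and parameters are hypothetical illustrations, not the talk's models): a Gaussian belief is encoded as the expectations of random sigmoidal functions of the state, and a linear readout of those rates recovers the posterior mean.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical encoding functions phi_i: random sigmoids of the state x.
n_neurons = 50
centers = rng.uniform(-3, 3, n_neurons)
slopes = rng.uniform(0.5, 2.0, n_neurons)
phi = lambda x: 1.0 / (1.0 + np.exp(-slopes * (x[..., None] - centers)))

def ddc_rates(mu, sigma, n_samples=10000):
    """Population rates r_i = E_p[phi_i(x)] under the belief p = N(mu, sigma^2)."""
    return phi(rng.normal(mu, sigma, n_samples)).mean(axis=0)

# A linear readout of the rates can approximate expectations of other
# functions, e.g. the posterior mean E_p[x]; fit it on example beliefs.
train_mu = rng.uniform(-2, 2, 200)
train_sd = rng.uniform(0.3, 1.5, 200)
R = np.array([ddc_rates(m, s) for m, s in zip(train_mu, train_sd)])
w, *_ = np.linalg.lstsq(R, train_mu, rcond=None)

r_test = ddc_rates(0.7, 0.8)                   # rates encoding a held-out belief
print("decoded posterior mean:", r_test @ w)   # should be close to 0.7
```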
Stephen Coombes
The University of Nottingham
June 30, 2021
Pattern formation in biological neural networks with rebound currents
Waves and patterns in the brain are well known to subserve natural computation. Much attention in the theoretical neuroscience community has been devoted to analysing networks of relatively simple spiking neurons (IF type) or firing rate models (Wilson-Cowan type), and to great effect! Indeed, the understanding of how spatio-temporal patterns of neural activity may arise in the cortex has advanced significantly with the development and analysis of such models. To replicate this success for sub-cortical tissues requires an extension to include relevant ionic currents that can further shape the firing response. Here I will advocate for two complementary approaches: i) one that augments the IF network approach to include piecewise-linear caricatures of gating dynamics for nonlinear ionic current models, and ii) firing rate reductions for systems where the nonlinear ionic currents are slow. By way of illustration, I will show how to construct spatially periodic waves and patterns in i) a simple spiking tissue model of medial entorhinal cortex (with an I_h current), and ii) a firing rate model of thalamus (with an I_T current). The biological commonality between these two models is that both express local 'rebound' currents that can usefully shape the global tissue response. The mathematical commonality is the use of tools from non-smooth dynamical systems theory to make analytical progress in determining patterns and their stability.
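As a toy illustration of the rebound mechanism, the following sketch simulates a single compartment with a piecewise-linear caricature of an I_h-like gating variable; the parameters are illustrative and not taken from the talk's tissue models.

```python
import numpy as np

# Single compartment with a piecewise-linear (PWL) caricature of an
# I_h-like gating variable h; all parameters are illustrative.
dt, T = 0.1, 600.0                          # ms
t = np.arange(0.0, T, dt)
C, gL, EL = 1.0, 0.1, -65.0                 # capacitance, leak
gh, Eh, tau_h = 0.1, -30.0, 100.0           # rebound current
h_inf = lambda V: np.clip((-65.0 - V) / 20.0, 0.0, 1.0)   # PWL activation

I_app = np.where((t > 100) & (t < 300), -1.5, 0.0)        # hyperpolarizing step
V, h = EL, 0.0                              # exact resting state
Vs = np.empty_like(t)
for k in range(t.size):
    dV = (-gL * (V - EL) - gh * h * (V - Eh) + I_app[k]) / C
    V += dt * dV
    h += dt * (h_inf(V) - h) / tau_h
    Vs[k] = V

# During the step, V hyperpolarizes and h slowly activates; on release the
# lingering inward current transiently pushes V above rest: a rebound.
print("min during step: %.1f mV" % Vs[(t > 100) & (t < 300)].min())
print("rebound peak after release: %.1f mV (rest = %.1f)" % (Vs[t > 300].max(), EL))
```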
Carina Curto
The Pennsylvania State University
June 23, 2021
Ten theorems about threshold-linear networks
Threshold-linear networks (TLNs) are popular firing rate models of recurrent networks. They have been used to model associative memory, decision-making, and position coding in cortical and hippocampal networks. Unlike rate models with other choices of nonlinearity, TLNs are piecewise linear, making them more amenable to mathematical analysis. In this talk I will present ten theorems about TLNs from the past five years. Many of these theorems connect the fixed points of a network to the structure of an underlying connectivity graph. These results have enabled us to develop graph rules to predict both static and dynamic attractors from network motifs. The theorems will be complemented with examples that illustrate how the mathematical results can be used to analyze and design recurrent networks that support a rich variety of computations and dynamics. Examples include internally-generated sequences, neural integrators, and central pattern generator circuits.
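For readers who want to experiment, here is a minimal simulation in what I understand to be the standard combinatorial TLN (CTLN) setup from this line of work (parameters should be treated as illustrative defaults): a directed 3-cycle, which supports no stable fixed point and instead produces a periodic sequence of activations.

```python
import numpy as np

# CTLN: dx/dt = -x + [W x + theta]_+ with the usual parameter defaults.
# Graph: directed 3-cycle 0 -> 1 -> 2 -> 0; W_ij depends on the edge j -> i.
eps, delta, theta = 0.25, 0.5, 1.0
edges = {(0, 1), (1, 2), (2, 0)}                 # (j, i) means j -> i
n = 3
W = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        if i != j:
            W[i, j] = -1 + eps if (j, i) in edges else -1 - delta

relu = lambda v: np.maximum(v, 0.0)
dt = 0.01
x = np.array([0.2, 0.0, 0.0])                    # small kick to neuron 0
traj = []
for _ in range(6000):
    x = x + dt * (-x + relu(W @ x + theta))
    traj.append(x.copy())
traj = np.array(traj)

# The 3-cycle supports no stable fixed point; activity falls into a limit
# cycle in which the three neurons peak in sequence.
winners = np.argmax(traj[-2000:], axis=1)
print("neurons taking turns as the most active:", sorted(set(winners.tolist())))
print("late-time range of x_0: %.2f to %.2f"
      % (traj[-2000:, 0].min(), traj[-2000:, 0].max()))
```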
Vincent Hakim
CNRS, Paris
June 16, 2021
What is the mechanistic basis of traveling waves in the motor cortex?
Oscillatory activity with different characteristic frequencies is recorded in different neural areas. Beta (13-30 Hz) oscillations are prominent in the motor cortex during movement preparation. Moreover, in several experiments, this oscillatory activity has been reported to organize into a variety of traveling wave types. I will discuss how these waves could arise in local excitatory-inhibitory modules coupled by long-range excitation. First, I will describe the synchronization properties of such a system, which we recently reinvestigated, following several previous works. I will then try to precisely compare the modeling to electrophysiological datasets recorded in the primary motor cortices of macaque monkeys during an instructed delayed reach-to-grasp task. Close agreement between the model and the experimental data is obtained in the presence of stochastic local inputs that vary on a slow timescale (200 ms) and mimic inputs to the motor cortex from other neural areas. The results suggest that both time-varying external inputs and the intrinsic network architecture shape the dynamics of the motor cortex.
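A sketch of one ingredient, under illustrative assumptions rather than the talk's fitted parameters: a linearized E-I module whose eigenvalues give a damped oscillation near 20 Hz, which noise then sustains as beta-band activity (the slow 200 ms external drive of the abstract is omitted here for clarity).

```python
import numpy as np

# A linearized E-I rate module, tau * dr/dt = (W - I) r + noise. With this
# (illustrative) W, the matrix (W - I) has eigenvalues -0.5 +/- 1.0j, i.e.
# a damped oscillation at 1/(2*pi*tau) ~ 20 Hz for tau = 8 ms: beta band.
tau = 0.008                                    # s
W = np.array([[1.5, -1.0],                     # [[EE, EI],
              [2.0, -0.5]])                    #  [IE, II]]
A = (W - np.eye(2)) / tau
lam = np.linalg.eigvals(A)
print("eigenvalues:", lam,
      "-> damped frequency %.1f Hz" % (lam.imag.max() / (2 * np.pi)))

# Drive the module with white noise and estimate the spectrum of r_E
# (Welch-style averaging over 1-s segments); expect a beta-band peak.
rng = np.random.default_rng(1)
dt, n_seg, seg = 5e-4, 100, 2000               # 100 segments of 1 s
r = np.zeros(2)
rE = np.empty(n_seg * seg)
for k in range(rE.size):
    r = r + dt * (A @ r) + np.sqrt(dt) * 0.5 * rng.standard_normal(2)
    rE[k] = r[0]
f = np.fft.rfftfreq(seg, dt)
Pm = (np.abs(np.fft.rfft(rE.reshape(n_seg, seg), axis=1)) ** 2).mean(axis=0)
print("spectral peak: %.1f Hz" % f[1:][Pm[1:].argmax()])   # should lie in beta
```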
Ken Miller
Columbia University
June 9, 2021
(Two or) three easy pieces
(1) We (Grace Lindsay) used convolutional neural nets to model attention, by scaling the input/output function of neurons in an ImageNet-trained network according to their selectivity for the feature or object category being attended. While this was effective in improving performance on difficult tasks, it was far less effective in earlier than in later layers. This indicated that neurons selective for a feature in earlier layers did not necessarily drive neurons selective for that feature in later layers. In contrast, applying attention according to the gradient for improving task performance worked well in early as well as late layers. This raises the question of whether biological attentional modulation might reflect task requirements and not only the features of the stimuli to be attended. We suggest a simple experiment to answer this question, which we hope to convince an appropriate lab to carry out.

(2) In E/I networks, a "paradoxical" response to stimulation has been shown: if the excitatory neurons would be unstable by themselves, but are stabilized by feedback inhibition (an "inhibition-stabilized network", or ISN), then, in response to addition of excitatory input to inhibitory neurons, their steady-state firing rates paradoxically decrease. In circuits with multiple inhibitory cell types, this has been generalized: in an ISN, if there is an added stimulus only to inhibitory cells, there will be a paradoxical change in the net inhibition received by excitatory cells -- e.g., if excitatory firing rates increase, so too will the net inhibition they receive. This does not imply that the firing rates of any particular inhibitory cell type will change paradoxically. Here we (Agostina Palmigiano along with Francesco Fumarola, and experimental work of Dan Mossing in the Adesnik lab) generalize the conditions for a paradoxical firing rate response, including responses to partial as well as full perturbation of the neurons of a given cell type. We work in the context of the circuit with three inhibitory cell types (PV, SOM, VIP) in mouse V1. We show that, if a given cell type shows a paradoxical response to its own full stimulation, then the circuit without that cell type is unstable. This and experimental results to date, as well as our models fitted to data, suggest that PV but not SOM interneurons stabilize the circuit of layer 2/3 of mouse V1, at least for smaller visual stimulus sizes. For partial perturbations of a fraction f of a cell type that responds paradoxically to a full perturbation, there is a "fractional paradoxical effect": the proportion of all the cells of that type, stimulated and unstimulated, that respond opposite to the stimulation (i.e. negative response to excitation) changes non-monotonically, approaching 1 for f -> 0, decreasing with increasing f, and then increasing to approach 1 again as f -> 1. I'll explain the origins of this behavior.

(3) We (Mario Dipoppa, in collaboration with the experimental work of Andy Keller and Morgane Roth from the Scanziani lab) have studied the E-PV-SOM-VIP circuit underlying contextual modulation in layer 2/3 of mouse V1. Experiments showed that E, PV, and VIP are suppressed by a surround stimulus that has the same orientation as, but not by one orthogonal to, the center stimulus. SOM neurons show the opposite behavior, being suppressed by an orthogonal but much less by a parallel surround. A combination of theory and optogenetic experiments shows that the disinhibitory circuit -- VIP inhibits SOM, which inhibits E -- modulates responses between the two conditions. However, it does so, as part of the recurrent circuit, primarily by changing the recurrent excitation E cells receive, rather than by directly changing the inhibition received, in a manner reminiscent of the paradoxical response.
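A minimal two-population sketch of the basic paradoxical effect in (2), with hypothetical weights chosen so that w_EE > 1 (E unstable alone, stabilized by inhibition): adding drive to the I population lowers its own steady-state rate.

```python
import numpy as np

relu = lambda v: np.maximum(v, 0.0)

# Rate dynamics: tau * dr/dt = -r + relu(W @ r + I_ext).
# w_EE = 2 > 1: the E population alone is unstable, but feedback inhibition
# stabilizes the circuit -- an inhibition-stabilized network (ISN).
W = np.array([[2.0, -1.5],
              [2.5, -1.0]])
tau, dt = 10.0, 0.1                             # ms

def steady_state(I_ext, T=2000.0):
    r = np.zeros(2)
    for _ in range(int(T / dt)):
        r = r + (dt / tau) * (-r + relu(W @ r + I_ext))
    return r

r_base = steady_state(np.array([10.0, 10.0]))
r_stim = steady_state(np.array([10.0, 11.0]))   # extra drive to I only
print("baseline (rE, rI):", r_base.round(3))    # ~ (2.857, 8.571)
print("I-stimulated:     ", r_stim.round(3))    # ~ (2.0, 8.0)
print("paradoxical drop in rI:", r_stim[1] - r_base[1] < 0)
```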
Lai-Sang Young
Courant Institute
June 2, 2021
A dynamical model of the visual cortex
In the past several years, I have been involved in building a biologically realistic model of the monkey visual cortex. Work on one of the input layers (4Cα) of the primary visual cortex (V1) is now nearly complete, and I would like to share some of what I have learned with the community. After a brief overview of the model and its capabilities, I would like to focus on three sets of results that represent three different aspects of the modeling. They are: (i) emergent E-I dynamics in local circuits; (ii) how visual cortical neurons acquire their ability to detect edges and directions of motion; and (iii) a view across the cortical surface: nonequilibrium steady states (in analogy with statistical mechanics) and beyond.
Ran Darshan
Janelia Research Campus
May 26, 2021
Manifold attractors without symmetry
Encoding by manifold attractors is one of the dominant paradigms in understanding neural computations involving continuous variables, such as parametric and spatial working memory or path integration. In this framework, a persistent neuronal representation of a continuous variable is often attributed to a symmetry principle, both in the representation itself and in the underlying synaptic connectivity. It is thus unclear whether the concept of manifold attractors applies to real biological systems, in which imperfections are inevitable and perfect symmetry is implausible. Here, we develop a theory for computations based on manifold attractors in trained neural networks and show how these manifolds can cope with diverse neuronal responses, imperfections in the geometry of the manifold, and a high level of synaptic heterogeneity. We show that a continuous neuronal representation of the feature emerges from a small set of stimuli used for training. Furthermore, we find that the network's response to external inputs depends on the geometry of the representation and on the level of synaptic heterogeneity in an analytically tractable and interpretable way. Finally, we show that an overly complex geometry of the neuronal representation destabilizes the manifold. Our framework shows that continuous features can be represented in the recurrent dynamics of heterogeneous networks without unrealistic symmetry assumptions. It suggests a general principle for how the static internal representation of continuous features predicts the dynamics of putative manifold attractors in the brain.
Tatyana Engel
Cold Spring Harbor Lab
May 12, 2021
Computational frameworks for integrating large-scale neural dynamics, connectivity and behaviour
Modern neurotechnologies generate high-resolution maps of brain-wide neural activity and anatomical connectivity. However, theoretical frameworks are missing to explain how global activity arises from connectivity to drive animal behaviors. I will present our recent work developing computational frameworks for modeling global neural dynamics, which utilize anatomical connectivity and predict rich behavioral outputs. First, we took advantage of recently available large-scale datasets of neural activity and connectivity to construct a model of mesoscopic functional dynamics across the mouse cortex. We found that global activity is restricted to a low-dimensional subspace spanned by a few cortical areas and explores different parts of this subspace in different behavioral contexts. Our framework provides an interpretable dimensionality reduction of cortex-wide neural activity grounded on the connectome, which generalizes across animals and behaviors. Second, we developed a circuit reduction method for inferring interpretable low-dimensional circuit mechanisms of cognitive computations from high-dimensional neural activity data. Our method infers the structural connectivity of an equivalent low-dimensional circuit that fits projections of high-dimensional neural activity data and implements the behavioral task. Our computational frameworks make quantitative predictions for perturbation experiments.
Ann Hermundstad
Janelia Research Campus
May 5, 2021
Design principles of adaptable neural codes
Behavior relies on the ability of sensory systems to infer changing properties of the environment from incoming sensory stimuli. However, the demands that detecting and adjusting to changes in the environment place on a sensory system often differ from the demands associated with performing a specific behavioral task. This necessitates neural coding strategies that can dynamically balance these conflicting needs. I will discuss our ongoing theoretical work to understand how this balance can best be achieved. We connect ideas from efficient coding and Bayesian inference to ask how sensory systems should dynamically allocate limited resources when the goal is to optimally infer changing latent states of the environment, rather than reconstruct incoming stimuli. We use these ideas to explore dynamic tradeoffs between the efficiency and speed of sensory adaptation schemes, and the downstream computations that these schemes might support. Finally, we derive families of codes that balance these competing objectives, and we demonstrate their close match to experimentally observed neural dynamics during sensory adaptation. These results provide a unifying perspective on adaptive neural dynamics across a range of sensory systems, environments, and sensory tasks.
Ilana Witten
Princeton University
April 28, 2021
Specialized and spatially organized dopamine signals
I will describe our work showing surprising heterogeneity at the single-cell level in the dopamine system, contradicting a classic view of a homogeneous reinforcement learning signal. Next, I will discuss new work attempting to reconcile this observed heterogeneity with classic models regarding the neural instantiation of reinforcement learning. Finally, I will discuss future directions aiming to extend these findings of within-subject dopamine variability to the question of cross-subject variability, with an eye to understanding potential consequences for individual differences in learned behavior.
John Rinzel
New York University
April 21, 2021
A neuronal model for learning to keep a rhythmic beat
When listening to music, we typically lock onto and move to a beat (1-6 Hz). Behavioral studies on such synchronization (Repp 2005) abound, yet the neural mechanisms remain poorly understood. Some models hypothesize an array of self-sustaining entrainable neural oscillators that resonate when forced with rhythmic stimuli (Large et al. 2010). In contrast, our formulation focuses on event time estimation and plasticity: a neuronal beat generator that adapts its intrinsic frequency and phase to match the external rhythm. The model learns new rhythms quickly, within a few cycles, as found in human behavior. When the stimulus is removed, the beat generator continues to produce the learned rhythm, consistent with synchronization-continuation tasks.
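The sketch below is an algorithmic caricature of beat keeping, not the neuronal model of the talk: joint phase and period correction in the style of two-process error-correction models, with illustrative gains. It learns a new tempo within a few cycles and then continues it once the stimulus stops.

```python
import numpy as np

# Two-process error correction: on each stimulus beat, the generator
# corrects its phase (gain alpha) and its period (gain beta).
alpha, beta = 0.6, 0.3
T_stim = 0.50                                # stimulus inter-beat interval (s)
stim_times = np.arange(0.0, 20 * T_stim, T_stim)

T_int, t_beat = 0.75, 0.0                    # internal period starts off-tempo
beats = []
for s in stim_times:
    e = t_beat - s                           # asynchrony to the stimulus beat
    T_int -= beta * e                        # period correction
    t_beat += T_int - alpha * e              # phase-corrected next beat
    beats.append(t_beat)

# Synchronization-continuation: the stimulus stops, the generator carries on.
for _ in range(8):
    t_beat += T_int
    beats.append(t_beat)

intervals = np.diff(beats)
print("first produced intervals:", np.round(intervals[:5], 3))
print("learned period: %.3f s (stimulus was %.2f s)" % (T_int, T_stim))
```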
Claudia Clopath
Imperial College London
April 14, 2021
Coordinated hippocampal-thalamic-cortical communication crucial for engram dynamics underneath systems consolidation
Systems consolidation refers to the reorganization of memory over time across brain regions. Despite recent advancements in unravelling engrams and circuits essential for this process, the exact mechanisms behind engram cell dynamics and the role of associated pathways remain poorly understood. Here, we propose a computational model to address this knowledge gap that consists of a multi-region spiking recurrent neural network subject to biologically-plausible synaptic plasticity mechanisms. By coordinating the timescales of synaptic plasticity throughout the network and incorporating a hippocampus-thalamus-cortex circuit, our model is able to couple engram reactivations across these brain regions and thereby reproduce key dynamics of cortical and hippocampal engram cells along with their interdependencies. Decoupling hippocampal-thalamic-cortical activity disrupts engram dynamics and systems consolidation. Our modeling work also yields several testable predictions: engram cells in mediodorsal thalamus are activated in response to partial cues in recent and remote recall and are crucial for systems consolidation; hippocampal and thalamic engram cells are essential for coupling engram reactivations between subcortical and cortical regions; inhibitory engram cells have region-specific dynamics with coupled reactivations; inhibitory input to mediodorsal thalamus is critical for systems consolidation; and thalamocortical synaptic coupling is predictive of cortical engram dynamics and the retrograde amnesia pattern induced by hippocampal damage. Overall, our results suggest that systems consolidation emerges from concerted interactions among engram cells in distributed brain regions enabled by coordinated synaptic plasticity timescales in multisynaptic subcortical-cortical circuits.
Yonatan Loewenstein
The Hebrew University
April 7, 2021
Choice engineering and the modeling of operant learning
Organisms modify their behavior in response to its consequences, a phenomenon referred to as operant learning. Contemporary modeling of this learning behavior is based on reinforcement learning algorithms. I will discuss some of the challenges that these models face, and propose a new approach to model selection that is based on testing models' ability to engineer behavior. Finally, I will present the results of the Choice Engineering Competition – an academic competition that compared the efficacies of qualitative and quantitative models of operant learning in shaping behavior.
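For concreteness, a minimal sketch of the kind of reinforcement-learning model commonly fit to operant learning (delta-rule values with softmax choice; the schedule and parameters are illustrative, not those of the competition):

```python
import numpy as np

# Delta-rule action values with softmax choice, on a two-alternative
# reward schedule.
rng = np.random.default_rng(2)
eta, beta = 0.2, 3.0                       # learning rate, inverse temperature
p_reward = np.array([0.7, 0.3])            # reward probability per alternative
Q = np.zeros(2)
choices = []
for trial in range(500):
    p0 = 1.0 / (1.0 + np.exp(-beta * (Q[0] - Q[1])))   # softmax over 2 options
    a = 0 if rng.random() < p0 else 1
    r = float(rng.random() < p_reward[a])
    Q[a] += eta * (r - Q[a])               # delta rule
    choices.append(a)

print("estimated values:", Q.round(2))     # should approach [0.7, 0.3]
print("preference for richer option (last 200 trials):",
      np.mean(np.array(choices[-200:]) == 0))
```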
Adrienne Fairhall
University of Washington
March 31, 2021
Variability, maintenance and learning in birdsong
The songbird zebra finch is an exemplary model system in which to study trial-and-error learning, as the bird learns its single song gradually through the production of many noisy renditions. It is also a good system in which to study the maintenance of motor skills, as the adult bird actively maintains its song and retains some residual plasticity. Motor learning occurs through the association of timing within the song, represented by sparse firing in nucleus HVC, with motor output, driven by nucleus RA. Here we show through modeling that the small level of variability observed in HVC can result in a network that adapts more easily to change, and is more robust to cell damage or death, than an unperturbed network. In collaboration with Carlos Lois’ lab, we also consider the effect of directly perturbing HVC through viral injection of toxins that affect the firing of projection neurons. Following these perturbations, the song is profoundly affected but recovers almost perfectly. We characterize the changes in song acoustics and syntax, and propose models for HVC architecture and plasticity that can account for some of the observed effects. Finally, we suggest a potential role for inputs from nucleus Uva in helping to control timing precision in HVC.
Sukbin Lim
NYU Shanghai
March 24, 2021
Hebbian learning, its inference, and brain oscillation
Despite the recent success of deep learning in artificial intelligence, the lack of biological plausibility and of labeled data in natural learning still poses a challenge to understanding biological learning. At the other extreme lies Hebbian learning, the simplest local and unsupervised rule, yet one considered computationally less efficient. In this talk, I will introduce a novel method to infer the form of Hebbian learning from in vivo data. Applying the method to data obtained from the monkey inferior temporal cortex during a recognition task indicates how Hebbian learning changes the dynamic properties of the circuits and may promote brain oscillation. Notably, recent electrophysiological data observed in rodent V1 showed that the effect of visual experience on direction selectivity was similar to that observed in monkey data, and provided strong validation of the asymmetric changes of feedforward and recurrent synaptic strengths inferred from monkey data. This may suggest a general learning principle underlying the same computation, such as familiarity detection, across different features represented in different brain regions.
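As a generic illustration of Hebbian familiarity detection (not the talk's inference method), the sketch below applies outer-product Hebbian updates to a set of patterns; the recurrent drive then separates familiar from novel stimuli.

```python
import numpy as np

# Hebbian familiarity detection: after simple Hebbian updates on a set of
# "familiar" patterns, the recurrent drive x . (W x) is larger for familiar
# than for novel stimuli. Purely illustrative parameters.
rng = np.random.default_rng(3)
N, P, eta = 200, 20, 0.1
familiar = rng.standard_normal((P, N)) / np.sqrt(N)

W = np.zeros((N, N))
for x in familiar:                        # Hebbian outer-product updates
    W += eta * np.outer(x, x)

novel = rng.standard_normal((P, N)) / np.sqrt(N)
score = lambda X: np.einsum("pi,ij,pj->p", X, W, X)
print("familiar score: %.3f +/- %.3f" % (score(familiar).mean(), score(familiar).std()))
print("novel score:    %.3f +/- %.3f" % (score(novel).mean(), score(novel).std()))
```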
Sara Solla
Northwestern University
March 17, 2021
Low Dimensional Manifolds for Neural Dynamics
The ability to simultaneously record the activity of tens to tens of thousands of neurons has allowed us to analyze the computational role of population activity as opposed to single-neuron activity. Recent work on a variety of cortical areas suggests that neural function may be built on the activation of population-wide activity patterns, the neural modes, rather than on the independent modulation of individual neural activity. These neural modes, the dominant covariation patterns within the neural population, define a low-dimensional neural manifold that captures most of the variance in the recorded neural activity. We refer to the time-dependent activation of the neural modes as their latent dynamics, and argue that latent cortical dynamics within the manifold are the fundamental and stable building blocks of neural population activity.
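A minimal sketch of extracting neural modes, assuming a synthetic population driven by two latent signals; PCA (computed here via SVD) recovers a two-dimensional manifold that captures most of the variance.

```python
import numpy as np

# Neural modes as dominant covariation patterns: synthesize a population
# whose activity is driven by two latent signals, then recover the
# low-dimensional manifold with PCA. Illustrative setup.
rng = np.random.default_rng(4)
T, N, d = 1000, 80, 2
t = np.linspace(0, 10, T)
latents = np.column_stack([np.sin(2 * np.pi * 0.5 * t),
                           np.cos(2 * np.pi * 0.8 * t)])      # (T, d)
mixing = rng.standard_normal((d, N))                          # neural modes
X = latents @ mixing + 0.1 * rng.standard_normal((T, N))      # (T, N)

Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
var_explained = S**2 / np.sum(S**2)
print("variance captured by first 2 modes: %.1f%%" % (100 * var_explained[:2].sum()))
latent_dynamics = Xc @ Vt[:2].T          # projection onto the neural modes
```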
Maoz Shamir
Ben Gurion University
March 10, 2021
STDP and the transfer of rhythmic signals in the brain
Rhythmic activity in the brain has been reported in relation to a wide range of cognitive processes, and changes in rhythmic activity have been related to pathological states. These observations raise the question of the origin of these rhythms: can the mechanisms responsible for generating these rhythms, and for allowing the rhythmic signal to propagate, be acquired via a process of learning? In my talk I will focus on spike timing dependent plasticity (STDP) and examine under what conditions this unsupervised learning rule can facilitate the propagation of rhythmic activity downstream in the central nervous system. Next, I will apply the theory of STDP to the whisker system and demonstrate how STDP can shape the distribution of preferred phases of firing in a downstream population. Interestingly, in both these cases the STDP dynamics does not relax to a fixed-point solution; rather, the synaptic weights remain dynamic. Nevertheless, STDP allows the system to retain its functionality in the face of continuous remodeling of the entire synaptic population.
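For reference, a minimal pair-based STDP kernel of the kind analyzed in such studies (gains and time constants are illustrative): rhythmic firing in which presynaptic spikes lead postsynaptic ones yields net potentiation, and the reverse ordering yields net depression.

```python
import numpy as np

# Pair-based STDP: potentiation when pre leads post, depression otherwise,
# with exponential temporal windows.
A_plus, A_minus = 0.01, 0.012
tau_plus, tau_minus = 20.0, 20.0          # ms

def stdp_dw(pre_times, post_times):
    """Total weight change summed over all pre-post spike pairs."""
    dt = post_times[:, None] - pre_times[None, :]   # t_post - t_pre
    dw = (np.where(dt > 0, A_plus * np.exp(-dt / tau_plus), 0.0)
          - np.where(dt < 0, A_minus * np.exp(dt / tau_minus), 0.0))
    return dw.sum()

# Rhythmic firing at 10 Hz: post lagging pre by 5 ms -> net potentiation;
# flip the order -> net depression.
pre = np.arange(0.0, 1000.0, 100.0)
print("post lags pre:  %+0.4f" % stdp_dw(pre, pre + 5.0))
print("post leads pre: %+0.4f" % stdp_dw(pre, pre - 5.0))
```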
Tatyana Sharpee
Salk Institute
March 3, 2021
Reading out responses of large neural populations with minimal information loss
Classic studies show that in many species – from leech and cricket to primate – the responses of neural populations can be quite successfully read out using a measure of neural population activity termed the population vector. However, despite its successes, detailed analyses have shown that the standard population vector discards substantial amounts of information contained in the responses of a neural population, and so is unlikely to accurately describe signal communication between parts of the nervous system. I will describe recent theoretical results showing how to modify the population vector expression in order to read out neural responses, ideally without information loss. These results make it possible to quantify the contribution of weakly tuned neurons to perception. I will also discuss numerical methods that can be used to minimize information loss when reading out responses of large neural populations.
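A minimal sketch of the classic readout being discussed (cosine tuning and Poisson noise assumed purely for illustration): each neuron votes for its preferred direction, weighted by its firing rate.

```python
import numpy as np

# The classic population vector: a rate-weighted sum of preferred-direction
# unit vectors.
rng = np.random.default_rng(5)
N = 100
prefs = rng.uniform(0, 2 * np.pi, N)          # preferred directions
r0, rmod = 10.0, 8.0                           # baseline and modulation (Hz)

def responses(theta):
    rates = r0 + rmod * np.cos(theta - prefs)
    return rng.poisson(rates)                  # noisy counts in a 1-s window

theta_true = 1.3
r = responses(theta_true)
pv = np.array([np.sum(r * np.cos(prefs)), np.sum(r * np.sin(prefs))])
theta_hat = np.arctan2(pv[1], pv[0])
print("true %.3f rad, population-vector estimate %.3f rad" % (theta_true, theta_hat))
```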
Gianluigi Mongillo
CNRS, Paris
February 17, 2021
Glassy phase in dynamically balanced networks
We study the dynamics of (inhibitory) balanced networks as we vary (i) the level of symmetry in the synaptic connectivity and (ii) the variance of the synaptic efficacies (the synaptic gain). We find three regimes of activity. For suitably low synaptic gain, regardless of the level of symmetry, there exists a unique stable fixed point. Using a cavity-like approach, we develop a quantitative theory that describes the statistics of the activity at this unique fixed point, and the conditions for its stability. Increasing the synaptic gain, the unique fixed point destabilizes, and the network exhibits chaotic activity for zero or negative levels of symmetry (i.e., random or antisymmetric). Instead, for positive levels of symmetry, there is multi-stability among a large number of marginally stable fixed points. In this regime, ergodicity is broken and the network exhibits non-exponential relaxational dynamics. We discuss the potential relevance of such a “glassy” phase to explain some features of cortical activity.
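The sketch below is a simplified rate-network caricature, not the balanced inhibitory networks of the talk: it only illustrates how to build a random coupling matrix with a tunable symmetry level and how the behavior changes with the gain.

```python
import numpy as np

# Random coupling with tunable symmetry: corr(J_ij, J_ji) = 2*eta/(1+eta^2)
# (eta = 0.5 gives correlation 0.8). This tanh rate sketch only illustrates
# the two control knobs, symmetry and gain.
rng = np.random.default_rng(6)
N, eta = 300, 0.5
A = rng.standard_normal((N, N))
J = (A + eta * A.T) / np.sqrt((1 + eta**2) * N)
np.fill_diagonal(J, 0.0)

def settle(g, T=200.0, dt=0.1):
    x = 0.1 * rng.standard_normal(N)
    for _ in range(int(T / dt)):
        x = x + dt * (-x + g * (J @ np.tanh(x)))
    return x

for g in (0.5, 2.0):
    x = settle(g)
    print("gain g=%.1f: mean |x_i| = %.3f" % (g, np.abs(x).mean()))
# Low gain: activity decays to the trivial fixed point. High gain: a
# nontrivial state (with positive symmetry, typically multi-stable).
```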
Remi Monasson
CNRS, Paris
February 10, 2021
Emergence of long time scales in data-driven network models of zebrafish activity
James Fitzgerald
Janelia Research Campus
February 3, 2021
A geometric framework to predict structure from function in neural networks
The structural connectivity matrix of synaptic weights between neurons is a critical determinant of overall network function. However, quantitative links between neural network structure and function are complex and subtle. For example, many networks can give rise to similar functional responses, and the same network can function differently depending on context. Whether certain patterns of synaptic connectivity are required to generate specific network-level computations is largely unknown. Here we introduce a geometric framework for identifying synaptic connections required by steady-state responses in recurrent networks of rectified-linear neurons. Assuming that the number of specified response patterns does not exceed the number of input synapses, we analytically calculate all feedforward and recurrent connectivity matrices that can generate the specified responses from the network inputs. We then use this analytical characterization to rigorously analyze the solution space geometry and derive certainty conditions guaranteeing a non-zero synapse between neurons.
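A simplified instance of the idea, under illustrative assumptions: where all specified rates are positive, the fixed-point condition of a rectified-linear network is linear in each neuron's incoming weights, so with more synapses than specified patterns the solution set is an affine space; below we pick the least-norm member and verify it.

```python
import numpy as np

# Specify K steady states R (positive) for K inputs X; where rates are
# positive, r = relu(W r + F x) reduces to linear equations in each
# neuron's incoming weights. With N + M unknowns > K equations, lstsq
# returns the least-norm solution; the talk's framework characterizes the
# full solution space, of which this is a single member.
rng = np.random.default_rng(7)
N, M, K = 6, 3, 4                           # neurons, inputs, patterns
R = rng.uniform(0.5, 2.0, (K, N))           # desired steady-state rates
X = rng.uniform(-1.0, 1.0, (K, M))          # corresponding network inputs

G = np.hstack([R, X])                       # (K, N+M) design matrix
WF = np.vstack([np.linalg.lstsq(G, R[:, i], rcond=None)[0] for i in range(N)])
W, F = WF[:, :N], WF[:, N:]

relu = lambda v: np.maximum(v, 0.0)
err = max(np.abs(relu(W @ R[k] + F @ X[k]) - R[k]).max() for k in range(K))
print("max fixed-point error over the K patterns: %.2e" % err)  # ~ 1e-15
# (Existence of the fixed points, not their stability, is what's verified.)
```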
Brent Doiron
University of Chicago
January 27, 2021
Cellular mechanisms behind stimulus-evoked quenching of variability
A wealth of experimental studies show that the trial-to-trial variability of neuronal activity is quenched during stimulus-evoked responses. This fact has helped ground a popular view that the variability of spiking activity can be decomposed into two components. The first is due to irregular spike timing conditioned on the firing rate of a neuron (i.e. a Poisson process), and the second is the trial-to-trial variability of the firing rate itself. Quenching of the variability of the overall response is assumed to be a reflection of a suppression of firing rate variability. Network models have explained this phenomenon through a variety of circuit mechanisms. However, in all cases, from the vantage of a neuron embedded within the network, the quenching of its response variability is inherited from its synaptic input. We analyze in vivo whole-cell recordings from principal cells in layer (L) 2/3 of mouse visual cortex. While the variability of the membrane potential is quenched upon stimulation, the variability of the excitatory and inhibitory currents afferent to the neuron is amplified. This discord complicates the simple inheritance assumption that underpins network models of neuronal variability. We propose and validate an alternative (yet not mutually exclusive) mechanism for the quenching of neuronal variability. We show how an increase in synaptic conductance in the evoked state shunts the transfer of current to the membrane potential, formally decoupling changes in their trial-to-trial variability. The ubiquity of conductance-based neuronal transfer, combined with the simplicity of our model, provides an appealing framework. In particular, it shows how the dependence of cellular properties upon neuronal state is a critical, yet often ignored, factor. Further, our mechanism does not require a decomposition of variability into spiking and firing rate components, thereby challenging a long-held view of neuronal activity.
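A minimal sketch of the shunting argument, with arbitrary units: for an effective-conductance membrane driven by current noise, the stationary voltage variance is sigma^2 / (2 g C), so raising the conductance in the evoked state quenches voltage variability even as the current noise grows.

```python
import numpy as np

# OU membrane with effective conductance g:  C dV/dt = -g V + sigma * xi(t)
# (V measured relative to its mean). Stationary Var(V) = sigma^2 / (2 g C).
rng = np.random.default_rng(8)
C = 1.0

def var_V(g, sigma, T=2000.0, dt=0.01):
    n, V = int(T / dt), 0.0
    vs = np.empty(n)
    for k in range(n):
        V += (dt / C) * (-g * V) + (np.sqrt(dt) / C) * sigma * rng.standard_normal()
        vs[k] = V
    return vs[n // 5:].var()                # discard the transient

# "Evoked" state: conductance increases fourfold and current noise grows,
# yet the voltage variance drops -- shunting decouples the two.
for label, g, sigma in [("spontaneous", 0.5, 1.0), ("evoked", 2.0, 1.5)]:
    print("%-11s g=%.1f sigma=%.1f  Var(V): sim %.3f, theory %.3f"
          % (label, g, sigma, var_V(g, sigma), sigma**2 / (2 * g * C)))
```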