
Paper deep dive

Inhibitory normalization of error signals improves learning in neural circuits

Roy Henha Eyono, Daniel Levenstein, Arna Ghosh, Jonathan Cornford, Blake Richards

Year: 2026 | Venue: arXiv preprint | Area: q-bio.NC | Type: Preprint | Embeddings: 65

Abstract

Normalization is a critical operation in neural circuits. In the brain, there is evidence that normalization is implemented via inhibitory interneurons and allows neural populations to adjust to changes in the distribution of their inputs. In artificial neural networks (ANNs), normalization is used to improve learning in tasks that involve complex input distributions. However, it is unclear whether inhibition-mediated normalization in biological neural circuits also improves learning. Here, we explore this possibility using ANNs with separate excitatory and inhibitory populations trained on an image recognition task with variable luminosity. We find that inhibition-mediated normalization does not improve learning if normalization is applied only during inference. However, when this normalization is extended to include back-propagated errors, performance improves significantly. These results suggest that if inhibition-mediated normalization improves learning in the brain, it additionally requires the normalization of learning signals.

Tags

ai-safety (imported, 100%) · preprint (suggested, 88%) · q-bio.NC (suggested, 92%)

Links

Open PDF directly →


Full Text



Inhibitory normalization of error signals improves learning in neural circuits

Roy Henha Eyono1,2,∗, Daniel Levenstein3, Arna Ghosh4, Jonathan Cornford5, Blake Richards1,2,4,6,7,8

1Mila-Quebec AI Institute, 2School of Computer Science, McGill University, 3Yale University, 4Google, Paradigms of Intelligence Team, 5Leeds University, 6Department of Neurology & Neurosurgery, McGill University, 7Montreal Neurological Institute, McGill University, 8CIFAR Learning in Machines & Brains Program

∗ Correspondence: roy.eyono@mila.quebec

Keywords: Layer Normalization, Inhibition, Credit Assignment, Neural Networks

Abstract

Normalization is a critical operation in neural circuits. In the brain, there is evidence that normalization is implemented via inhibitory interneurons and allows neural populations to adjust to changes in the distribution of their inputs. In artificial neural networks (ANNs), normalization is used to improve learning in tasks that involve complex input distributions. However, it is unclear whether inhibition-mediated normalization in biological neural circuits also improves learning. Here, we explore this possibility using ANNs with separate excitatory and inhibitory populations trained on an image recognition task with variable luminosity. We find that inhibition-mediated normalization does not improve learning if normalization is applied only during inference. However, when this normalization is extended to include back-propagated errors, performance improves significantly. These results suggest that if inhibition-mediated normalization improves learning in the brain, it additionally requires the normalization of learning signals.

1 Introduction

Inhibitory plasticity has traditionally been studied as a means of maintaining excitation–inhibition (E–I) balance in neural circuits (van Vreeswijk and Sompolinsky, 1998; Vogels et al., 2011; Denève and Machens, 2016).
In this work, we examine a complementary function of inhibitory circuits: their role in normalization, in which inhibitory neurons scale the activity of excitatory neurons relative to nearby neurons (Carandini and Heeger, 2012). There are several examples in which inhibitory interneurons facilitate normalization in the brain. For instance, Atallah et al. (2012) showed that manipulating parvalbumin-positive interneurons in mouse V1 produces largely divisive and partially additive changes in pyramidal cell responses, suggesting that this class of interneurons can implement gain control consistent with normalization. Similarly, Carandini and Heeger (2012) described a feedforward normalization circuit in the fly antennal lobe, in which presynaptic local interneurons divisively scaled odor inputs. Similar to the brain, normalization plays an important role in artificial neural networks (ANNs) (Wu and He, 2018). Layer normalization, which normalizes across units within the same layer (Ba et al., 2016), has become a key component of transformer-based architectures and recurrent neural network models (Vaswani et al., 2017; Ba et al., 2016), and it leads to significant improvements in learning, especially for sequence modeling tasks (Xiong et al., 2020). While inhibitory circuits are critical for learning (Richards et al., 2010), it is unclear whether their importance for learning may relate to their role in normalization. This raises a question: Could normalization mediated by inhibitory circuits improve learning in the same way that layer normalization does in ANNs? To understand the potential role of inhibitory normalization in learning, we trained ANNs with separate excitatory and inhibitory populations (EI-networks) on a visual classification task. First, using hard-coded layer normalization, we found that adding layer normalization to the EI-network significantly improved model training in the face of luminance changes. 
We then asked whether layer normalization mediated by inhibitory circuits could provide a similar boost in performance. To answer this we trained the inhibitory cells in the network to perform layer normalization (I-normalization). We observed that, while I-normalization could successfully center and scale neural activity, it did not produce the same benefits for learning as the hard-coded layer normalization operation. Closer examination of the layer normalization operation revealed that its primary contribution to learning lay not in stabilizing activations, but in shaping gradients during backpropagation (Xu et al., 2019). This insight led us to implement an additional lateral inhibitory mechanism to normalize the back-propagated error signals in EI-networks. With this form of I-normalization, the EI-network was able to recapitulate the performance benefits of hard-coded layer normalization. Altogether, our results support the idea that inhibition-mediated normalization could be one of the reasons that inhibition is important for learning in real neural networks. But, our results also suggest that normalizing activity alone would not be sufficient. To obtain the benefits for learning, I-normalization would need to not only normalize the neural activity, but also any signals used for learning. This has interesting implications for inhibitory circuits in the brain that target the apical dendrites of pyramidal neurons, where learning signals may be received. 2 Results 2.1 Layer-normalization improves perceptual invariance To study how I-normalization could impact learning, we used an ANN that enforces Dale’s principle by constraining each neuron to be either purely excitatory or purely inhibitory, so that all outgoing synaptic weights from the same neurons share the same sign (EI-networks; Fig. 1a). Following prior work (Cornford et al., 2021), such networks have been shown to achieve performance comparable to standard ANNs when trained with gradient descent. 
In these networks, the activity of excitatory units at layer ℓ is governed by the interaction between a direct excitatory drive and an indirect, feed-forward inhibitory pathway. The activity in the network is calculated as follows:

h_ℓ^I = W_ℓ^{IE} h_{ℓ−1}^E,
z_ℓ = W_ℓ^{EE} h_{ℓ−1}^E − W_ℓ^{EI} h_ℓ^I,
h_ℓ^E = φ(z_ℓ + b_ℓ),

where h_ℓ^E and h_ℓ^I represent the activity vectors of the excitatory and inhibitory populations at layer ℓ, respectively. The weight matrices are defined as follows: W_ℓ^{EE} is the direct excitatory-to-excitatory connection, W_ℓ^{IE} projects activity from the previous excitatory layer to the current inhibitory population (feedforward inhibition), and W_ℓ^{EI} represents the inhibitory weights that subtractively modulate the excitatory drive. The term b_ℓ denotes the learnable bias vector, and φ(x) = ReLU(x) is the non-linear activation function applied element-wise.

Figure 1: Schematic of the Excitatory-Inhibitory (EI) network with Layer Normalization and the perceptual invariance task. a: Feedforward EI network architecture. Gray units represent the Excitatory (E) population, and purple units represent the Inhibitory (I) population. Outgoing synaptic weights share the same sign (E, +) or (I, −). b: To test perceptual invariance, a shift is applied to each individual image in the FashionMNIST dataset during both training and testing. For every image, a unique constant Δ is sampled within the threshold |Δ| < ε. The figure displays three example augmentations to illustrate how the shift varies across the allowable range.

We trained the EI-network on a modified Fashion MNIST categorization task with shifts in luminosity. We did this because we reasoned that normalization would be especially important for handling input distribution variability.
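As a concrete sketch, the EI forward pass defined above can be written in NumPy with sign-constrained weights. This is illustrative only, not the authors' code: the layer sizes, uniform weight initialization, and random input are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def ei_forward(h_prev, W_EE, W_IE, W_EI, b):
    """One EI layer: feedforward inhibition subtracted from excitatory drive.

    All weight matrices are non-negative; inhibition enters only through the
    explicit minus sign, so every unit's outgoing weights share one sign
    (Dale's principle).
    """
    h_I = W_IE @ h_prev                 # inhibitory population activity
    z = W_EE @ h_prev - W_EI @ h_I      # excitatory pre-activation
    h_E = relu(z + b)                   # excitatory population activity
    return h_E, h_I

# Illustrative sizes: 784 inputs, 100 excitatory, 10 inhibitory units.
n_in, n_E, n_I = 784, 100, 10
W_EE = rng.uniform(0, 0.05, (n_E, n_in))   # E -> E (positive)
W_IE = rng.uniform(0, 0.05, (n_I, n_in))   # E -> I (positive)
W_EI = rng.uniform(0, 0.05, (n_E, n_I))    # I -> E (positive, subtracted)
b = np.zeros(n_E)

h_E, h_I = ei_forward(rng.random(n_in), W_EE, W_IE, W_EI, b)
```

Note that the sign constraint lives in the architecture, not in a projection step: the subtraction is hard-wired, so gradient descent on the (non-negative) weights cannot flip a unit's sign.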
Specifically, for each image, we created a series of luminance augmentations by translating the pixel values by a constant, Δ, sampled within the threshold range Δ ∼ Uniform(−ε, +ε). This was done for every sampled image during both training and test time (see Methods). The variable ε then served as a hyperparameter to adjust the magnitude of the range of luminance shifts in the data distribution (Fig. 1b). Succeeding in this task requires perceptual invariance to changes in luminosity, a capability that animals routinely exhibit and which is critical for navigating a dynamic environment. We first examined the impact of hard-coded layer normalization in these EI-networks (Fig. 1a). Specifically, we applied a centering and scaling operation to the excitatory pre-activations in each layer in order to bring the pre-activation mean to zero and the variance to one:

h_ℓ^E = φ(ẑ_ℓ),
ẑ_ℓ = (z_ℓ − μ_ℓ) / √(σ_ℓ² + c),
μ_ℓ = (1/n_ℓ) Σ_{i=1}^{n_ℓ} z_ℓ^i,
σ_ℓ² = (1/n_ℓ) Σ_{i=1}^{n_ℓ} (z_ℓ^i − μ_ℓ)²,

where n_ℓ is the number of excitatory neurons in layer ℓ, c = 1×10⁻⁵ is a small constant to prevent division by zero, and the bias term has been omitted for simplicity. We found that this layer normalization operation improved training, with the improvement being more pronounced for larger ranges of luminance shifts (Fig. 2a).

Figure 2: Layer normalization (LN) improves perceptual invariance in Excitatory-Inhibitory (EI) networks. a: Test accuracy (Acc %) comparison of EI networks with LN (x-axis) to those without LN (y-axis). Data points represent performance across 30 hyperparameter combinations (layer widths and E/I learning rates) and four luminosity ranges (ε = 0, 0.25, 0.5, 0.75). Points below the dashed diagonal line indicate cases where networks with LN performed better. b: Top-10 test accuracy comparison of an EI network with LN against an Excitatory-only (E-only) network, also with LN.
The box plots summarize the distribution across the same 30 hyperparameter combinations reported in panel a.

To assess the empirical contribution of the inhibitory units, we selectively ablated them before training. In this condition, only the excitatory weights were trained, with layer normalization alone preventing activity blow-up. Despite having fewer parameters than their EI-network counterparts, the E-networks with layer normalization showed no statistically significant difference in performance from the EI-networks with layer normalization (Fig. 2b), suggesting that, for this task, training the inhibitory units on the task loss (cross-entropy) provides little to no benefit when hard-coded layer normalization is present. In other words, inhibition does not meaningfully contribute to task performance under these conditions, suggesting that the inhibitory units' capacity could instead be directed toward implementing layer normalization.

2.2 Learned inhibition normalizes excitatory activity

Figure 3: Inhibitory populations learn to implement layer normalization of excitatory activity. a: Schematic showing how the inhibitory circuit (purple) is trained locally via the L_I-Norm loss (purple lines) to normalize excitatory activity. Excitatory-to-excitatory weights are updated only by the task loss L_Task (dotted left arrow). Forward Pass: Inhibition performs subtractive (−) and divisive (÷) modulation. Backward Pass: Inhibitory gradients enforce layer-normalized excitatory statistics. b: Box-and-whisker plots of the first and second moments of excitatory activations. Each plot compares three conditions: No-Norm, Subtractive-only I-Norm (sub), and I-Norm (as depicted in a). Each box plot summarizes model results aggregated across the sampled range of ε luminosity augmentations.

We next asked whether hard-coded layer normalization could be removed entirely and replaced with layer normalization implemented by inhibitory neurons.
To implement layer normalization with inhibitory neurons, we used two distinct inhibitory populations: one providing subtractive inhibition and the other providing divisive inhibition. This design accounts for the functional diversity of inhibitory subtypes, which can implement divisive or subtractive operations or both depending on the context (Wilson et al., 2012; Pouille et al., 2013; El-Boustani and Sur, 2014). We will refer to these networks as I-normalization (or I-Norm) networks. Their activity was calculated as follows:

h_ℓ^E = φ(z_ℓ),
z_ℓ = (W_ℓ^{EE} h_{ℓ−1}^E − W_ℓ^{EI} h_ℓ^I) / (U_ℓ^{EI} h_ℓ^D),   (1)

where h_ℓ^D = U_ℓ^{IE} h_{ℓ−1}^E represents the divisive inhibition population, h_ℓ^I = W_ℓ^{IE} h_{ℓ−1}^E represents the subtractive inhibition population, and U_ℓ^X, W_ℓ^X represent the synaptic weights associated with each respective inhibitory population. We then introduced a new normalization loss (L_I-Norm), which was applied only to the inhibitory pathway weights (W^{EI}, W^{IE}, U^{EI}, U^{IE}), while the excitatory weights (W^{EE}) were trained solely with the cross-entropy loss from the Fashion MNIST categorization task (L_Task, see Methods). The normalization loss was calculated as:

L_I-Norm = ( (1/n) Σ_{i=1}^n h_i^E )² + ( (1/n) Σ_{i=1}^n (h_i^E)² − 1 )².

This loss was designed to optimize the first and second moments of the excitatory unit activity (h^E). Figure 3a provides a schematic of this setup, showing the distinct inhibitory pathways and how the loss gradients propagate through them. Analysis of excitatory unit activity revealed that the inhibitory circuits effectively learned to normalize excitatory activity (Fig. 3b). In contrast, networks without layer normalization exhibited first and second moments that were off the target (μ = 0, σ² = 1).
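The I-Norm objective above penalizes deviations of the first and second moments of excitatory activity from 0 and 1. A minimal NumPy sketch (illustrative, not the authors' implementation; in the paper this loss is backpropagated only through the inhibitory pathway weights):

```python
import numpy as np

def i_norm_loss(h_E):
    """L_I-Norm: squared deviation of the first moment from 0
    plus squared deviation of the second moment from 1."""
    m1 = h_E.mean()           # (1/n) * sum_i h_i^E
    m2 = (h_E ** 2).mean()    # (1/n) * sum_i (h_i^E)^2
    return m1 ** 2 + (m2 - 1.0) ** 2

# A population with zero mean and unit second moment incurs ~zero loss:
rng = np.random.default_rng(1)
h = rng.standard_normal(10_000)
h = (h - h.mean()) / h.std()   # exactly standardized
print(i_norm_loss(h))           # prints a value very close to 0
```

Because the loss only references the resulting excitatory statistics, the inhibitory weights must learn to predict, from the previous layer's activity, how much subtraction and division will normalize the next layer.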
Figure 3b similarly shows that a purely subtractive inhibitory circuit does not match the layer normalization target statistics as effectively as the segregated inhibitory approach depicted in Figure 3a, despite being trained with the same I-Norm loss objective. These results demonstrate that inhibitory circuits can effectively recapitulate the impact of hard-coded layer normalization on neural activity. Importantly, unlike hard-coded layer normalization, which requires a population-level computation to aggregate statistics, the learned I-normalization depends only on the feedforward activity of the preceding layer (h_{ℓ−1}^E, see Eq. 1). This makes normalization inherently "predictive": the inhibitory population must anticipate excitatory activity, rather than simply enforcing a hard-coded normalization operation.

2.3 Normalizing error signals recapitulates layer normalization function

Figure 4: I-Norm networks struggle to recapitulate the performance of LN in EI networks. a: Test accuracy comparison between LN (x-axis) and I-Norm (y-axis) across hyperparameter and luminosity ranges (ε = 0, 0.25, 0.5, 0.75). Each point represents a single network training run. Points falling below the dashed diagonal line indicate cases where LN achieved higher test accuracy than I-Norm. b: Layer-wise alignment between I-Norm and LN. We quantify alignment as the cosine similarity between I-Norm and LayerNorm (LN) across all network layers for outputs (top) and gradients (bottom). All results correspond to the highest luminosity range (ε = 0.75).

We next examined the performance of the trained I-Norm networks on the perceptual invariance task. Although the I-Norm network normalized excitatory activity, it still underperformed compared to EI-networks with hard-coded layer normalization (Fig. 4a). These observations point to a key mechanism found in layer normalization being absent from I-Norm networks.
One possibility, motivated by observations in the machine learning literature (Xu et al., 2019), is that the effectiveness of hard-coded layer normalization depends on how it transforms error signals during error propagation, rather than how it shapes activity during forward processing. To test this idea, we compared the activity vectors and weight updates during training in I-Norm networks with a version of the network with inhibition removed and hard-coded layer normalization. Despite the strong cosine similarity in the activity statistics (Fig. 4b, top), the weight updates between the two networks were poorly aligned (Fig. 4b, bottom). This suggests that the impact of layer normalization is related to its impact on the weight updates, rather than its impact on forward-pass activity, per se.

Figure 5: Hard-coded LN gradients in I-Norm networks restore LN performance in EI networks. a: Schematic illustrating the Backward Pass of the I-Norm network incorporating GradNorm, from Equation 2. The GradNorm operation is applied to the backward signal (δ) to enforce the LN gradient. b: Average cosine similarity across all network layers. The top and bottom panels show the similarity between LN and I-Norm (with GradNorm) for outputs and gradients, respectively. c: Test accuracy (Acc %) comparison. LN network performance (x-axis) versus I-Norm network with GradNorm (y-axis). Data is shown across 30 hyperparameter initializations and four luminosity ranges (ε = 0, 0.25, 0.5, 0.75). Points clustered along the dashed diagonal line indicate a strong match in performance between the two models.

Formal analysis of the weight updates confirmed that the effect of hard-coded layer normalization on learning arises from its influence on the weight updates themselves (see Appendix, 5.1).
Specifically, for a network with layer normalization, the partial derivative used to update weights, ∂L_Task/∂z_i, corresponds to a normalized version of the propagated error signal, which we denote as δ̂_i for neuron i. This relationship can be expressed as follows (see Appendix for a full derivation):

ΔW^{EE} ∝ −∂L_Task/∂z = −δ̂,
δ̂_i = [1/√(σ² + c)]_{scale} · ( δ_i − [(1/n) Σ_{j=1}^n δ_j]_{center} − [(ẑ_i/n) Σ_{j=1}^n δ_j · ẑ_j]_{decorrelate} ),   (2)

where z_i is the activation of unit i, ẑ_i is its normalized activation, δ_i is the backpropagated error before normalization, δ̂_i is the error after normalization, μ and σ² are the mean and variance across the n units, and c is a small constant for numerical stability. For clarity, we have omitted the layer index ℓ and annotated the terms (scale, center, decorrelate) to highlight their different functional roles. As shown in the equation, layer normalization transforms the propagated error signals in three distinct ways: (1) it rescales the errors according to the variance of the excitatory activity; (2) it centers the errors by removing their mean; and (3) it orthogonalizes the error signal from the activations. We next evaluated I-Norm networks using the gradient modification in Equation 2, which we call GradNorm. In this setting, inhibition handles activity normalization, while the error calculations are hard-coded (Eqn. 2; Fig. 5a). GradNorm significantly improved the alignment of I-Norm weight updates with those of standard layer normalization (Fig.
5b), successfully recapitulating the performance gains associated with hard-coded layer normalization (Fig. 5c). This performance held across all ranges of luminosity, despite the lack of perfect output alignment (Fig. 5b). Together, these analyses and experiments indicate that the performance improvements observed in EI-networks with layer normalization derive from its effect on error signal propagation. This suggests that achieving the benefits of layer normalization in I-Norm networks would potentially require an additional inhibitory population capable of normalizing error signals.

2.4 Mean centering of error signals is the most salient component of credit assignment normalization

Figure 6: Gradient centering, not scaling or decorrelating, drives LN performance recovery in I-Norm networks. a: Box plots of test accuracy comparing Scale, Decorrelate, Center, and Full LN gradient components applied to the I-Norm network. The red dashed line (LN⁺) shows the baseline performance of the hard-coded LN network. Results are separated by luminosity range (ε = 0, 0.75). Significance is indicated comparing components to the LN⁺ baseline. Red tildes (∼) denote outliers performing near random chance, omitted to preserve plot scaling. b: The panels contrast the LN gradient with specific gradient components trained on an I-Norm network: Center (top, orange), Decorrelate (middle, teal), and Scale (bottom, purple).

To better understand which aspects of gradient normalization drive performance, we first examined the contribution of each term in the layer-normalization gradient equation. In particular, we asked whether the performance of I-Norm networks with hard-coded GradNorm hinges more on the scaling, centering, or decorrelation of the error signals (Eqn. 2). To investigate this, we trained I-Norm networks with hard-coded modifications to the gradient calculations (Eqn. 2), applying each of the three operations in isolation.
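The three components of the normalized error in Equation 2 can be sketched as toggleable operations on a backpropagated error vector. This is an illustrative NumPy decomposition under the paper's definitions, not the authors' implementation:

```python
import numpy as np

def grad_norm(delta, z, c=1e-5, scale=True, center=True, decorrelate=True):
    """Transform backpropagated errors delta as layer normalization would
    (Eq. 2), with each component toggleable for ablation."""
    n = z.size
    var = z.var()
    z_hat = (z - z.mean()) / np.sqrt(var + c)   # normalized activations
    out = delta.copy()
    if center:                                   # subtract the mean error
        out = out - delta.mean()
    if decorrelate:                              # remove component along z_hat
        out = out - z_hat * np.dot(delta, z_hat) / n
    if scale:                                    # rescale by activity variance
        out = out / np.sqrt(var + c)
    return out

delta = np.array([1.0, -2.0, 0.5, 0.5])
z = np.array([0.2, 1.0, -0.3, 0.1])
d_hat = grad_norm(delta, z)
# With centering on, the transformed errors are mean-free: the decorrelation
# term has zero mean (mean of z_hat is 0) and scaling is a constant factor.
```

Running each toggle alone mirrors the Scale, Center, and Decorrelate ablations in Figure 6.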
Across luminance levels, we found that centering was critical for performance, while scaling and decorrelation had little impact. Networks trained with centering alone achieved test accuracies comparable to those of hard-coded layer normalization at luminance ranges of 0 and 0.75 (Mann-Whitney U test, p = 0.4641 and p = 0.6099, respectively), whereas networks trained with only scaling or decorrelation performed significantly worse across luminance levels (Fig. 6a). We further confirmed these findings by analyzing the alignment of the gradients in I-Norm networks with those trained using standard layer normalization. Networks trained with the centering operation showed higher alignment with the layer normalization networks compared to networks trained with either scaling or decorrelation alone (Fig. 6b). These results indicate that the centering of gradient calculations induced by layer normalization is the key factor driving its impact on training for the task. Motivated by this, we next aimed to implement the centering operation using an inhibitory circuit.

2.5 Centering of credit assignment signals via lateral inhibition

An inhibitory implementation of the mean-centering operation must (i) pool information across the δ_i's within the layer, and (ii) broadcast back a common signal that each excitatory neuron can subtract from its update. But inhibitory units in real neural circuits cannot all possess the same exact synaptic weights. This raises a fundamental question: can random inhibitory synapses implement gradient centering? Several theories argue for gradient approximation in the brain, each sharing a common requirement: error signals (i.e., δ_i) are propagated between regions or layers (Richards et al., 2019; Guerguiev et al., 2017; Lillicrap et al., 2016; Whittington and Bogacz, 2017; Lillicrap et al., 2020).
Motivated by this, we assume here for the sake of theorizing that such across-layer error signals are available, and focus on a more specific question: given access to the δ_i within a layer, how could an inhibitory circuit implement the mean-centering operation?

Figure 7: Lateral inhibition with fixed synaptic weights implements gradient centering. a: Schematic of the I-Norm network with a lateral inhibition pathway. The inhibitory unit (red) pools and transforms excitatory activity (gray) using fixed, random connections to compute the centering term (mean). b: Test accuracies (Acc %) of the lateral inhibition solution, explicit gradient centering, and the full layer normalization gradient applied to the I-Norm network. The red dashed line (LN⁺) shows the baseline performance of the explicit layer normalization network. Results are averaged across hyperparameter and luminosity ranges (ε = 0, 0.75). Significance is shown relative to the LN⁺ baseline.

Below, we establish a theoretical guarantee showing that a single inhibitory unit with fixed, random synaptic weights can indeed approximate the population mean of error signals.

Theorem 1: Mean estimation via fixed random inhibition. Let {ω_i}_{i=1}^n be i.i.d. positive random variables with E[ω_i] = μ > 0 and Var(ω_i) < ∞ that parameterize a set of n synaptic weights, {ν_{i,n}}_{i=1}^n, whose values sum to 1. Let {δ_i}_{i=1}^n be a bounded sequence of error signals with an empirical mean denoted by δ̄, i.e., δ̄ = (1/n) Σ_{i=1}^n δ_i. Define the normalized synaptic weight onto inhibitory neuron i as ν_{i,n} = ω_i / Σ_{j=1}^n ω_j. Consider the inhibitory error pooling operation, which can be expressed as the dot product between the n-dimensional weight vector ν_n = [ν_{1,n}, …, ν_{n,n}]ᵀ and the error vector δ_n = [δ_1, …, δ_n]ᵀ:

s_n = ν_n · δ_n = Σ_{i=1}^n ν_{i,n} δ_i.
Then, in the limit as n → ∞, the pooled signal converges to the empirical mean:

lim_{n→∞} s_n = δ̄  almost surely.

(End of Theorem 1)

The proof of this theorem is provided in the Appendix. Theorem 1 shows that pooling errors via normalized random synapses becomes equivalent to uniform averaging in the large-n limit. Subtracting this inhibitory signal, therefore, could approximate gradient centering. Motivated by this theoretical guarantee, we construct a simple lateral inhibitory circuit (Fig. 7a), inspired by "blanket" inhibition in cortex, in which a single inhibitory pool targets large populations of excitatory cells (Karnani et al., 2014). We tested this lateral inhibition mechanism for error normalization and compared its performance to networks using explicit centering or full gradient normalization. Across all luminance ranges, test accuracy with lateral inhibition was comparable to the explicit centering and full gradient-normalization baselines, with no significant differences found at luminance range = 0 (Mann-Whitney U test, p = 0.4641) or luminance range = 0.75 (Mann-Whitney U test, p = 0.6099) (Fig. 7b). Together, these results show that the learning benefits of layer normalization can be reproduced using three distinct inhibitory populations: two inhibitory populations that estimate the mean and variance to normalize excitatory activity in the forward pass, and another lateral inhibitory population that centers error signals to normalize gradient updates.

3 Discussion

Layer normalization enhances learning in ANNs by stabilizing activations in the forward pass and regularizing error signals in the backward pass. In our work, we leveraged a neural network with distinct excitatory and inhibitory circuits to understand the potential role of inhibitory normalization in learning in neural circuits.
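The convergence in Theorem 1 is easy to check numerically. The sketch below is an illustration under assumed weight and error distributions (exponential ω, uniform δ), not the paper's simulation:

```python
import numpy as np

rng = np.random.default_rng(0)

def pooled_signal(delta, omega):
    """s_n = sum_i nu_{i,n} * delta_i, with nu_{i,n} = omega_i / sum_j omega_j."""
    nu = omega / omega.sum()    # fixed random weights, normalized to sum to 1
    return np.dot(nu, delta)

for n in (10, 1_000, 100_000):
    delta = rng.uniform(-1, 1, n)                # bounded error signals
    omega = rng.exponential(scale=2.0, size=n)   # i.i.d. positive weights
    err = abs(pooled_signal(delta, omega) - delta.mean())
    print(n, err)   # gap to the empirical mean; should shrink as n grows
```

The gap between the randomly pooled signal and the uniform average shrinks roughly as 1/√n, consistent with the almost-sure convergence the theorem states.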
Using local objectives, the inhibitory circuits learned to stabilize excitatory activations, but lacked the additional learning benefits provided by standard layer normalization. Through ablation studies, we showed that centering the gradient alone is sufficient to recover these benefits, even without explicitly regularizing error signals. Moreover, we demonstrated that such centering can be achieved through lateral inhibitory circuits with fixed random weights, leading to performance improvements approaching those of standard layer normalization. Altogether, our results suggest that if inhibitory normalization in the brain helps learning, it may be because there are multiple inhibitory populations with distinct roles, some related to normalizing activity, and others related to normalizing the error signals used for learning. Previous studies have emphasized how inhibition can center (subtractive) or scale (divisive) neuronal activity: somatic-targeting PV interneurons typically scale down (divide) responses, whereas dendrite-targeting SST interneurons subtract from them (Wilson et al., 2012; Pouille et al., 2013). Consistent with this, Atallah et al. (2012) showed that increasing PV-cell activity produces a linear combination of additive and multiplicative changes in pyramidal firing, effectively a form of gain control with bias while maintaining tuning specificity. These findings collectively support the idea that PV-cells could implement activity normalization. Our work builds on this foundation by training inhibitory cells to perform normalization on excitatory activity, demonstrating that such circuits can recapitulate both the centering and gain-control functions of standard layer normalization. However, previous models of inhibitory normalization have focused primarily on regulating neuronal activity.
Learned divisive normalization frameworks (Burg et al., 2021; Shen et al., 2021), for example, describe how cortical circuits can perform normalization but do not consider how inhibitory mechanisms might also regulate error signals. Our findings address this gap by showing that forward-pass normalization by inhibition alone cannot reproduce the full learning benefits of layer normalization. We propose that inhibitory subcircuits, potentially SST subtypes targeting apical dendrites where error-related signals arrive, or neurogliaform cells (Overstreet-Wadiche and McBain, 2015), can normalize credit assignment signals and thereby support efficient and stable learning. This aligns with recent theories of burst-dependent backpropagation (Payeur et al., 2021), which suggest that inhibitory microcircuits are essential in shaping back-propagated error signals rather than merely transforming neural responses. In this view, inhibition not only stabilizes neuronal activity but also contributes to modulating credit assignment signals, revealing a hypothetical, previously unrecognized role for inhibition. One of the more interesting insights from our work is the ability of inhibitory circuits to infer the mean and variance of downstream excitatory activity from an earlier layer using a simple regularization loss, without direct knowledge of the excitatory activations. In the context of ANNs, predicting these statistics eliminates the need for hard-coded layer normalization. Although this mechanism is not present in standard networks, it could help maintain stable representations under varying sensory conditions, such as large shifts in luminance for image categorization or other distributional changes, reducing the need for explicit gain, bias, or error correction in variance computation.
Reflecting on the role of gradient normalization, a particularly striking result of our study was that centering the gradients alone was sufficient to recapitulate layer normalization's performance on the task, indicating that the mean component of the gradient carries the majority of the functional impact. In practice, the mean computation occasionally reverses the signs of individual gradients, which in the context of gradient descent may produce substantially different learning dynamics. Why such sign changes can improve learning remains unclear and represents an interesting direction for future investigation. Here, we implemented the gradient-centering operation using a lateral inhibitory circuit with fixed random weights. This mechanism is conceptually related to feedback alignment (Lillicrap et al., 2016), in which fixed random feedback weights transmit useful gradient signals. In our case, the fixed random inhibitory weights serve to compute the mean of the gradient rather than the exact backpropagated values. This observation raises further questions for future research, including whether using random inhibitory weights to maintain homeostasis could represent a biologically plausible alternative to approximating precise gradient signals. That said, this work has some important limitations. First, our empirical findings rely primarily on a perceptual invariance task in which the dominant inductive bias is a global shift in pixel intensities. In this task, mean-centering would naturally be the most functionally relevant component of normalization, since subtracting the mean from the activations directly targets the structure of the perturbation. In more complex sensory domains, there is no guarantee that the mean will remain the most salient statistic to normalize.
Natural images, for example, exhibit variance fluctuations, which might require inhibitory circuits to compensate beyond centering the gradient. Although centering the gradient works well for our perceptual invariance task, we have yet to test whether it generalizes to other tasks. Finally, a core conceptual limitation is that the random-weight inhibitory mechanism we employ to center error signals is not biologically grounded; for example, it is linear and operates under the assumption that the random weights $\nu_i$ are normalized to sum to one. Nonetheless, it provides a high-level illustration of how lateral inhibition with random synaptic connections could implement normalization of error signals. Thus, the biological claims of this work are best interpreted as a high-level algorithmic hypothesis, rather than as a proposal for a physiological mechanism. Future work should evaluate inhibitory normalization in tasks where additional statistics beyond the mean (e.g., variance, covariance, or sparsity) are behaviorally relevant. Furthermore, additional studies could build biophysical models that examine the relationship between plasticity rules and normalization in more realistic inhibitory circuits. By expanding both the task domain and the realism of the circuit motifs studied, future work may uncover a more complete and unified account of how normalization driven by inhibition could impact learning in neural circuits.

4 Methods

4.1 Dataset

We constructed a modified version of the Fashion-MNIST classification task that incorporates shifts in image luminosity. All images were normalized to lie within the range $[0, 1]$. For each image, we generated luminance augmentations by adding a constant offset $\Delta$ to all pixel values, where $\Delta \sim \mathrm{Uniform}(-\epsilon, +\epsilon)$. Since the pixel values are bounded between 0 and 1, the augmented images were clamped to remain within this range. The parameter $\epsilon$ served as a hyperparameter controlling the magnitude of the luminance shifts.
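As a reference sketch (helper names are ours, not the authors' released code), the luminance augmentation described above can be written in NumPy as:

```python
import numpy as np

def luminance_augment(images, epsilon, rng):
    """Add one uniform luminance offset per image, then clamp to [0, 1].

    images: array of shape (batch, H, W) with values in [0, 1].
    epsilon: maximum magnitude of the luminance shift.
    """
    # One offset Delta ~ Uniform(-epsilon, +epsilon), shared by all pixels of an image.
    delta = rng.uniform(-epsilon, epsilon, size=(images.shape[0], 1, 1))
    # Pixel values stay bounded in [0, 1] after the shift.
    return np.clip(images + delta, 0.0, 1.0)

rng = np.random.default_rng(0)
batch = rng.uniform(0.0, 1.0, size=(4, 28, 28))
augmented = luminance_augment(batch, epsilon=0.5, rng=rng)
```

With epsilon = 0, the augmentation reduces to the identity, matching the standard Fashion-MNIST setting.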
We conducted experiments with $\epsilon \in \{0, 0.25, 0.5, 0.75\}$, where $\epsilon = 0$ corresponds to the standard Fashion-MNIST dataset. Each model was trained on 60,000 images and evaluated on a test set of 10,000 images, both of which included luminosity-shifted samples. Reported accuracies in the manuscript refer to model performance on the test dataset.

4.2 Layer Normalization

Layer Normalization (LN) (Ba et al., 2016) normalizes the pre-activations of a layer across hidden units for each input sample, rather than across the batch. Given a vector of pre-activations $\mathbf{z} = (z_1, \ldots, z_N)$ to a layer of $N$ hidden units, LN computes the mean and variance over the hidden units for a single sample:

$$\mu = \frac{1}{N}\sum_{i=1}^{N} z_i, \qquad \sigma^2 = \frac{1}{N}\sum_{i=1}^{N}(z_i - \mu)^2.$$

Each activation is then normalized as

$$\hat{z}_i = \frac{z_i - \mu}{\sqrt{\sigma^2 + c}},$$

where $c$ is a small constant to prevent numerical instability. In our experiments, we use $c = 10^{-5}$. We omit the additional gain and bias terms often used in LN, because existing literature indicates that the gain and bias occasionally hurt training (Xu et al., 2019).

4.3 Network Architecture & Loss Functions

We enforce Dale's principle in our networks by constraining each neuron to be exclusively excitatory (E) or inhibitory (I), such that all outgoing synaptic weights share the same sign. Our networks are limited to 2 hidden layers, with the inhibitory layer width set to 10% of the excitatory layer width. The architecture is trained end-to-end using standard gradient descent.

Standard EI Network. In the standard EI network, the activity of the excitatory units $h_\ell^E$ at layer $\ell$ is governed by the subtractive interaction of E and I populations:

$$h_\ell^E = \phi(z_\ell), \quad \text{where} \quad z_\ell = W_\ell^{E} h_{\ell-1}^{E} - W_\ell^{EI} h_\ell^{I}.$$

The inhibitory population activity $h_\ell^I$ is a feedforward function of the preceding excitatory activity:

$$h_\ell^I = W_\ell^{IE} h_{\ell-1}^{E}.$$
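For reference, the LN computation of Section 4.2 (no gain or bias, $c = 10^{-5}$) can be sketched in NumPy:

```python
import numpy as np

def layer_norm(z, c=1e-5):
    """Layer normalization over the hidden units of one sample (Ba et al., 2016)."""
    mu = z.mean()                 # mean over the N hidden units
    var = ((z - mu) ** 2).mean()  # biased variance, as in LN
    return (z - mu) / np.sqrt(var + c)

z = np.array([1.0, 2.0, 3.0, 4.0])
z_hat = layer_norm(z)
```

After normalization, the output has mean 0 and variance approximately 1 over the hidden units, which are exactly the statistics the I-Norm loss later targets.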
Here, $\phi(x) = \mathrm{ReLU}(x)$ is the non-linear activation function. For E-only networks (Fig. 2), the inhibitory computation $h_\ell^I$ is excluded, simplifying the pre-activation to $z_\ell = W_\ell^{E} h_{\ell-1}^{E}$.

The Inhibitory Normalization (I-Norm) Network. To approximate Layer Normalization (LN) using segregated inhibitory populations, we extended the EI-network architecture by introducing a separate inhibitory population dedicated to divisive inhibition, $h_\ell^D$. The resulting I-Norm network equations introduce divisive normalization to the pre-activation $z_\ell$:

$$h_\ell^E = \phi(z_\ell), \quad \text{where} \quad z_\ell = \frac{W_\ell^{E} h_{\ell-1}^{E} - W_\ell^{EI} h_\ell^{I}}{U_\ell^{EI} h_\ell^{D}}. \quad \text{(Eq. 2.2)}$$

The subtractive ($h_\ell^I$) and divisive ($h_\ell^D$) inhibitory populations are computed as:

$$h_\ell^I = W_\ell^{IE} h_{\ell-1}^{E} \quad \text{and} \quad h_\ell^D = U_\ell^{IE} h_{\ell-1}^{E}.$$

The weights associated with the divisive pathway are denoted by $U^{X}$.

Initialization of the Standard EI Network. Following the procedures established by Cornford et al. (2021), all excitatory parameters ($W^{E}, W^{IE}$) are initialized from an exponential distribution: $W^{E}_{ij} \sim \mathrm{Exp}(\lambda^E)$. The inhibitory parameters are initialized to ensure that excitation and subtractive inhibition are balanced, i.e., $\mathbb{E}[z^E_k] = \mathbb{E}[(W^{EI} z^{I})_k]$. Specifically:

- $W^{IE}$ is initialized using the mean of the rows of $W^{E}$: $W^{IE} = \frac{1}{n_e}\sum_{j=1}^{n_e} w^{E}_{j,:}$.
- $W^{EI}$ is initialized from $\mathrm{Exp}(\lambda^E)$ and then row-normalized ($W^{EI}_{i,:} \leftarrow W^{EI}_{i,:} / \sum_k W^{EI}_{ik}$), which approximates the balancing constant $\frac{1}{n_i}$.

Initialization of the I-Norm Divisive Pathway. To maintain consistency and ensure $\mathbb{E}[z^E] = 0$ at initialization, the subtractive inhibition pathway ($W^{EI}, W^{IE}$) is initialized exactly as in the standard EI network. For the divisive pathway ($U^{IE}, U^{EI}$), the goal is to initialize the denominator ($U^{EI} h^{D}$) to approximate the empirical variance of the subtractive pre-activations.
We achieve this by employing a Singular Value Decomposition (SVD) of the effective excitatory weight matrix $W = W^{EX} - W^{EI} W^{IX}$ (where $W^{IX}$ denotes the $W^{IE}$ weights): $W = U \Sigma V^{\top}$. The principal components in $V$ are used to initialize the divisive pathway:

$$U^{IE} = V^{\top}, \quad \text{and} \quad U^{EI}_{ij} = \frac{1}{n_e}.$$

This ensures that the denominator approximates the required empirical variance at initialization. We note that the use of SVD for $U^{IE}$ initialization does not guarantee that all resulting weights are positive, so technically, the divisive pathway in our I-Norm network breaks with Dale's law. However, divisive inhibition in real neurons is likely driven by shunting (Carandini and Heeger, 1994), and shunting inhibition is more likely to be able to switch signs due to chloride dynamics in dendrites (Raimondo et al., 2012).

Training Procedure and Dual Learning Objectives. The network optimizes excitatory and inhibitory synapses under distinct learning objectives to decouple task learning from neural activity normalization. Excitatory connections ($W^{E}$) are trained on the standard cross-entropy classification loss ($\mathcal{L}_{\mathrm{task}}$), following the rule:

$$\Delta W^{E} = -\eta_E \frac{\partial \mathcal{L}_{\mathrm{task}}}{\partial W^{E}}.$$

Inhibitory connections ($W^{EI}, W^{IE}, U^{EI}, U^{IE}$) are optimized with a local loss function ($\mathcal{L}_{\mathrm{I\text{-}Norm}}$) designed to preserve stability by enforcing the statistics of layer normalization (mean $= 0$, variance $= 1$):

$$\mathcal{L}_{\mathrm{I\text{-}Norm}} = \left(\frac{1}{n}\sum_{i=1}^{n} h_i^E\right)^{2} + \left(\frac{1}{n}\sum_{i=1}^{n}(h_i^E)^{2} - 1\right)^{2}.$$

To ensure that stability mechanisms do not interfere with task learning, gradient isolation is enforced using stop-gradient mechanisms:

- The excitatory activations are detached when computing the I-Norm loss.
- The inhibitory outputs are detached during the main forward pass (propagation of $\mathcal{L}_{\mathrm{task}}$), preventing I-Norm gradients from affecting the task objective.
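A minimal sketch of the local I-Norm loss (our own helper names; the stop-gradient bookkeeping of the actual PyTorch implementation is described in the text and omitted here):

```python
import numpy as np

def inorm_loss(h_exc):
    """Local I-Norm loss: penalize deviations from the LN statistics (mean 0, variance 1).

    h_exc: vector of excitatory activations for one layer and one sample.
    In the paper this loss trains only the inhibitory weights; the excitatory
    activations are detached (stop-gradient) when it is computed.
    """
    mean_term = h_exc.mean() ** 2
    var_term = ((h_exc ** 2).mean() - 1.0) ** 2
    return float(mean_term + var_term)

# A vector already matching the LN statistics incurs zero loss;
# a shifted, unscaled vector is penalized.
h_good = np.array([1.0, -1.0, 1.0, -1.0])
h_bad = np.array([3.0, 3.0, 3.0, 3.0])
```

The loss is minimized exactly when the layer's activations have zero mean and unit second moment, the same targets that hard-coded LN enforces.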
4.4 Hyperparameters

Table 1: Hyperparameters used in the experimental evaluation

  Parameter                        Value/Range           Description
  Training Parameters
    Epochs                         50                    Number of training epochs
    Batch size                     32                    Mini-batch size
    Dataset                        FashionMNIST          Source dataset
  Learning Rates
    Excitatory LR (η_exc)          10^-3 to 10^-1        Log-uniform sampling
    Inhibitory LR (η_wei)          10^-5 to 10^-2        Log-uniform sampling
    Inhibitory LR (η_wix)          10^-2 to 10^0         Log-uniform sampling
  Network Architecture
    Hidden layer depth             2                     Fixed
    Hidden layer width             100–500               Uniform sampling
    Output classes                 10                    FashionMNIST classes
  I-Norm Loss
    Weight (λ_I-Norm)              10^-5 to 10^0         Grid search
  Optimization
    Momentum                       0                     SGD momentum
    Weight decay                   0                     L2 regularization
    Algorithm                      SGD                   Optimizer type

We conducted a comprehensive hyperparameter sweep to evaluate performance on the FashionMNIST dataset. Our experimental design employed a grid search strategy combined with random sampling to explore the hyperparameter space systematically (Table 1).

Training Configuration: All experiments were trained for 50 epochs using mini-batches of size 32. We used the FashionMNIST dataset with 10 output classes. Training employed SGD optimization with no momentum or weight decay to isolate the effects of the I-Norm mechanisms.

Learning Rate Sampling: We implemented separate learning rates for excitatory and inhibitory connections, sampled from log-uniform distributions. The excitatory learning rate ($\eta_{exc}$) was sampled from $[10^{-3}, 10^{-1}]$, while the inhibitory learning rates for excitatory-inhibitory ($\eta_{wei}$) and inhibitory-inhibitory ($\eta_{wix}$) connections were sampled from $[10^{-5}, 10^{-2}]$ and $[10^{-2}, 10^{0}]$, respectively. We employed the same inhibitory learning rates for both the subtractive and divisive inhibitory components. This design reflects the biological principle that inhibitory plasticity operates on different timescales than excitatory plasticity.
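The log-uniform sampling in Table 1 can be sketched as follows (an assumed procedure with our own helper name; the authors' sweep scripts are in the public repository):

```python
import numpy as np

def log_uniform(low, high, rng):
    """Sample from a log-uniform distribution over [low, high] (low > 0)."""
    # Uniform in log-space, then exponentiate back: equal probability per decade.
    return float(np.exp(rng.uniform(np.log(low), np.log(high))))

rng = np.random.default_rng(0)
eta_exc = log_uniform(1e-3, 1e-1, rng)  # excitatory LR range
eta_wei = log_uniform(1e-5, 1e-2, rng)  # subtractive inhibitory LR range
eta_wix = log_uniform(1e-2, 1e0, rng)   # divisive inhibitory LR range
```

Sampling uniformly in log-space rather than linearly gives every order of magnitude in the range equal weight, which is the standard choice for learning-rate sweeps.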
Network Architecture: The hidden layer width was randomly sampled from a uniform distribution over $[100, 500]$ neurons, allowing us to evaluate the robustness of the I-Norm mechanism across different network sizes. We maintained a fixed depth of 2 hidden layers.

I-Norm Loss Parameters: We systematically varied the I-Norm loss weight ($\lambda_{\mathrm{I\text{-}Norm}}$) across six values: $10^{-5}$, $10^{-4}$, $10^{-3}$, $10^{-2}$, $10^{-1}$, and $10^{0}$. The default I-Norm weight we show in our results is $10^{-2}$.

Data Augmentation: To test robustness to input variations, we applied brightness jitter with factors of 0, 0.25, 0.5, and 0.75, simulating varying lighting conditions.

4.5 Measures and Analysis

Cosine Similarity Analysis. To measure the alignment of the internal dynamics of the I-Norm network against the explicit $LN^{+}$ baseline, we computed the cosine similarity for both the activity outputs and the gradient signals. We measured the output alignment between the excitatory unit outputs ($h^E_l$) of the I-Norm network and the $LN^{+}$ baseline at layer $l$:

$$\mathrm{output\_alignment}_l = \frac{(h^E_l)^{\mathrm{I\text{-}Norm}} \cdot (h^E_l)^{LN^{+}}}{\big\|(h^E_l)^{\mathrm{I\text{-}Norm}}\big\|_2 \,\big\|(h^E_l)^{LN^{+}}\big\|_2}.$$

We similarly measured the alignment of the gradient signals, specifically the cosine similarity between the gradient of the $\mathcal{L}_{\mathrm{I\text{-}Norm}}$ loss and the task-driven gradient of the $\mathcal{L}_{\mathrm{task}}$ loss, with respect to the excitatory weights $W^E_l$. Note that the $\mathcal{L}_{\mathrm{task}}$ loss is computed with respect to the excitatory-only network with $LN^{+}$.

$$\mathrm{gradient\_alignment}_l = \frac{\nabla_{W^E_l}\mathcal{L}_{\mathrm{I\text{-}Norm}} \cdot \nabla_{W^E_l}\mathcal{L}_{\mathrm{task}}}{\big\|\nabla_{W^E_l}\mathcal{L}_{\mathrm{I\text{-}Norm}}\big\|_2 \,\big\|\nabla_{W^E_l}\mathcal{L}_{\mathrm{task}}\big\|_2}.$$
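Both alignment measures are ordinary cosine similarities between flattened tensors; a sketch:

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two flattened arrays (activations or gradients)."""
    a, b = a.ravel(), b.ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

u = np.array([1.0, 0.0, 1.0])
```

A value of 1 indicates perfectly aligned activity or gradients, -1 opposite directions, and 0 orthogonality.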
Statistical Moments of Neural Activations. To confirm that the I-Norm mechanism successfully learned to enforce normalization constraints, we tracked the first and second moments of the excitatory pre-activations $z_l$ throughout training, where $l$ is the layer index.

- First moment (mean): $\mu_l = \mathbb{E}[z_l] = \frac{1}{N}\sum_{i=1}^{N} z_{l,i}$.
- Second moment: $\sigma_l^2 = \mathbb{E}[z_l^2] = \frac{1}{N}\sum_{i=1}^{N} z_{l,i}^2$.

The objective of the $\mathcal{L}_{\mathrm{I\text{-}Norm}}$ loss is to drive these moments toward the LN targets (mean $= 0$, variance $= 1$).

Statistical Significance Testing. To robustly determine statistical significance for comparisons between different network architectures and training conditions (e.g., accuracy comparisons across hyperparameter runs), we employed the Mann-Whitney U test. This non-parametric test was chosen because the accuracy distributions obtained from hyperparameter sampling may not strictly follow a normal distribution. The results of this test are reported using conventional notation.

4.6 Implementation and Reproducibility

All experiments were implemented in PyTorch and conducted on NVIDIA RTX 8000 GPUs with 16 GB memory allocation. Training was performed using SLURM job arrays with 30 random hyperparameter configurations per experimental condition, where each run required approximately 20 minutes of compute time. The codebase includes automated batch scripts for hyperparameter sweeps and random configuration generation, ensuring consistent experimental protocols across all runs. The full repository is publicly available at: https://github.com/RoyHEyono/inhibitory-normalization.

4.7 Acknowledgments

The authors would like to thank Tom George and Ibrahima Daw for helpful comments on the manuscript. This work was supported by the following sources. RHE: DeepMind fellowship.
DL: FRQNT Strategic Clusters Program (2020-RS4-265502 – Centre UNIQUE – Unifying Neuroscience and Artificial Intelligence – Québec), the Richard and Edith Strauss Postdoctoral Fellowship in Medicine, and the Thomas Kingsley Lawrence Fund. BR: This work was supported by NSERC (Discovery Grant: RGPIN-2020-05105; Discovery Accelerator Supplement: RGPAS-2020-00031), CIFAR (Canada AI Chair; Learning in Machines & Brains Fellowship), and DoD OUSD (R&E) under Cooperative Agreement PHY-2229929 (The NSF AI Institute for Artificial and Natural Intelligence). This research was enabled in part by support provided by Calcul Québec (https://w.calculquebec.ca/en/) and the Digital Research Alliance of Canada (https://alliancecan.ca/en). The authors acknowledge the material support of NVIDIA in the form of computational resources.

5 Appendix

5.1 Derivative of Layer Normalization

We begin by establishing the objective function for our network. We define the cross-entropy loss

$$\mathcal{L}_{\mathrm{task}} = -\sum_i y_i \log(\hat{y}_i),$$

where $y_i$ is the label and $\hat{y}_i = \mathrm{softmax}(\phi(z^L))_i$ is the prediction of the network. The error at the final softmax layer $L$ can be defined as:

$$\delta^L = \nabla_{\hat{y}}\mathcal{L}_{\mathrm{task}} \odot \phi'(z^L),$$

where $\phi$ is the activation function. In the final output layer $L$, $\phi^L$ is defined as the softmax function. For all preceding hidden layers $l < L$, we employ the Rectified Linear Unit (ReLU) activation, $\phi^l(z) = \max(0, z)$. Using the chain rule, we can propagate this error backward through the network. The backpropagated error for earlier layers $l$ is defined as:

$$\delta^l = (W^{E}_{l+1})^{\top}\delta^{l+1} \odot \phi'(z^l).$$

With the error signal established, we can define the gradient for the weight parameters. The weight update for excitatory weights $W^{E}_{l+1}$ with respect to the error $\delta^{l+1}$ is defined as:

$$\frac{\partial \mathcal{L}_{\mathrm{task}}}{\partial W^{E}_{l+1}} = \delta^{l+1}(h^E_l)^{\top}.$$
In standard architectures, layer normalization is often introduced before the activation function. Layer normalization is defined as:

$$\hat{z}^l = \frac{z^l - \mu^l}{\sigma^l}, \qquad \mu^l = \frac{1}{H}\sum_{i=1}^{H} z_i^l, \qquad \sigma^l = \sqrt{\frac{1}{H}\sum_{i=1}^{H}(z_i^l - \mu^l)^2 + c}. \quad (3)$$

To backpropagate through this operation, we must account for the dependency of the normalized output $\hat{z}^l$ on the input vector $z^l$. When layer normalization is applied to layer $l$, the error signal at layer $l$ becomes:

$$\delta^l_{\mathrm{norm}} = \left(\frac{\partial \hat{z}^l}{\partial z^l}\right)^{\!\top}\delta^l. \quad (4)$$

Hence, the weight update for the preceding layer now incorporates this adjusted error:

$$\frac{\partial \mathcal{L}_{\mathrm{task}}}{\partial W^{E}_{l+1}} = \delta^{l+1}_{\mathrm{norm}}(h^E_l)^{\top}.$$

Let us now compute the Jacobian $\frac{\partial \hat{z}^l}{\partial z^l}$ appearing in $\delta^l_{\mathrm{norm}}$. This represents how each component of the normalized vector changes with respect to each component of the input vector. We first apply the quotient rule to $\hat{z}^l$ from Equation 3:

$$\begin{aligned}
\frac{\partial \hat{z}^l}{\partial z^l}
&= \frac{1}{\sigma^l}\,\frac{\partial(z^l - \mu^l)}{\partial z^l} - \frac{z^l - \mu^l}{(\sigma^l)^2}\left(\frac{\partial \sigma^l}{\partial z^l}\right)^{\!\top} \\
&= \frac{1}{\sigma^l}\left(I_H - \frac{\partial \mu^l}{\partial z^l} - \frac{z^l - \mu^l}{\sigma^l}\left(\frac{\partial \sigma^l}{\partial z^l}\right)^{\!\top}\right) \\
&= \frac{1}{\sigma^l}\left(I_H - \frac{\partial \mu^l}{\partial z^l} - \hat{z}^l\left(\frac{\partial \sigma^l}{\partial z^l}\right)^{\!\top}\right).
\end{aligned}$$

To complete the derivation, we solve for the partial derivatives of the mean ($\mu$) and standard deviation ($\sigma$). Note that

$$\frac{\partial \mu^l}{\partial z^l} = \frac{1}{H}\mathbf{1}_H\mathbf{1}_H^{\top}, \qquad \frac{\partial \sigma^l}{\partial z^l} = \frac{1}{H}\left(\frac{z^l - \mu^l}{\sigma^l}\right) = \frac{1}{H}\hat{z}^l.$$

By substituting these intermediate results back into the Jacobian expression, we arrive at the full matrix representation:

$$\frac{\partial \hat{z}^l}{\partial z^l} = \frac{1}{\sigma^l}\left(I_H - \frac{1}{H}\mathbf{1}_H\mathbf{1}_H^{\top} - \frac{1}{H}\hat{z}^l(\hat{z}^l)^{\top}\right).$$
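The closed-form Jacobian can be checked numerically against central finite differences of the LN forward pass (a verification sketch under the simplifying assumption $c = 0$, i.e., $\sigma$ computed without the stabilizing constant):

```python
import numpy as np

def ln(z):
    """Layer normalization forward pass (no gain/bias, c = 0 for this check)."""
    mu = z.mean()
    sigma = np.sqrt(((z - mu) ** 2).mean())
    return (z - mu) / sigma

def ln_jacobian(z):
    """Closed form: (1/sigma) * (I_H - (1/H) 1 1^T - (1/H) z_hat z_hat^T)."""
    H = z.size
    sigma = np.sqrt(((z - z.mean()) ** 2).mean())
    z_hat = ln(z)
    return (np.eye(H) - np.ones((H, H)) / H - np.outer(z_hat, z_hat) / H) / sigma

z = np.array([0.5, -1.2, 2.0, 0.3])
J_analytic = ln_jacobian(z)

# Finite-difference Jacobian; column i approximates d ln(z) / d z_i.
eps = 1e-6
J_numeric = np.stack(
    [(ln(z + eps * e) - ln(z - eps * e)) / (2 * eps) for e in np.eye(z.size)],
    axis=1,
)
```

Because the three terms (identity, mean subtraction, and the $\hat{z}\hat{z}^{\top}$ projection) are all symmetric, the Jacobian is symmetric, so its row/column orientation does not affect the comparison.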
When we re-introduce Equation 4, the following expression for the normalized error emerges:

$$\delta^l_{\mathrm{norm}} = \left(\frac{\partial \hat{z}^l}{\partial z^l}\right)^{\!\top}\delta^l = \frac{1}{\sigma^l}\left(I_H - \frac{1}{H}\mathbf{1}_H\mathbf{1}_H^{\top} - \frac{1}{H}\hat{z}^l(\hat{z}^l)^{\top}\right)\delta^l.$$

For implementation purposes, it is often useful to view the transformation on a per-element basis. Elementwise, this translates to:

$$(\delta^l_{\mathrm{norm}})_i = \sum_{j=1}^{H}\delta^l_j\,\frac{\partial \hat{z}^l_j}{\partial z^l_i} = \frac{1}{\sigma^l}\left(\delta^l_i - \frac{1}{H}\sum_{j=1}^{H}\delta^l_j - \frac{\hat{z}^l_i}{H}\sum_{j=1}^{H}\delta^l_j\,\hat{z}^l_j\right).$$

5.2 Mean Estimation via Fixed Random Inhibition

Theorem 1: Let $\{\omega_i\}_{i=1}^{n}$ be i.i.d. positive random variables with $\mathbb{E}[\omega_i] = \mu > 0$ and $\mathrm{Var}(\omega_i) < \infty$ that parameterize a set of $n$ synaptic weights, $\{\nu_{i,n}\}_{i=1}^{n}$, whose values sum to 1. Let $\{\delta_i\}_{i=1}^{n}$ be a bounded sequence of error signals with empirical mean

$$\bar{\delta} = \frac{1}{n}\sum_{i=1}^{n}\delta_i.$$

Define the normalized synaptic weights

$$\nu_{i,n} = \frac{\omega_i}{\sum_{j=1}^{n}\omega_j},$$

and the inhibitory pooling operation, which can be expressed as the dot product between the $n$-dimensional weight vector $\boldsymbol{\nu}_n = [\nu_{1,n}, \ldots, \nu_{n,n}]^{\top}$ and the error vector $\boldsymbol{\delta}_n = [\delta_1, \ldots, \delta_n]^{\top}$:

$$s_n = \boldsymbol{\nu}_n \cdot \boldsymbol{\delta}_n = \sum_{i=1}^{n}\nu_{i,n}\,\delta_i.$$

Then, in the limit as $n \to \infty$, the pooled signal converges to the empirical mean:

$$\lim_{n\to\infty} s_n = \bar{\delta}.$$

Proof: We first rewrite the pooled signal as a ratio:

$$s_n = \sum_{i=1}^{n}\nu_{i,n}\,\delta_i = \frac{\sum_{i=1}^{n}\omega_i\delta_i}{\sum_{j=1}^{n}\omega_j}.$$

Dividing the numerator and denominator by $n$ gives

$$s_n = \frac{\frac{1}{n}\sum_{i=1}^{n}\omega_i\delta_i}{\frac{1}{n}\sum_{j=1}^{n}\omega_j}.$$

Convergence of the denominator. By Kolmogorov's Strong Law of Large Numbers (SLLN) (Kolmogoroff, 1933),

$$\frac{1}{n}\sum_{j=1}^{n}\omega_j \;\to\; \mu \quad \text{as } n \to \infty.$$

Convergence of the numerator. Decompose:

$$\frac{1}{n}\sum_{i=1}^{n}\omega_i\delta_i = \frac{1}{n}\sum_{i=1}^{n}(\omega_i - \mu)\delta_i \;+\; \mu\,\frac{1}{n}\sum_{i=1}^{n}\delta_i.$$
Because the sequence $\{\delta_i\}$ is bounded, say $|\delta_i| \le C$, we have

$$\mathrm{Var}\big((\omega_i - \mu)\delta_i\big) \le C^2\,\mathrm{Var}(\omega_i) < \infty.$$

Thus the random variables $(\omega_i - \mu)\delta_i$ are independent, mean-zero, and uniformly square-integrable. Kolmogorov's Strong Law of Large Numbers for independent, non-identically distributed random variables therefore implies

$$\frac{1}{n}\sum_{i=1}^{n}(\omega_i - \mu)\delta_i \;\to\; 0.$$

By assumption on $\{\delta_i\}$,

$$\frac{1}{n}\sum_{i=1}^{n}\delta_i \;\to\; \bar{\delta},$$

hence

$$\frac{1}{n}\sum_{i=1}^{n}\omega_i\delta_i \;\to\; \mu\bar{\delta}.$$

Taking the ratio. Since the denominator converges to $\mu > 0$, we obtain

$$s_n = \frac{\frac{1}{n}\sum_{i=1}^{n}\omega_i\delta_i}{\frac{1}{n}\sum_{j=1}^{n}\omega_j} \;\to\; \frac{\mu\bar{\delta}}{\mu} = \bar{\delta}.$$

(End of Proof 1)

References

B. V. Atallah, W. Bruns, M. Carandini, and M. Scanziani (2012). Parvalbumin-expressing interneurons linearly transform cortical responses to visual stimuli. Neuron 73(1), 159–170.

J. L. Ba, J. R. Kiros, and G. E. Hinton (2016). Layer normalization. arXiv preprint arXiv:1607.06450.

M. F. Burg, S. A. Cadena, G. H. Denfield, E. Y. Walker, A. S. Tolias, M. Bethge, and A. S. Ecker (2021). Learning divisive normalization in primary visual cortex. PLoS Computational Biology 17(6), e1009028.

M. Carandini and D. J. Heeger (1994). Summation and division by neurons in primate visual cortex. Science 264(5163), 1333–1336.

M. Carandini and D. J. Heeger (2012). Normalization as a canonical neural computation. Nature Reviews Neuroscience 13(1), 51–62.

J. Cornford, D. Kalajdzievski, M. Leite, A. Lamarquette, D. Kullmann, and B. Richards (2021). Learning to live with Dale's principle: ANNs with separate excitatory and inhibitory units. In ICLR 2021 – 9th International Conference on Learning Representations.

S. Denève and C. K. Machens (2016). Efficient codes and balanced networks. Nature Neuroscience 19(3), 375–382.

S. El-Boustani and M. Sur (2014). Response-dependent dynamics of cell-specific inhibition in cortical networks in vivo. Nature Communications 5(1), 5689.

J. Guerguiev, T. P. Lillicrap, and B. A. Richards (2017). Towards deep learning with segregated dendrites. eLife 6, e22901.

M. M. Karnani, M. Agetsuma, and R. Yuste (2014). A blanket of inhibition: functional inferences from dense inhibitory connectivity. Current Opinion in Neurobiology 26, 96–102.

A. Kolmogoroff (1933). Grundbegriffe der Wahrscheinlichkeitsrechnung.

T. P. Lillicrap, D. Cownden, D. B. Tweed, and C. J. Akerman (2016). Random synaptic feedback weights support error backpropagation for deep learning. Nature Communications 7(1), 13276.

T. P. Lillicrap, A. Santoro, L. Marris, C. J. Akerman, and G. Hinton (2020). Backpropagation and the brain. Nature Reviews Neuroscience 21(6), 335–346.

L. Overstreet-Wadiche and C. J. McBain (2015). Neurogliaform cells in cortical circuits. Nature Reviews Neuroscience 16(8), 458–468.

A. Payeur, J. Guerguiev, F. Zenke, B. A. Richards, and R. Naud (2021). Burst-dependent synaptic plasticity can coordinate learning in hierarchical circuits. Nature Neuroscience 24(7), 1010–1019.

F. Pouille, O. Watkinson, M. Scanziani, and A. J. Trevelyan (2013). The contribution of synaptic location to inhibitory gain control in pyramidal cells. Physiological Reports 1(5).

J. V. Raimondo, H. Markram, and C. J. Akerman (2012). Short-term ionic plasticity at GABAergic synapses. Frontiers in Synaptic Neuroscience 4, 5.

B. A. Richards, T. P. Lillicrap, P. Beaudoin, Y. Bengio, R. Bogacz, A. Christensen, C. Clopath, R. P. Costa, A. de Berker, S. Ganguli, et al. (2019). A deep learning framework for neuroscience. Nature Neuroscience 22(11), 1761–1770.

B. A. Richards, O. P. Voss, and C. J. Akerman (2010). GABAergic circuits control stimulus-instructed receptive field development in the optic tectum. Nature Neuroscience 13(9), 1098–1106.

Y. Shen, J. Wang, and S. Navlakha (2021). A correspondence between normalization strategies in artificial and biological neural networks. Neural Computation 33(12), 3179–3203.

C. van Vreeswijk and H. Sompolinsky (1998). Chaotic balanced state in a model of cortical circuits. Neural Computation 10(6), 1321–1371.

A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin (2017). Attention is all you need. Advances in Neural Information Processing Systems 30.

T. P. Vogels, H. Sprekeler, F. Zenke, C. Clopath, and W. Gerstner (2011). Inhibitory plasticity balances excitation and inhibition in sensory pathways and memory networks. Science 334(6062), 1569–1573.

J. C. Whittington and R. Bogacz (2017). An approximation of the error backpropagation algorithm in a predictive coding network with local Hebbian synaptic plasticity. Neural Computation 29(5), 1229–1262.

N. R. Wilson, C. A. Runyan, F. L. Wang, and M. Sur (2012). Division and subtraction by distinct cortical inhibitory networks in vivo. Nature 488(7411), 343–348.

Y. Wu and K. He (2018). Group normalization. In Proceedings of the European Conference on Computer Vision (ECCV), 3–19.

R. Xiong, Y. Yang, D. He, K. Zheng, S. Zheng, C. Xing, H. Zhang, Y. Lan, L. Wang, and T. Liu (2020). On layer normalization in the transformer architecture. In International Conference on Machine Learning, 10524–10533.

J. Xu, X. Sun, Z. Zhang, G. Zhao, and J. Lin (2019). Understanding and improving layer normalization. Advances in Neural Information Processing Systems 32.