It occurred to me that LayerNorm seems to be implementing something like lateral inhibition, using extreme values of one neuron to affect the activations of other neurons. In biological brains, lateral inhibition plays a key role in many computations, enabling things like sparse coding and attention. Of course, in those systems, input goes through every neuron’s own nonlinear activation function prior to having lateral inhibition applied.
I would be interested in seeing the effect of applying a nonlinearity (such as ReLU, GELU, ELU, etc.) prior to LayerNorm in an artificial neural network. My guess is that it would help prevent neurons with strong negative pre-activations from messing with the output of more positively activated neurons, as happens with pure LayerNorm. Of course, that would limit things to the first orthant for ReLU, although not for GELU or ELU. Not sure how that would affect stretching and folding operations, though.
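As a rough sketch of what I have in mind (PyTorch here purely for illustration; the module name and layer width are made up, not taken from any existing model):

```python
import torch
import torch.nn as nn

class ActThenNorm(nn.Module):
    """Toy block: apply each neuron's own nonlinearity *before* LayerNorm."""
    def __init__(self, dim: int = 64):
        super().__init__()
        self.linear = nn.Linear(dim, dim)
        self.act = nn.GELU()           # could just as well be ReLU or ELU
        self.norm = nn.LayerNorm(dim)  # normalization across the feature dimension

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Each unit passes through its own activation first; only then is the
        # lateral-inhibition-like normalization applied across units.
        return self.norm(self.act(self.linear(x)))

x = torch.randn(8, 64)
print(ActThenNorm()(x).shape)  # torch.Size([8, 64])
```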
By the way, have you looked at how this would affect processing in a CNN, normalizing each pixel of a given layer across all feature channels? I think I’ve tried using LayerNorm in such a context before, but I don’t recall it turning out too well. Maybe I could look into that again sometime.
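If I do revisit it, something like the following is what I'd mean by per-pixel normalization across channels (just a sketch; `ChannelLayerNorm` is a made-up name and I haven't benchmarked this):

```python
import torch
import torch.nn as nn

class ChannelLayerNorm(nn.Module):
    """Sketch: at each spatial position, normalize across the feature channels."""
    def __init__(self, channels: int):
        super().__init__()
        self.norm = nn.LayerNorm(channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (N, C, H, W) -> put channels last so LayerNorm sees only C
        x = x.permute(0, 2, 3, 1)     # (N, H, W, C)
        x = self.norm(x)              # mean/variance taken over C at every pixel
        return x.permute(0, 3, 1, 2)  # back to (N, C, H, W)

feats = torch.relu(torch.randn(2, 32, 16, 16))  # post-activation feature map
print(ChannelLayerNorm(32)(feats).shape)        # torch.Size([2, 32, 16, 16])
```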
That was my first thought as well. As far as I know, the most popular simple model used for this in the neuro literature, divisive normalization, uses a similar but not quite identical formula. Different authors use different variations, but it's something shaped like
$$z_i = \frac{y_i^{\alpha}}{\beta^{\alpha} + \sum_j \kappa_{ij}\, y_j^{\alpha}}$$
where $y_i$ is the unit's activation before lateral inhibition, $\beta$ adds a shift/bias, $\kappa_{ij}$ are the respective inhibition coefficients, and the exponent $\alpha$ modulates the sharpness of the sigmoid (2 is a typical value). Here's an interactive desmos plot with just a single self-inhibiting unit. This function is asymmetric in the way you describe, if I understand you correctly, but to my knowledge it's never gained any popularity outside of its niche. The ML community seems to much prefer Softmax, LayerNorm, et al., and I'm curious whether anyone knows of a deep technical reason for these different choices.
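For anyone who wants to play with it numerically rather than in Desmos, here's a quick sketch of that formula (NumPy; the unit values and the uniform $\kappa$ are arbitrary illustrative choices):

```python
import numpy as np

def divisive_normalization(y, kappa, beta=1.0, alpha=2.0):
    """z_i = y_i^alpha / (beta^alpha + sum_j kappa_ij * y_j^alpha)."""
    y_a = np.power(y, alpha)             # y_j^alpha
    denom = beta ** alpha + kappa @ y_a  # beta^alpha + sum_j kappa_ij * y_j^alpha
    return y_a / denom

# Three units with uniform mutual (and self) inhibition; the values are made up.
y = np.array([0.5, 1.0, 2.0])
kappa = np.full((3, 3), 0.5)
print(divisive_normalization(y, kappa))
```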
I think in feed-forward networks (i.e. networks that don't re-use the same neuron multiple times), having to learn all the $\kappa_{ij}$ inhibition coefficients is too much to ask. RNNs have gone in and out of fashion, and maybe they could use something like this (maybe scaled down a little), but you could achieve similar inhibition effects with multiple different architectures; LSTMs already have multiplication built into them, but in a different way. There is not a particularly deep technical reason for the different choices.
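To put a rough number on that: for a layer of width $d = 1024$, a full $\kappa_{ij}$ matrix means on the order of $d^2 \approx 10^6$ extra learned coefficients per layer, compared with the $2d = 2048$ gain and bias parameters of a standard LayerNorm.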
Awesome visualizations. Thanks for doing this.