I agree that the layer norm does some work here, but I think part of the explanation can be attributed to the inductive bias of the cross-entropy loss. I have been playing around with small toy transformers without layer norm, and they show behavior roughly similar to what is described in this post (I ran different experiments, so I'm not confident in this claim).
My intuition was roughly:
- The softmax doesn't care about the absolute size of the logits, only about their relative differences (adding the same constant to every logit leaves the output unchanged).
- Thus, the network merely has to make the correct logits much larger than the incorrect ones.
- To get the logits, you take the inner product of the activations and the unembedding. The more aligned the activation is with the unembedding direction of the correct class (i.e. the larger their cosine similarity), the bigger that logit.
- Thus, direction matters more than distance (see the numerical sketch after this list).
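To make this concrete, here is a minimal numpy sketch of the points above, using a made-up unembedding matrix `W_U` and activation `x` (toy sizes, not taken from the post or my experiments): the softmax only sees logit differences, and rescaling the activation cannot change which class wins, while changing its direction can.

```python
import numpy as np

def softmax(z):
    z = z - z.max()             # subtracting a constant from every logit changes nothing
    e = np.exp(z)
    return e / e.sum()

rng = np.random.default_rng(0)
W_U = rng.normal(size=(5, 16))  # hypothetical unembedding: 5 classes, d_model = 16
x = rng.normal(size=16)         # hypothetical residual-stream activation

logits = W_U @ x
print(np.allclose(softmax(logits), softmax(logits + 3.0)))  # True: only differences matter

# Rescaling x multiplies every logit by the same factor, so the winning class is unchanged.
print(np.argmax(W_U @ x) == np.argmax(W_U @ (10.0 * x)))    # True

# Pointing x along the unembedding direction of class 2 (keeping the same norm) makes
# class 2 win: the logit is ||x|| * ||W_U[2]|| * cos(theta), and here cos(theta) = 1.
x_rot = W_U[2] / np.linalg.norm(W_U[2]) * np.linalg.norm(x)
print(np.argmax(W_U @ x_rot))                               # almost certainly 2
```

(Rescaling the activation does sharpen or flatten the softmax distribution, so magnitude is not completely irrelevant to the loss, but it cannot change which logit is largest.)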
Layer norm seems to reduce the effect of distance even further, but I think the core inductive bias comes from the cross-entropy loss.
I agree that there are many reasons why directions matter, but distance clearly matters too in the softmax case!
Also, without layer norm, intermediate components of the network could "care more" about the magnitude of the residual stream (whereas here it only matters for the unembedding), while in networks with layer norm the intermediate components literally do not have access to magnitude information!
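As a quick illustration of that last point (a toy PyTorch check, assuming d_model = 16), layer norm maps x and c * x (for c > 0) to essentially the same output, so any component that reads the residual stream through a layer norm cannot recover its magnitude:

```python
import torch

ln = torch.nn.LayerNorm(16)   # toy d_model = 16, default affine parameters
x = torch.randn(16)

out_small = ln(x)
out_big = ln(100.0 * x)       # same direction, 100x the magnitude
print(torch.allclose(out_small, out_big, atol=1e-4))  # True (up to the eps term)
```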
Fair. You've convinced me that the effect is driven more by layer norm than by the cross-entropy loss.