I’m pretty sure the main reason it’s good to think of features/tokens as being directions is because of layer norm. All any “active” component of the neural network “sees” is the direction of the residual stream, and not its magnitude, since layer norm’s output always has a fixed norm (before applying the scale and bias terms). You do mention this in part 1.2, but I think it should be mentioned elsewhere (e.g. Intro, Part 2, Conclusion), as it seems pretty important?
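To make this concrete, here’s a minimal PyTorch sketch (the d_model is arbitrary, and the affine scale/bias is turned off to isolate the normalization step):

```python
import torch

d_model = 8
ln = torch.nn.LayerNorm(d_model, elementwise_affine=False)  # no scale/bias: just the normalization

x = torch.randn(d_model)
# LayerNorm discards the input's magnitude entirely...
print(torch.allclose(ln(x), ln(10 * x), atol=1e-4))   # True: rescaling the input changes nothing
# ...and its output norm is fixed (≈ sqrt(d_model)), regardless of the input's norm.
print(ln(x).norm(), ln(10 * x).norm())                # both ≈ sqrt(8) ≈ 2.83
```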
I agree that the layer norm does some work here but I think some parts of the explanation can be attributed to the inductive bias of the cross-entropy loss. I have been playing around with small toy transformers without layer norm and they show roughly similar behavior as described in this post (I ran different experiments, so I’m not confident in this claim).
My intuition was roughly:
- the softmax doesn’t care about absolute size, only about the relative differences of the logits.
- thus, the network merely has to make the correct logits really big and the incorrect logits small
- to get the logits, you take the inner product of the activations and the unembedding. The more aligned the activations are with the unembedding weights of the correct class (i.e. the larger their cosine similarity), the bigger that logit.
- Thus, direction matters more than distance (the sketch below illustrates both points).
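Here’s a small PyTorch sketch of the softmax and dot-product points (the shapes and values are made up for illustration):

```python
import torch

# Softmax only sees relative differences: adding a constant to every logit changes nothing.
logits = torch.tensor([2.0, 5.0, 1.0])
print(torch.allclose(torch.softmax(logits, dim=-1),
                     torch.softmax(logits + 100.0, dim=-1)))   # True

# The logits themselves are dot products with the unembedding columns:
# logit_i = resid · W_U[:, i] = |resid| · |W_U[:, i]| · cos(theta_i)
torch.manual_seed(0)
d_model, d_vocab = 16, 10
W_U = torch.randn(d_model, d_vocab)    # hypothetical unembedding matrix
resid = torch.randn(d_model)           # hypothetical final residual-stream vector
logits = resid @ W_U                   # shape (d_vocab,)
print(torch.softmax(logits, dim=-1).argmax())  # the winning token is the one with the largest dot product
```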
Layernorm seems to reduce the effect of distance even further, but I think the core inductive bias comes from the cross-entropy loss.
I agree that there are many reasons that directions do matter, but clearly distance would matter too in the softmax case!
Also, without layernorm, intermediate components of the network could “care more” about the magnitude of the residual stream (whereas it only matters for the unembed here), while for networks w/ layernorm the intermediate components literally do not have access to magnitude data!
Fair. You’ve convinced me that the effect is more determined by layer norm than by cross-entropy.
While I think this is important (and I will probably edit the post), I think that even in the unembedding, when computing the logits, the behaviour depends more on direction than on distance.
When I think of distance, I implicitly think Euclidean distance:
$d(x_1, x_2) = |x_1 - x_2| = \sqrt{\sum_i (x_{1,i} - x_{2,i})^2}$
But the actual “distance” used for calculating logits looks like this:
$d(x_1, x_2) = x_1 \cdot x_2 = |x_1|\,|x_2| \cos\theta_{12}$
Which is a lot more similar to cosine similarity:
$d(x_1, x_2) = \hat{x}_1 \cdot \hat{x}_2 = \cos\theta_{12}$
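To make the contrast concrete, here’s a tiny numerical sketch (PyTorch; the vectors are made up for illustration):

```python
import torch

# v points the same way as u but is much bigger; w has u's size but an orthogonal direction.
u = torch.tensor([3.0, 0.0])
v = torch.tensor([300.0, 0.0])
w = torch.tensor([0.0, 3.0])

euclidean = lambda a, b: (a - b).norm()
dot       = lambda a, b: a @ b
cosine    = lambda a, b: (a @ b) / (a.norm() * b.norm())

print(euclidean(u, v), euclidean(u, w))  # 297.0 vs ≈4.24 -> Euclidean says w is "closer" to u
print(dot(u, v),       dot(u, w))        # 900.0 vs 0.0   -> the logit-style metric prefers v
print(cosine(u, v),    cosine(u, w))     # 1.0   vs 0.0   -> direction is what the dot product rewards
```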
I think that because the metric is so similar to the cosine similarity, it makes more sense to think of size + directions instead of distances and points.
Yeah, I agree! You 100% should not think of the unembed as looking for “the closest token”; rather, it looks for the token with the largest dot product (= high cosine similarity + large size).
I suspect the piece would be helpful for people with similar confusions, though I think by default most people already think of features as directions (this is a tacit assumption made pretty much everywhere in mech interp work), especially since the embed/unembed are linear functions.