Huh interesting about the backup heads in GPT-Neo! I would not expect a dropout-less model to have that—some ideas to consider:
the backup heads could have other main functions but incidentally are useful for the specific task we’re looking at, so they end up taking the place of the main heads
thinking of virtual attention heads: once you have a lot of layers, the computations aren’t easily interpretable at the level of individual heads, sort of like how neurons aren’t interpretable in big models due to superposition
Re: GPT-Neo being weird, one of the colabs in the original logit lens post shows that logit lens is pretty decent for standard GPT-2 of varying sizes but basically useless for GPT-Neo, i.e. it outputs extremely unlikely tokens at every layer before the last one. The bigger GPT-Neos are a bit better (some layers are kinda interpretable with logit lens) but still bad. Basically, the residual stream is just in a totally wacky basis until the last layer’s computations, unlike GPT-2, which shows more stability across layers (the whole reason the logit lens works).
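In case it’s useful, here’s roughly the check I mean as a sketch (my own HuggingFace code, not the colab’s; the prompt is arbitrary): project each layer’s residual stream through the final LayerNorm and unembedding and look at the top token.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

tokens = tokenizer("The Eiffel Tower is in the city of", return_tensors="pt")
with torch.no_grad():
    out = model(**tokens, output_hidden_states=True)

# out.hidden_states[i] is the residual stream after block i (index 0 = embeddings)
for layer, resid in enumerate(out.hidden_states):
    logits = model.lm_head(model.transformer.ln_f(resid[:, -1]))  # final LN + unembed
    print(f"layer {layer:2d}: {tokenizer.decode(logits.argmax(-1))!r}")
```

I believe the same attribute names (transformer.ln_f, lm_head) work if you swap in GPTNeoForCausalLM with EleutherAI/gpt-neo-125M, which is where I’d expect the pre-final-layer outputs to be junk.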
One weird thing I noticed with GPT-Neo 125M’s embedding matrix is that the input static embeddings are super concentrated in vector space, avg. pairwise cosine similarity is 0.960 compared to GPT-2 small’s 0.225.
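For reference, the comparison is basically just this (my sketch; I subsample tokens for speed, so exact values will wobble a bit):

```python
import torch
from transformers import AutoModelForCausalLM

def avg_pairwise_cos_sim(model_name, n_sample=2000, seed=0):
    torch.manual_seed(seed)
    W_E = AutoModelForCausalLM.from_pretrained(model_name).get_input_embeddings().weight.detach()
    idx = torch.randperm(W_E.shape[0])[:n_sample]        # random subsample of tokens
    X = torch.nn.functional.normalize(W_E[idx], dim=-1)
    sims = X @ X.T                                        # pairwise cosine similarities
    return sims[~torch.eye(n_sample, dtype=torch.bool)].mean().item()  # off-diagonal mean

print(avg_pairwise_cos_sim("gpt2"))                     # ~0.225 per the numbers above
print(avg_pairwise_cos_sim("EleutherAI/gpt-neo-125M"))  # ~0.96 per the numbers above
```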
On the later layers not doing much, I saw some discussion on the EleutherAI discord that probes can recover really good logit distributions from the middle layers of the big GPT-Neo models. I haven’t looked into this more myself so I don’t know how it compares to GPT-2. Just seems to be an overall profoundly strange model.
Just dug into it more: the GPT-Neo embed just has a large constant offset. Average norm is 11.4, norm of the mean is 11. Avg cosine sim is 0.93 before; after subtracting the mean it’s 0.0024 (avg absolute value of cosine sim is 0.1831).
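The check is roughly this (my sketch): the mean embedding has about the same norm as a typical embedding, and the cosine sims collapse once you subtract it.

```python
import torch
from transformers import AutoModelForCausalLM

W_E = (AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neo-125M")
       .get_input_embeddings().weight.detach())
mean_vec = W_E.mean(dim=0)

print("avg embedding norm:    ", W_E.norm(dim=-1).mean().item())  # ~11.4 per the above
print("norm of mean embedding:", mean_vec.norm().item())          # ~11 per the above

idx = torch.randperm(W_E.shape[0])[:2000]  # subsample so the sims matrix stays small
for label, E in [("raw", W_E[idx]), ("mean-subtracted", W_E[idx] - mean_vec)]:
    X = torch.nn.functional.normalize(E, dim=-1)
    sims = X @ X.T
    off_diag = sims[~torch.eye(len(idx), dtype=torch.bool)]
    print(f"avg cos sim ({label}):", off_diag.mean().item())
```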
Wait, WTF? Are you sure? 0.96 is super high. The only explanation I can see for that is a massive constant offset dominating the cosine sim (which isn’t crazy tbh).
The Colab claims that the logit lens doesn’t work for GPT-Neo, but does work if you include the final block, which seems sane to me. I think that in GPT-2 the MLP0 is basically part of the embed, so it doesn’t seem crazy for the inverse to be true here (the final block acting as part of the unembed), esp if you do the dumb thing of making your embedding + unembedding matrix the same.
I’m pretty sure! I don’t think I messed up anywhere in my code (just a nested for loop lol). An interesting consequence of this is that for GPT-2, applying the logit lens to the embedding matrix (i.e. $\mathrm{softmax}(W_E W_U) = \mathrm{softmax}(W_E W_E^T)$) gives us a near-perfect autoencoder (the top output is the input token itself), but for GPT-Neo it always gets us the token whose embedding has the largest magnitude, since in the dot product $x \cdot y = \lVert x \rVert \lVert y \rVert \cos(\theta)$ the cosine similarity is a useless term.
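Concretely, the check looks something like this (my sketch; since both models tie the embedding and unembedding, $W_U = W_E^T$, so logit-lensing the raw embeddings is just $W_E W_E^T$):

```python
import torch
from transformers import AutoModelForCausalLM

def frac_tokens_recovered(model_name, n_sample=1000, seed=0):
    torch.manual_seed(seed)
    W_E = AutoModelForCausalLM.from_pretrained(model_name).get_input_embeddings().weight.detach()
    idx = torch.randperm(W_E.shape[0])[:n_sample]  # random subsample of tokens
    logits = W_E[idx] @ W_E.T                      # logit lens applied directly to the embeds
    return (logits.argmax(dim=-1) == idx).float().mean().item()

print(frac_tokens_recovered("gpt2"))                     # near 1: acts like an autoencoder
print(frac_tokens_recovered("EleutherAI/gpt-neo-125M"))  # near 0: the max-norm token wins
```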
What do you mean about MLP0 being basically part of the embed btw? There is no MLP before the first attention layer right?
See my other comment—it turns out to be the boring fact that there’s a large constant offset in the GPT-Neo embeddings. If you subtract the mean of the GPT-Neo embed it looks normal. (Though the fact that this exists is interesting! I wonder what that direction is used for?)
I mean that, as far as I can tell (medium confidence), attn0 in GPT-2 isn’t used for much, and MLP0 contains most of the information about the value of the token at each position. E.g., ablating MLP0 completely kills performance, while ablating other MLPs doesn’t. And generally the kinds of tasks that I’d expect to depend on token values depend substantially on MLP0.
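A rough way to see this (my sketch, just a hook-based zero-ablation on a toy prompt, nothing principled):

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()
tokens = tokenizer("The first US president was George Washington.", return_tensors="pt")

def loss_with_mlp_zeroed(layer=None):
    handle = None
    if layer is not None:
        # A forward hook that returns a value replaces the module's output,
        # so this zero-ablates the MLP of the given block.
        handle = model.transformer.h[layer].mlp.register_forward_hook(
            lambda module, inp, out: torch.zeros_like(out))
    with torch.no_grad():
        loss = model(**tokens, labels=tokens["input_ids"]).loss.item()
    if handle is not None:
        handle.remove()
    return loss

print("baseline loss:", loss_with_mlp_zeroed())
for layer in range(12):  # GPT-2 small has 12 blocks
    print(f"MLP{layer} zero-ablated:", loss_with_mlp_zeroed(layer))
```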
Cool that you figured that out; that easily explains the high cosine similarity! It does seem to me that a large constant offset to all the embeddings is interesting, since it means GPT-Neo’s later layers have to do computation taking that into account, which seems not at all like an efficient decision. I will def poke around more.
Interesting on MLP0 (I swear I use zero indexing lol just got momentarily confused)! Does that hold across the different GPT sizes?
Haven’t checked lol