For clarifying my own understanding:
The dot product of a neuron's output weight vector (ie a row of W_out) with the unembedding matrix (in this case embedding.T, since GPT uses tied embeddings) is what directly contributes to the logit outputs (sketched in code after these points).
If the neuron's activation is very high relative to the other activations, its output direction swamps the rest of the residual stream. So artificially scaling up that neuron's input weights in W_in (eg by a factor of 100) should cause the same token to be predicted regardless of the prompt.
This means that neuron A could be more congruent than neuron B, but B could still contribute more to the logits of its token simply because B activates more strongly.
This is useful for mapping features to specific neurons if those features can be described with a single token (like “ an”). I’d like to think more later about finding neurons for multi-token phrases, like a character’s catchphrase.
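To make the first point concrete, here is a minimal sketch of that dot product (the neuron's "congruence" with a token). It assumes TransformerLens and GPT-2 Large; the layer index is my assumption, only the neuron index 892 appears in this thread.

```python
from transformer_lens import HookedTransformer

model = HookedTransformer.from_pretrained("gpt2-large")
LAYER, NEURON = 31, 892  # layer is assumed; neuron 892 is the " an" neuron discussed below

token_id = model.to_single_token(" an")

# Congruence: the neuron's output direction (a row of W_out) dotted with the
# unembedding column for " an" (with tied embeddings this is a row of embedding.T).
w_out_row = model.W_out[LAYER, NEURON]           # shape [d_model]
congruence = w_out_row @ model.W_U[:, token_id]
print(f"congruence of L{LAYER}N{NEURON} with ' an': {congruence.item():.3f}")
```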
This seems all correct to me except possibly this:
So artificially scaling up that neuron's input weights in W_in (eg by a factor of 100) should cause the same token to be predicted regardless of the prompt
W_in holds the input weights for each neuron. So you could increase the activation of the " an" neuron by multiplying the input weights of that neuron by 100 (ie W_in.T[892] *= 100).
And if you increase the " an" neuron’s activation, you will increase the logit of " an". Our data suggests that if the activation is >10, then " an" will almost always be the top prediction.
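In code, that intervention might look like the following. This is a sketch continuing the TransformerLens setup from the earlier block; the prompt is an arbitrary example.

```python
import torch

# Scale the " an" neuron's input weights by 100 (the W_in.T[892] *= 100 above).
# This scales its pre-activation, so its activation should be very large on
# essentially any prompt.
with torch.no_grad():
    model.blocks[LAYER].mlp.W_in[:, NEURON] *= 100

prompt = "I picked up"                        # hypothetical prompt
logits, cache = model.run_with_cache(prompt)
act = cache["post", LAYER][0, -1, NEURON]     # the neuron's activation at the last position
top_token_id = logits[0, -1].argmax().item()
top_token = model.tokenizer.decode([top_token_id])
print(f"activation = {act.item():.1f}, top prediction = {top_token!r}")
```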
If the neuron's activation is very high relative to the other activations, its output direction swamps the rest of the residual stream
I think this is true but not necessarily relevant. On the one hand, this neuron’s activation will increase the logit of " an" regardless of what the other activations are. On the other hand, if the other activations are high, they may reduce the probability of " an", either by increasing other logits directly or by activating neurons in later layers that write the opposite of the " an" direction into the residual stream.
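One way to see the distinction is to compare the neuron's direct contribution to the " an" logit (its activation times its congruence) against the final logit, which also includes every other neuron, head, and downstream effect. A sketch, again on the unmodified model with the same setup as above:

```python
# Direct contribution of the neuron to the " an" logit vs. the final logit.
# This ignores the final LayerNorm scaling, so it is only an approximation.
logits, cache = model.run_with_cache("We should buy")   # hypothetical prompt
act = cache["post", LAYER][0, -1, NEURON]

direct = act * congruence          # what this neuron alone writes toward the " an" logit
total = logits[0, -1, token_id]    # includes all other neurons, heads, and later layers

print(f"direct contribution to ' an' logit: {direct.item():.2f}")
print(f"final ' an' logit:                  {total.item():.2f}")
```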