Thank you for the feedback! I ran some of the experiments you suggested and added them to the appendix of the post.
While I was running some of the experiments, I realized I had made a big mistake in my analysis: in fact, the direction which matters the most (found by RLACE) is the one with large changes (and not the one with crisp changes)! (I’ve edited the post to correct that mistake.)
What I’m actually doing is an affine projection: v ← ((v − m) − ((v − m) · d) d) + m, where v is the activation, d the direction (normalized), and m is “the median in the direction of d”: m = median_v(d · v) d.
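For concreteness, here is a minimal sketch of this edit (assuming PyTorch tensors and a unit-norm d; the function names are just illustrative):

```python
import torch

def median_offset(activations: torch.Tensor, d: torch.Tensor) -> torch.Tensor:
    """m = (median over the dataset of d . v) * d, with d unit-norm.
    activations: (num_samples, hidden_dim), d: (hidden_dim,)."""
    return torch.median(activations @ d) * d

def project_out(v: torch.Tensor, d: torch.Tensor, m: torch.Tensor) -> torch.Tensor:
    """Affine projection v <- ((v - m) - ((v - m) . d) d) + m:
    remove the component of (v - m) along d, then translate back by m."""
    centered = v - m
    return centered - (centered @ d).unsqueeze(-1) * d + m
```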
Looking at the gradient might be a good idea; I haven’t tried it.
About your hypotheses:
Definitely something like that is going on, though I don’t think I capture most of the highly correlated features you might want to catch, since the text I use to find the direction is very basic.
You might be interested in two different kinds of metrics:
Is your classifier doing well on the activations? (This is the accuracy I report; I chose accuracy since it is easier to understand than the loss of a linear classifier.)
Is your model actually outputting “he” in sentences about men, “she” in sentences about women, or is it confused in general about gender? I did measure something like the logit difference of “he” vs “she” (I actually measured probability ratios relative to the bigger probability, to avoid giving too much weight to outliers), and found a “bias” (on the training data) of 0.87 (no bias is 0, max is 1) before the edit, 0.73 after the edit with RLACE, and 0.82 after the edit with INLP. (I can give more detail about the metric if someone is interested.)
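Concretely, a per-sentence score in this spirit looks like the following (simplified sketch of the idea, not my exact code; details of the aggregation over the dataset are omitted):

```python
def pronoun_bias(p_correct: float, p_incorrect: float) -> float:
    """Signed bias for one sentence: p_correct is the model's probability for the
    gender-appropriate pronoun (e.g. "he" in a sentence about a man), p_incorrect
    for the other pronoun. 0 means both are equally likely, 1 means the correct
    pronoun completely dominates, and negative values mean the bias is inverted."""
    return (p_correct - p_incorrect) / max(p_correct, p_incorrect)
```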
Dropout doesn’t seem to be the source of the effect: I ran the experiment with GPT-Neo-125M and found qualitatively similar results (see appendix).
Yes, gender might be hard. I’m open to suggestions for better concepts! Most concepts are not as crisp as gender, which might make things harder. Indeed, the technique requires you to provide “positive” and “negative” sentences, ideally pairs of sentences which differ only by the target concept.
Breaking the model is one of the big things this technique does. But I find it interesting if you are able to break the model “more” when it comes to gender-related subjects, and it looks like this is happening (generations diverge more slowly when they are not about gender). One observation providing evidence for “you’re mostly breaking the model”: in experiments where the concept is political left vs. political right (see notebook in appendix), the model edited for gender produced weird results.
Great idea, swapping works remarkably well!
Eyeballing the completions, the “swap” works better than the projection without breaking the model more than the projection does (see appendix), and using the metric I described above, you get a bias of −0.29 (inverted bias) for the model edited with RLACE and 0.68 for the model edited with INLP.
You can also use the opposite idea to increase bias (multiply the importance of the direction by 2), and this somewhat works: you get a bias of 0.83 (down from 0.87) with RLACE, and 0.90 (up from 0.87) with INLP. INLP did increase the bias; RLACE has probably broken too many things to be able to be more biased than “reality”.
I think this is evidence for the fact that this technique is not just breaking the model.
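For reference, the projection, the “swap”, and the ×2 amplification can all be written as the same edit with a different scaling factor on the component of (v − m) along d; here is a minimal sketch (same assumptions as the projection sketch above, reading the “swap” as flipping the sign of that component):

```python
import torch

def edit_along_direction(v: torch.Tensor, d: torch.Tensor, m: torch.Tensor,
                         scale: float) -> torch.Tensor:
    """Rescale the component of (v - m) along the unit direction d by `scale`:
    scale = 0  -> the projection (remove the component entirely)
    scale = -1 -> the "swap" (reflect across the median along d)
    scale = 2  -> "multiply the importance of the direction by 2"
    """
    centered = v - m
    component = (centered @ d).unsqueeze(-1) * d
    return centered + (scale - 1.0) * component + m
```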