List of some short mech interp project ideas (see also: medium-sized and longer ideas). Feel encouraged to leave thoughts in the replies below!
Directly testing the linear representation hypothesis by making up a couple of prompts which contain a few concepts to various degrees, and testing:
Does the model indeed represent intensity as magnitude? Or are there separate features for differently intense versions of a concept? Finding the right prompts is tricky: it makes sense that friendship and love are different features, but maybe “my favourite coffee shop” vs “a coffee shop I like” are different intensities of the same concept.
Do unions of concepts indeed correspond to addition in vector space? I.e. is the representation of “A and B” vector_A + vector_B? I wonder if there’s a way to generate a big synthetic dataset here, e.g. variations of “the soft green sofa” → “the [texture] [colour] [furniture]”, and do some statistical check.
Mostly I expect this to come out positive, and not to be a big update, but seems cheap to check.
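A minimal sketch of what the statistical check for the “A and B” case could look like. The vectors here are synthetic stand-ins; in a real experiment they would be residual-stream activations read out of a model (e.g. with TransformerLens), and all names below are hypothetical:

```python
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Stand-ins for per-concept activation vectors; a real version would
# extract these from the model for "soft", "green", "sofa" prompts.
rng = np.random.default_rng(0)
d = 256
vec_soft, vec_green, vec_sofa = rng.standard_normal((3, d))

# Hypothetical activation for "the soft green sofa": under the linear
# representation hypothesis it should be roughly the sum of the parts,
# up to noise from whatever else the model represents.
vec_phrase = vec_soft + vec_green + vec_sofa + 0.3 * rng.standard_normal(d)

sim = cosine(vec_phrase, vec_soft + vec_green + vec_sofa)
```

Across many phrase variations one could collect these similarities and compare them against a random-direction baseline to get the statistical check.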
SAEs vs Clustering: How much better are SAEs than (other) clustering algorithms? Previously I worried that SAEs are “just” finding the data structure, rather than features of the model. I think we could try to rule out some “dataset clustering” hypotheses by testing how much structure there is in the dataset of activations that one can explain with generic clustering methods. Will we get 50%, 90%, 99% variance explained?
I think a second spin on this direction is to look at the “interpretability” / “mono-semanticity” of such non-SAE clustering methods. Do the clusters appear similarly interpretable? This would address the concern that many things look interpretable, and that we therefore shouldn’t be surprised by SAE directions looking interpretable. (Related: Szegedy et al., 2013 look at random directions in an MNIST network and find that they look interpretable.)
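A rough sketch of the variance-explained baseline, using plain k-means on a synthetic stand-in for an activation dataset (a real version would run on cached model activations, and one would sweep the number of clusters the way one sweeps SAE dictionary size):

```python
import numpy as np

def kmeans_variance_explained(X, k, iters=50, seed=0):
    """Fraction of variance explained by assigning each point to the
    nearest of k centroids -- a crude 'dataset clustering' baseline."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest centroid, then update centroids.
        d2 = ((X[:, None, :] - centroids[None]) ** 2).sum(-1)
        labels = d2.argmin(1)
        for j in range(k):
            if (labels == j).any():
                centroids[j] = X[labels == j].mean(0)
    sse = ((X - centroids[labels]) ** 2).sum()
    total = ((X - X.mean(0)) ** 2).sum()
    return 1 - sse / total

# Synthetic stand-in for activations: a few well-separated Gaussian blobs.
rng = np.random.default_rng(0)
centers = 3 * rng.standard_normal((5, 32))
X = np.concatenate([c + rng.standard_normal((200, 32)) for c in centers])
frac = kmeans_variance_explained(X, k=5)
```

Comparing this fraction (50%? 90%? 99%?) against the variance explained by an SAE of comparable capacity is the interesting number.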
Activation steering vs prompting: I’ve heard the view that “activation steering is just fancy prompting”, which I don’t endorse in its strong form (e.g. I expect it to be much harder for the model to ignore activation steering than to ignore prompt instructions). However, it would be nice to have a prompting baseline for e.g. “Golden Gate Claude”. What if I insert “<system> Remember, you’re obsessed with the Golden Gate Bridge” after every chat message? I think this project would be worthwhile even without the steering comparison.
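The prompting baseline is simple enough to sketch: re-insert the obsession reminder after every user message before each model call. The message format and reminder text below are illustrative, not tied to any specific chat API:

```python
# Reminder message to splice in after every user turn (illustrative text).
REMINDER = {"role": "system",
            "content": "Remember, you're obsessed with the Golden Gate Bridge."}

def with_reminders(messages):
    """Return a copy of the conversation with the reminder inserted
    after every user message."""
    out = []
    for msg in messages:
        out.append(msg)
        if msg["role"] == "user":
            out.append(dict(REMINDER))
    return out

chat = [
    {"role": "user", "content": "What's your favourite landmark?"},
    {"role": "assistant", "content": "(model reply)"},
    {"role": "user", "content": "Tell me about bread."},
]
augmented = with_reminders(chat)
```

One could then compare how strongly this baseline shifts behaviour versus the steering vector, e.g. by rating how often each version mentions the bridge on off-topic prompts.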