I’m glad you like it! Yeah, the lack of a dataset is the thing that excites me about this kind of approach: it lets us validate our mechanistic explanations via partial “dataset recovery”, which I find really compelling. It’s a lot slower going, and may only work out for the first few layers, but it makes for a rewarding loop.
The utility of SAEs is that they tell us, in an unsupervised way, that there is a feature coding for “known entity”; this project doesn’t use SAEs explicitly. Instead, I look for sparse sets of neurons that activate highly on “known entities”. Neel Nanda and Wes Gurnee’s sparse probing work is the inspiration here: https://arxiv.org/abs/2305.01610
But we only know to look for this sparse set of neurons because the SAEs told us the “known entity” feature exists, and it’s only because we know this feature exists that we expect neurons identified on a small set of entities to generalize. (I think I looked at <5 examples and identified Neuron 0.2946, though admittedly I kinda cheated by double-checking on Neuronpedia.)
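For concreteness, here’s a minimal sketch of that sparse probing step, assuming you’ve already cached MLP activations for a handful of labelled prompts (the activations, labels, and hyperparameters below are placeholders, not my actual setup):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Placeholder activations: [n_prompts, n_neurons] MLP activations at the
# final token, with labels marking whether the prompt names a known entity.
rng = np.random.default_rng(0)
acts = rng.standard_normal((8, 3072)).astype(np.float32)
labels = np.array([1, 1, 1, 1, 0, 0, 0, 0])

# An L1 penalty drives most coefficients to zero, leaving a sparse set of
# candidate neurons to inspect by hand (e.g. on Neuronpedia).
probe = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
probe.fit(acts, labels)
print(np.nonzero(probe.coef_[0])[0])
```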
If you count linear probing as a non-interp strategy, you could find the linear direction associated with “entity detection” and then just run the model over all 50257^2 possible pairs of input tokens. The mech interp approach still has to deal with 50257^2 pairs of inputs, but we can use our circuit analysis to save significant time by avoiding the model overhead, meaning we get the list of bigrams pretty much instantly. The circuit analysis also tells us we only have to look at the previous 2 tokens to determine the broad component of the “entity detection” direction, which we might not know a priori. But I wouldn’t say this is a project only interp can do, just that interp maybe speeds it up significantly.
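To illustrate the model-overhead point, here’s a hedged sketch of that non-interp baseline, assuming TransformerLens; `probe_dir` is a stand-in for a trained probe, and the pair grid is truncated for the demo:

```python
import torch
from transformer_lens import HookedTransformer

@torch.no_grad()
def score_pairs(model, probe_dir, pairs, batch_size=512):
    # Every batch costs a full forward pass, which is the overhead the
    # circuit-based shortcut avoids.
    scores = []
    for i in range(0, len(pairs), batch_size):
        _, cache = model.run_with_cache(pairs[i:i + batch_size])
        resid = cache["resid_post", 0][:, -1]  # residual stream after layer 0
        scores.append(resid @ probe_dir)       # projection onto probe direction
    return torch.cat(scores)

model = HookedTransformer.from_pretrained("gpt2")
pairs = torch.cartesian_prod(torch.arange(100), torch.arange(100))  # demo subset of the 50257^2 grid
probe_dir = torch.randn(model.cfg.d_model)  # stand-in for a trained "entity detection" probe
print(score_pairs(model, probe_dir, pairs)[:5])
```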
[Note: the reason we need 50257^2 inputs even in the mechanistic approach is that I don’t know of a good method for extracting the sparse set of large EQKE entries without computing the whole matrix. If we could find a way to do this, we could save significant time. But it’s not necessarily a bottleneck for analysing n-grams, because the 50257^2 complexity comes from the quadratic form in attention, not from the fact that we are looking at bigrams. So if we found a circuit for n-grams, it wouldn’t necessarily take us 50257^n time to list them, whereas non-interp approaches would scale like 50257^n.]
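A sketch of what I mean, assuming TransformerLens and GPT-2 small (the layer/head indices are placeholders for wherever the bigram circuit lives). We still compute every EQKE entry, but block by block, with no forward passes, keeping only the large ones:

```python
import torch
from transformer_lens import HookedTransformer

model = HookedTransformer.from_pretrained("gpt2")
layer, head = 0, 0  # placeholder: the head implementing the bigram circuit

# EQKE = (W_E W_Q)(W_E W_K)^T, ignoring positional embeddings and LayerNorm
# for simplicity. Its large entries index high-attention (query, key) bigrams.
EQ = model.W_E @ model.W_Q[layer, head]  # [d_vocab, d_head]
KE = model.W_E @ model.W_K[layer, head]  # [d_vocab, d_head]

d_vocab, block_size, k = model.cfg.d_vocab, 256, 100
candidates = []
for start in range(0, d_vocab, block_size):
    block = EQ[start:start + block_size] @ KE.T  # a row block of EQKE
    vals, flat = block.flatten().topk(k)
    rows, cols = flat // d_vocab, flat % d_vocab
    candidates += [(start + r.item(), c.item(), v.item())
                   for r, c, v in zip(rows, cols, vals)]
candidates.sort(key=lambda t: -t[2])
print(candidates[:10])  # global top bigrams by EQKE score
```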
I mostly agree with this analysis. But I think there are better safety cases for interp beyond enumeration of features. As you say, there might be shallow copies of the dataset inside models, but this is insufficient for safety approaches based upon ruling out any ‘negative features’, because models only need to store enough information from the dataset to induce the behaviour, not a recoverable copy of it.
But the enumeration-of-features approach is naive anyway, because it ignores any compositional/dense structure in models, which is exactly the kind of thing we would expect competent models to develop.
Something I think interpretability is uniquely equipped to do, however, is find high-level structures in models. If, for instance, models have generalized patterns of thinking, or approaches to solving problems, then we should expect these to be encoded in the model’s weights, and, precisely because they generalize, not to be tied to the specifics of particular datapoints. These are arguably the more safety-relevant structures to uncover, because we expect them to be the source of model capabilities.
Throwing lots of data at the wall, as with SAEs, can help uncover such structures, because it can surface, in an unsupervised manner, intermediate representations arising from those structures. But taking these intermediate representations as atomic, as opposed to clusters in the output of general structures, is a mistake. IMO the pipeline should look something like: find an SAE feature that seems to belong to a general category of features, and then start the real mechanistic work of uncovering what general structure gives rise to that category.
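As a hedged sketch of that first step (the decoder file and cluster count are assumptions, not a recipe): cluster the SAE’s decoder directions, treat each cluster as a candidate “general category” of features, and only then start asking what upstream structure produces the whole cluster.

```python
import numpy as np
from sklearn.cluster import KMeans

# W_dec: [n_features, d_model] decoder matrix of a trained SAE (assumed artefact).
W_dec = np.load("sae_decoder.npy")
dirs = W_dec / np.linalg.norm(W_dec, axis=1, keepdims=True)

# Clusters of similar decoder directions are candidate categories of features.
clusters = KMeans(n_clusters=64, n_init=10).fit_predict(dirs)
category = np.nonzero(clusters == clusters[0])[0]
print(category[:10])  # features to study together, asking what circuit writes all of them
```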