I run the White Box Evaluations Team at the UK AI Security Institute. This is primarily a mechanistic interpretability team focussed on estimating and addressing risks associated with deceptive alignment. I’m a MATS 5.0 and ARENA 1.0 alumnus. Previously, I co-founded the AI safety research infrastructure org Decode Research and conducted independent research into mechanistic interpretability of decision transformers. I studied computational biology and statistics at the University of Melbourne in Australia.
Joseph Bloom
Neuronpedia has an API (copying from a message Johnny recently wrote to someone else):
“Docs are coming soon but it’s really simple to get JSON output of any feature. Just add “/api/feature/” right after “neuronpedia.org”. For example, for this feature: https://neuronpedia.org/gpt2-small/0-res-jb/0
the JSON output of it is here: https://www.neuronpedia.org/api/feature/gpt2-small/0-res-jb/0
(Both are GET requests so you can do it in your browser. Note the additional “/api/feature/”.) I would prefer you not do this 100,000 times in a loop though—if you’d like a data dump we’d rather give it to you directly.” Feel free to join the OSMI slack and post in the Neuronpedia or Sparse Autoencoder channels if you have similar questions in the future :) https://join.slack.com/t/opensourcemechanistic/shared_invite/zt-1qosyh8g3-9bF3gamhLNJiqCL_QqLFrA
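For convenience, here’s a minimal Python sketch of the same GET request. The URL pattern is exactly the one described above; the use of the `requests` library and the printed keys are illustrative, not official documentation.

```python
# Minimal sketch: fetch the JSON for a Neuronpedia feature via the /api/feature/ route.
import requests

model_id = "gpt2-small"
sae_id = "0-res-jb"
feature_index = 0

url = f"https://www.neuronpedia.org/api/feature/{model_id}/{sae_id}/{feature_index}"
response = requests.get(url)
response.raise_for_status()

feature = response.json()
# The exact schema isn't documented yet, so just inspect what comes back.
print(list(feature.keys()))
```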
I’m a little confused by this question. What are you proposing?
Lots of thoughts. This is somewhat stream of consciousness as I happen to be short on time this week, but feel free to follow up again in the future:
Anthropic tested their SAEs on a model with random weights here and found that the results look noticeably different in some respects from SAEs trained on real models: “The resulting features are here, and contain many single-token features (such as “span”, “file”, ”.”, and “nature”) and some other features firing on seemingly arbitrary subsets of different broadly recognizable contexts (such as LaTeX or code).” I think further experiments like this would help: experiments which identify classes of features that are highly non-trivial, that don’t occur in SAEs trained on random models (or random models with the W_E / W_U from a real model), or that can be related to interpretable circuitry. (A rough sketch of the random-weights control I have in mind follows these points.)
I should note that, to the extent that SAEs could be capturing structure in the data, the model might want to capture that structure too, so it’s not clear to me what you would observe that would distinguish SAEs capturing structure in the data which the model itself doesn’t utilise. Working this out seems important.
Furthermore, the embedding space of LLMs is highly structured already, and since we lack good metrics, it’s hard to say how much “marginal” structure SAEs capture over existing methods. So quantifying what we mean by structure seems important too.
The specific claim that SAEs learn features which are combinations of true underlying features is a reasonable one given the L1 penalty, but I think it’s far from obvious how we should think about this in practice.
I’m pretty excited about deliberate attempts to understand where SAEs might be misleading or not capturing information well (eg: here or here). It seems like there are lots of technical questions that are slightly more low level that help us build up to this.
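On the first point, here’s a rough sketch of the kind of random-weights control I have in mind (assumptions: TransformerLens is available, and the choice of which parameters to keep and how to re-initialise the rest is illustrative rather than a recommendation):

```python
# Sketch: build a "random model" control that keeps W_E / W_U (and positional embeddings)
# from the real model, then re-initialises everything else. Activations from this model
# can be used to train a control SAE for comparison against one trained on the real model.
import torch
from transformer_lens import HookedTransformer

model = HookedTransformer.from_pretrained("gpt2")

with torch.no_grad():
    for name, param in model.named_parameters():
        if any(key in name for key in ["embed", "unembed", "pos_embed"]):
            continue  # keep the (un)embedding matrices from the real model
        # Re-initialise with Gaussian noise matched to the original parameter's scale.
        param.copy_(torch.randn_like(param) * param.std())
```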
So in summary: I’m a bit confused about what we mean here and think there are various technical threads to follow up on. Knowing which of them would actually resolve this requires that we define our terms more thoroughly.
Thanks for asking:
Currently we load SAEs into my codebase here. How hard this is will depend on how different your SAE architecture/forward pass is from what I currently support. We’re planning to support users / do this ourselves for the first n users and once we can, we’ll automate the process. So feel free to link us to huggingface or a public wandb artifact.
We run the SAEs over random samples from the same dataset on which the model was trained (with activations drawn from forward passes of the same length); the sketch after this list shows the general shape of this. Callum’s SAE vis codebase has a demo where you can see how this works.
Since we’re doing this manually, the delay will depend on the complexity of handling the SAEs and things like whether they’re trained on a new model (not GPT2 small) and how busy we are with other people’s SAEs or other features. We’ll try our best and keep you in the loop. Ballpark is 1–2 weeks, not months. Possibly days (especially if the SAEs are very similar to those we are hosting already). We expect this to be much faster in the future.
We’ve made the form in part to help us estimate the time / effort required to support SAEs of different kinds (eg: if we get lots of people who all have SAEs for the same model or with the same methodological variation, we can jump on that).
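For the activation-gathering step mentioned above, here’s a sketch of the general shape (the dataset, hook point, context length, and placeholder SAE are assumptions, not our actual pipeline):

```python
# Sketch: run an SAE over activations from random samples of the model's training distribution.
import torch
from datasets import load_dataset
from transformer_lens import HookedTransformer

model = HookedTransformer.from_pretrained("gpt2")
dataset = load_dataset("stas/openwebtext-10k", split="train")  # stand-in for the training corpus

hook_name = "blocks.8.hook_resid_pre"  # placeholder: whichever hook point the SAE was trained on
context_length = 128

class PlaceholderSAE(torch.nn.Module):
    """Stand-in for a trained SAE: a random encoder with a ReLU, for illustration only."""
    def __init__(self, d_model: int = 768, d_sae: int = 768 * 8):
        super().__init__()
        self.W_enc = torch.nn.Linear(d_model, d_sae)

    def encode(self, x: torch.Tensor) -> torch.Tensor:
        return torch.relu(self.W_enc(x))

sae = PlaceholderSAE()  # replace with your trained SAE

texts = dataset.shuffle(seed=0).select(range(32))["text"]
tokens = model.to_tokens(texts)[:, :context_length]

with torch.no_grad():
    _, cache = model.run_with_cache(tokens, names_filter=hook_name)
    activations = cache[hook_name]          # [batch, position, d_model]
    feature_acts = sae.encode(activations)  # however your SAE exposes its encoder
```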
It helps a little but I feel like we’re operating at too high a level of abstraction.
with the mech interp people where they think we can identify values or other high-level concepts like deception simply by looking at the model’s linear representations bottom-up, where I think that’ll be a highly non-trivial problem.
I’m not sure anyone I know in mech interp is claiming this is a trivial problem.
biological and artificial neural-networks are based upon the same fundamental principles
I’m confused by this statement. Do we know this? Do we have enough of an understanding of either to say this? Don’t get me wrong, there’s some level on which I totally buy this. However, I’m just highly uncertain about what is really being claimed here.
Depending on model size I’m fairly confident we can train SAEs and see if they can find relevant features (feel free to dm me about this).
Thanks for posting this! I’ve had a lot of conversations with people lately about OthelloGPT and I think it’s been useful for creating consensus about what we expect sparse autoencoders to recover in language models.
Maybe I missed it but:
What is the performance of the model when the SAE output is used in place of the activations?
What is the L0? You say 12% of features are active, so I assume that means 122 features are active. This seems plausibly like it could be too dense (though it’s hard to say, I don’t have strong intuitions here). It would be preferable to have a sweep where you have varying L0s but similar explained variance. The sparsity is important since that’s where the interpretability is coming from. One thing worth plotting might be the feature activation density of your SAE features as compared to the feature activation density of the probes (on a feature density histogram; see the short sketch after these questions). I predict you will have features that are too sparse to match your probe directions 1:1 (apologies if you address this and I missed it).
In particular, can you point to predictions (maybe in the early game) where your model is effectively perfect and where it is also perfect with the SAE output in place of the activations at some layer? I think this is important to quantify as I don’t think we have a good understanding of the relationship between explained variance of the SAE and model performance and so it’s not clear what counts as a “good enough” SAE.
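To illustrate the density comparison I mean, here’s a short sketch. The tensors are random stand-ins; in practice `feature_acts` would be the SAE feature activations and `probe_acts` the probe outputs over some evaluation set.

```python
# Sketch: feature activation density histogram, comparing SAE features to probe directions.
import torch
import matplotlib.pyplot as plt

# Stand-ins for real activations: [n_tokens, n_features] and [n_tokens, n_probes].
feature_acts = torch.relu(torch.randn(10_000, 2048) - 2)  # sparse-ish, like SAE features
probe_acts = torch.randn(10_000, 64)                      # denser, like probe projections

def log_density(acts: torch.Tensor, threshold: float = 0.0) -> torch.Tensor:
    """log10 of the fraction of tokens on which each unit is active."""
    frac_active = (acts > threshold).float().mean(dim=0)
    return torch.log10(frac_active + 1e-10)

plt.hist(log_density(feature_acts).numpy(), bins=50, alpha=0.5, label="SAE features")
plt.hist(log_density(probe_acts).numpy(), bins=50, alpha=0.5, label="probe directions")
plt.xlabel("log10(activation density)")
plt.ylabel("count")
plt.legend()
plt.show()
```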
I think a number of people expected SAEs trained on OthelloGPT to recover directions which aligned with the mine/their probe directions, though my personal opinion was that besides “this square is a legal move”, it isn’t clear that we should expect features to act as classifiers over the board state in the same way that probes do.
This reflects several intuitions:
At a high level, you don’t get to pick the ontology. SAEs are exciting because they are unsupervised and can show us results we didn’t expect. On simple toy models, they do recover true features, and with those maybe we know the “true ontology” on some level. I think it’s a stretch to extend the same reasoning to OthelloGPT just because information salient to us is linearly probe-able.
Just because information is linearly probeable doesn’t mean it should be recovered by sparse autoencoders. To expect this, we’d have to have stronger priors over the underlying algorithm used by OthelloGPT. Sure, it must use representations which enable it to make predictions up to the quality at which it predicts, but there’s likely a large space of concepts it could represent. For example, information could be represented by the model in a local or semi-local code, or deep in superposition. Since the SAE is trying to detect representations in the model, our beliefs about the underlying algorithm should inform our expectations of what it should recover, and since we don’t have a good description of the circuits in OthelloGPT, we should be more uncertain about what the SAE should find.
Separately, it’s clear that sparse autoencoders should be biased toward local codes over semi-local / compositional codes due to the L1 sparsity penalty on activations. This means that even if we were sure the model represented information in a particular way, it seems likely the SAE would create representations for variables like (A and B) and (A and B’) in place of A, even if the model represents A. However, the exciting thing about this intuition is that it makes a very testable prediction: combinations of SAE features should combine into effective classifiers over the board state. I’d be very excited to see an attempt to train neuron-in-a-haystack style sparse probes over SAE features in OthelloGPT for this reason (a rough sketch of what I mean follows this list).
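Here’s a rough sketch of the kind of sparse probe I mean. Everything here is a stand-in: `feature_acts` would be SAE feature activations at board-position tokens, `board_labels` a binary board-state label (e.g. “this square is mine”), and scikit-learn’s L1-penalised logistic regression stands in for a proper neuron-in-a-haystack style k-sparse probe.

```python
# Sketch: train an L1-sparse probe over SAE features to test whether a small combination
# of features acts as a good board-state classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

n_tokens, n_features = 10_000, 1024
feature_acts = np.random.rand(n_tokens, n_features)    # stand-in: SAE feature activations
board_labels = np.random.randint(0, 2, size=n_tokens)  # stand-in: e.g. "square A4 is mine"

X_train, X_test, y_train, y_test = train_test_split(
    feature_acts, board_labels, test_size=0.2, random_state=0
)

probe = LogisticRegression(penalty="l1", solver="liblinear", C=0.1, max_iter=1000)
probe.fit(X_train, y_train)

n_features_used = int((probe.coef_ != 0).sum())
print(f"test accuracy: {probe.score(X_test, y_test):.3f}, features used: {n_features_used}")
```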
Some other feedback:
Positive: I think this post was really well written, and while I haven’t yet read it in full detail, I’m a huge fan of how much detail you provided and think this is great.
Positive: I think this is a great candidate for study and I’m very interested in getting “gold-standard” results on SAEs for OthelloGPT. When Andy and I trained them, we found they could train in about 10 minutes making them a plausible candidate for regular / consistent methods benchmarking. Fast iteration is valuable.
Negative: I found your bolded claims in the introduction jarring. In particular, “This demonstrates that current techniques for sparse autoencoders may fail to find a large majority of the interesting, interpretable features in a language model”. I think this is overclaiming in the sense that OthelloGPT is not toy enough, nor do we understand it well enough, to know that SAEs have failed here, so much as that they aren’t recovering what you expect. Moreover, I think it would be best to hold off on proposing solutions here (in the sense that trying to map directly from your results to the viability of the technique encourages us to think about arguments for / against SAEs, rather than asking: what do SAEs actually recover, how do neural networks actually work, and what’s the relationship between the two?).
Negative: I’m quite concerned that tying the encoder / decoder weights and not having a decoder output bias results in worse SAEs. I’ve found the decoder bias initialization to have a big effect on performance (sometimes), and so by extension whether or not it’s there seems likely to matter. Would be interested to see you follow up on this.
Oh, and maybe you saw this already, but an academic group put out this related work: https://arxiv.org/abs/2402.12201. I don’t think they quantify the proportion of probe directions they recover, but they do indicate recovery of all the types of features that have previously been probed for. Likely worth a read if you haven’t seen it.
I think we got similar-ish results. @Andy Arditi was going to comment here to share them shortly.
@LawrenceC The Nanda MATS stream played around with this as a group project, with code here: https://github.com/andyrdt/mats_sae_training/tree/othellogpt
@Evan Anders “For each feature, we find all of the problems where that feature is active, and we take the two measurements of “feature goodness” ← typo?
Added a link at the top.
My mental model is the encoder is working hard to find particular features and distinguish them from others (so it’s doing a compressed sensing task) and that out of context it’s off distribution and therefore doesn’t distinguish noise properly. Positional features are likely a part of that but I’d be surprised if it was most of it.
I’ve heard this idea floated a few times and am a little worried that “when a measure becomes a target, it ceases to be a good measure” will apply here. OTOH, you can directly check whether the MSE / variance explained diverges significantly, so at least you can track the resulting SAE’s usefulness for decomposition. I’d be pretty surprised if an SAE trained with this objective became vastly more performant, and you could check whether the downstream activations produced from the reconstructed activations are off distribution. So overall, I’m pretty excited to see what you get!
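To make the downstream check concrete, here’s a sketch of the kind of comparison I have in mind, using TransformerLens hooks. The SAE is a stand-in (an identity module here) and the hook point is just an example; the point is that patching the reconstruction back in and comparing loss (or output KL) against the clean run tells you whether the reconstructions push the rest of the model off distribution.

```python
# Sketch: compare the model's loss with and without the SAE reconstruction patched in.
import torch
from transformer_lens import HookedTransformer

model = HookedTransformer.from_pretrained("gpt2")
hook_name = "blocks.8.hook_resid_pre"  # example hook point

class IdentitySAE(torch.nn.Module):
    """Stand-in for a trained SAE; replace with one whose forward pass returns reconstructions."""
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x

sae = IdentitySAE()

def replace_with_reconstruction(activations, hook):
    return sae(activations)

tokens = model.to_tokens("The quick brown fox jumps over the lazy dog")
clean_loss = model(tokens, return_type="loss")
patched_loss = model.run_with_hooks(
    tokens,
    return_type="loss",
    fwd_hooks=[(hook_name, replace_with_reconstruction)],
)
print(f"clean loss: {clean_loss.item():.3f}, with reconstruction: {patched_loss.item():.3f}")
```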
problems
prompts*
This means they’re somewhat problematic for OOD use cases like treacherous turn detection or detecting misgeneralization.
I kinda want to push back on this since OOD in behavior is not obviously OOD in the activations. Misgeneralization especially might be better thought of as an OOD environment and on-distribution activations?
I think we should come back to this question once SAEs have tackled something like variable binding. Right now it’s hard to say how SAEs are going to help us understand more abstract thinking, and therefore I think it’s hard to say how problematic they’re going to be for detecting things like a treacherous turn. I think this will depend on how representations factor. In the ideal world, they generalize with the model’s ability to generalize (apologies for how high level / vague that idea is).
Some experiments I’d be excited to look at (a rough sketch of the kind of comparison I have in mind follows this list):
If the SAE is trained on a subset of the training distribution, can we distinguish it being used to decompose activations on those data points off the training distribution?
How does that compare to an SAE trained on the whole training distribution from the model, but then looking at when the model is being pushed off distribution?
I think I’m trying to get at—can we distinguish:
Anomalous activations.
Anomalous data points.
Anomalous mechanisms.
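One concrete version of this comparison is sketched below. The activation and reconstruction tensors are random stand-ins; in practice they would come from running the model and SAE on in-distribution versus off-distribution prompt sets, and the claim is just that per-token reconstruction quality gives a scalar whose distribution you can compare across the cases above.

```python
# Sketch: compare per-token SAE reconstruction quality across data subsets.
import torch

def variance_explained(acts: torch.Tensor, recon: torch.Tensor) -> torch.Tensor:
    """Per-token fraction of variance explained by the SAE reconstruction."""
    residual_var = (acts - recon).pow(2).sum(dim=-1)
    total_var = (acts - acts.mean(dim=0, keepdim=True)).pow(2).sum(dim=-1)
    return 1 - residual_var / total_var

# Stand-ins: in practice, gather these from in-distribution and off-distribution prompts.
acts_in, recon_in = torch.randn(5_000, 768), torch.randn(5_000, 768)
acts_ood, recon_ood = torch.randn(5_000, 768), torch.randn(5_000, 768)

print("in-distribution variance explained:", variance_explained(acts_in, recon_in).mean().item())
print("off-distribution variance explained:", variance_explained(acts_ood, recon_ood).mean().item())
```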
Lots of great work to look forward to!
Why do you want to refill and shuffle tokens whenever 50% of the tokens are used?
Neel was advised by the authors that it was important to minimise batches having tokens from the same prompt. This approach quickly leads to a buffer containing activations from many different prompts.
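A minimal sketch of the mechanism (the refill fraction, shapes, and the `get_activations` helper are placeholders; Neel’s actual buffer differs in the details):

```python
# Sketch: a shuffled activation buffer that refills once half its contents have been used,
# so each training batch mixes activations drawn from many different prompts.
import torch

class ActivationBuffer:
    def __init__(self, buffer_size: int, d_model: int, get_activations):
        self.buffer_size = buffer_size
        self.get_activations = get_activations  # callable: n -> [n, d_model] fresh activations
        self.buffer = torch.empty(0, d_model)

    def refill(self) -> None:
        needed = self.buffer_size - self.buffer.shape[0]
        new_acts = self.get_activations(needed)  # fresh prompts, never reused
        self.buffer = torch.cat([self.buffer, new_acts])
        self.buffer = self.buffer[torch.randperm(self.buffer.shape[0])]  # shuffle old and new

    def next_batch(self, batch_size: int) -> torch.Tensor:
        if self.buffer.shape[0] < self.buffer_size // 2:  # refill at the 50% mark
            self.refill()
        batch, self.buffer = self.buffer[:batch_size], self.buffer[batch_size:]
        return batch

# Stand-in usage with random activations in place of real model activations.
buffer = ActivationBuffer(8192, 768, get_activations=lambda n: torch.randn(n, 768))
batch = buffer.next_batch(64)
```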
Is this just tokens in the training set or also the test set? In Neel’s code I didn’t see a train/test split, isn’t that important?
I never do evaluations on tokens from prompts used in training; rather, I just sample new prompts from the buffer. Some libraries set aside a set of tokens to do evaluations on, which are re-used. I don’t currently do anything like this, but it might be reasonable. In general, I’m not worried about overfitting.
Also, can you track the number of epochs of training when using this buffer method (it seems like that makes it more difficult)?
Training for multiple epochs makes sense in a data-limited regime, which we aren’t in. OpenWebText has far more tokens than we ever train any sparse autoencoder on, so we’re always at well under one epoch. We never reuse the same activations when training, but may use more than one activation from the same prompt.
Awesome work! I’d be quite interested to know whether the benefits from this technique are equivalently significant with a larger SAE and also what the original perplexity was (when looking at the summary statistics table). I’ll probably reimplement at some point.
Also, kudos on the visualizations. Really love the color scales!
I think so, but expect others to object. I think many people interested in circuits are using attn and MLP SAEs and experimenting with transcoders and SAE variants for attn heads. It depends on how much you care about being able to say what an attn head or MLP is doing, versus being happy to just talk about features. Sam Marks at the Bau Lab is the person to ask.