Thanks for posting this! I’ve had a lot of conversations with people lately about OthelloGPT and I think it’s been useful for creating consensus about what we expect sparse autoencoders to recover in language models.
Maybe I missed it but:
What is the performance of the model when the SAE output is used in place of the activations?
What is the L0? You say 12% of features active so I assume that means 122 features are active. This seems plausibly like it could be too dense (though it’s hard to say, I don’t have strong intuitions here). It would be preferable to have a sweep with varying L0s but similar explained variance. The sparsity is important since that’s where the interpretability is coming from. One thing worth plotting might be the feature activation density of your SAE features as compared to the feature activation density of the probes (on a feature density histogram). I predict you will have features that are too sparse to match your probe directions 1:1 (apologies if you address this and I missed it).
In particular, can you point to predictions (maybe in the early game) where your model is effectively perfect and where it is also perfect with the SAE output in place of the activations at some layer? I think this is important to quantify as I don’t think we have a good understanding of the relationship between explained variance of the SAE and model performance and so it’s not clear what counts as a “good enough” SAE.
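For concreteness, the L0 and feature-density comparison suggested above could be computed with something like this sketch (plain NumPy; `sae_acts` is a synthetic stand-in for real SAE feature activations, not data from the post):

```python
import numpy as np

def l0_and_density(acts):
    """acts: (n_samples, n_features) post-ReLU SAE feature activations.
    Returns mean L0 (features active per sample) and per-feature density."""
    active = acts > 0
    l0 = active.sum(axis=1).mean()      # avg count of firing features per token
    density = active.mean(axis=0)       # fraction of tokens each feature fires on
    return l0, density

# Synthetic stand-in: 1000 samples, 1024 features, ~12% firing.
rng = np.random.default_rng(0)
sae_acts = np.maximum(rng.normal(-1.17, 1.0, size=(1000, 1024)), 0.0)

l0, density = l0_and_density(sae_acts)
# Log-density histogram: comparing this against the same histogram computed
# for probe classifiers makes too-sparse features visible at a glance.
hist, edges = np.histogram(np.log10(density + 1e-10), bins=50)
```

Applying the same `density` computation to thresholded probe outputs would give the comparison histogram described above.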
I think a number of people expected SAEs trained on OthelloGPT to recover directions which aligned with the mine/theirs probe directions, though my personal opinion was that, besides “this square is a legal move”, it isn’t clear we should expect features to act as classifiers over the board state in the same way that probes do.
This reflects several intuitions:
At a high level, you don’t get to pick the ontology. SAEs are exciting because they are unsupervised and can show us results we didn’t expect. On simple toy models, they do recover true features, and with those maybe we know the “true ontology” on some level. I think it’s a stretch to extend the same reasoning to OthelloGPT just because information salient to us is linearly probe-able.
Just because information is linearly probeable doesn’t mean it should be recovered by sparse autoencoders. To expect this, we’d need stronger priors over the underlying algorithm used by OthelloGPT. Sure, it must use representations which enable it to make predictions of the quality it achieves, but there’s likely a large space of concepts it could represent. For example, information could be represented by the model in a local or semi-local code, or deep in superposition. Since the SAE is trying to detect representations in the model, our beliefs about the underlying algorithm should inform our expectations of what it should recover, and since we don’t have a good description of the circuits in OthelloGPT, we should be more uncertain about what the SAE should find.
Separately, it’s clear that sparse autoencoders should be biased toward local codes over semi-local / compositional codes due to the L1 sparsity penalty on activations. This means that even if we were sure the model represented information in a particular way, it seems likely the SAE would create representations for variables like (A and B) and (A and B’) in place of A, even if the model represents A directly. However, the exciting thing about this intuition is that it makes a very testable prediction: combinations of SAE features should combine into effective classifiers over the board state. I’d be very excited to see an attempt to train neuron-in-a-haystack style sparse probes over SAE features in OthelloGPT for this reason.
Some other feedback:
Positive: I think this post was really well written and while I haven’t read it in more detail, I’m a huge fan of how much detail you provided and think this is great.
Positive: I think this is a great candidate for study and I’m very interested in getting “gold-standard” results on SAEs for OthelloGPT. When Andy and I trained them, we found they could train in about 10 minutes, making them a plausible candidate for regular / consistent methods benchmarking. Fast iteration is valuable.
Negative: I found your bolded claims in the introduction jarring. In particular “This demonstrates that current techniques for sparse autoencoders may fail to find a large majority of the interesting, interpretable features in a language model”. I think this is overclaiming in the sense that OthelloGPT is not toy-enough, nor do we understand it well enough, to know that SAEs have failed here, so much as that they aren’t recovering what you expect. Moreover, I think it would be best to hold off on proposing solutions here (in the sense that trying to map directly from your results to the viability of the technique encourages us to think about arguments for / against SAEs, rather than asking: what do SAEs actually recover, how do neural networks actually work, and what’s the relationship between the two?).
Negative: I’m quite concerned that tying the encoder / decoder weights and not having a decoder output bias results in worse SAEs. I’ve found the decoder bias initialization to have a big effect on performance (sometimes), and so by extension whether or not it’s there seems likely to matter. Would be interested to see you follow up on this.
Oh, and maybe you saw this already, but an academic group put out this related work: https://arxiv.org/abs/2402.12201 I don’t think they quantify the proportion of probe directions they recover, but they do indicate recovery of all types of features that have been previously probed for. Likely worth a read if you haven’t seen it.
I’ve had a lot of conversations with people lately about OthelloGPT and I think it’s been useful for creating consensus about what we expect sparse autoencoders to recover in language models.
I’m surprised how many people have turned up trying to do something like this!
What is the performance of the model when the SAE output is used in place of the activations?
I didn’t test this.
What is the L0? You say 12% of features active so I assume that means 122 features are active.
That’s correct. I was satisfied with 122 because if the SAEs “worked perfectly” (and in the assumed ontology etc) they’d decompose the activations into 64 features for [position X is empty/own/enemy], plus presumably other features. So that level of density was acceptable to me because it would allow the desired ontology to emerge. Worth trying other densities though!
In particular, can you point to predictions (maybe in the early game) where your model is effectively perfect and where it is also perfect with the SAE output in place of the activations at some layer? I think this is important to quantify as I don’t think we have a good understanding of the relationship between explained variance of the SAE and model performance and so it’s not clear what counts as a “good enough” SAE.
I did not test this either.
I agree, but that’s part of what’s interesting to me here—what if OthelloGPT has a copy of a human-understandable ontology, and also an alien ontology, and sparse autoencoders find a lot of features in OthelloGPT that are interpretable but miss the human-understandable ontology? Now what if all of that happens in an AGI we’re trying to interpret? I’m trying to prove by example that “human-understandable ontology exists” and “SAEs find interpretable features” fail to imply “SAEs find the human-understandable ontology”. (But if I’m wrong and there’s a magic ingredient to make the SAE find the human-understandable ontology, let’s find it and use it going forward!)
Separately, it’s clear that sparse autoencoders should be biased toward local codes over semi-local / compositional codes due to the L1 sparsity penalty on activations. This means that even if we were sure the model represented information in a particular way, it seems likely the SAE would create representations for variables like (A and B) and (A and B’) in place of A, even if the model represents A directly. However, the exciting thing about this intuition is that it makes a very testable prediction: combinations of SAE features should combine into effective classifiers over the board state. I’d be very excited to see an attempt to train neuron-in-a-haystack style sparse probes over SAE features in OthelloGPT for this reason.
I think that’s a plausible failure mode, and someone should definitely test for it!
I found your bolded claims in the introduction jarring. In particular “This demonstrates that current techniques for sparse autoencoders may fail to find a large majority of the interesting, interpretable features in a language model”.
I think our readings of that sentence are slightly different, where I wrote it with more emphasis on “may” than you took it. I really only mean this as an n=1 demonstration. But at the same time, if it turns out you need to untie your weights, or investigate one layer in particular, or some other small-but-important detail, that’s important to know about!
I believe I do? The only call I intended to make was “We hope that these results will inspire more work to improve the architecture or training methods of sparse autoencoders to address this shortcoming.” Personally I feel like SAEs have a ton of promise, but also could benefit from a battery of experimentation to figure out exactly what works best. I hope no one will read this post as saying “we need to throw out SAEs and start over”.
Negative: I’m quite concerned that tying the encoder / decoder weights and not having a decoder output bias results in worse SAEs.
That’s plausible. I’ll launch a training run of an untied SAE and hopefully will have results back later today!
Oh, and maybe you saw this already but an academic group put out this related work: https://arxiv.org/abs/2402.12201

I haven’t seen this before! I’ll check it out!
Followup on tied vs untied weights: it looks like untied makes a small improvement over tied, primarily in layers 2-4 which already have the most classifiers. Still missing the middle ring features though.
Next steps are using the Li et al model and training the SAE on more data.
This post seems like a case of there being too many natural abstractions.