I agree that the L0s for 0_8192 are too high in later layers, though I’ll note that I think this is mainly due to the cluster of high-frequency features (see the spike in the histogram). Features outside of this spike look pretty decent, and without the spike our L0s would be much more reasonable.
Here are four random features from layer 3, at a range of frequencies outside of the spike.
Layer 3, 0_8192, feature 138 (frequency = 0.003) activates on the newline at the end of the “field of the invention” section in patent applications. I think it’s very likely predicting that the next few tokens will be “2. Description of the Related Art” (which always comes next in patents).
Layer 3, 0_8192, feature 27 (frequency = 0.009) seems to activate on the “is” in the phrase “this is”.
Layer 3, 0_8192, feature 4 (frequency = 0.026) looks messy at first, but on closer inspection seems to activate on the final token of multi-token words in informative file/variable names.
Layer 3, 0_8192, feature 56 (frequency = 0.035) looks very polysemantic: it activates on certain terms in LaTeX expressions, on words between periods in URLs and code, and on some other random-looking stuff.
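For reference, the frequencies quoted above are just firing rates over a sample of tokens. A minimal sketch of how they could be computed (the array shapes and the `feature_frequencies` name are my own, not from any particular codebase):

```python
import numpy as np

def feature_frequencies(acts: np.ndarray) -> np.ndarray:
    """Fraction of tokens on which each dictionary feature fires.

    acts: (n_tokens, n_features) array of SAE feature activations
          (post-ReLU, so "fires" means strictly positive).
    """
    return (acts > 0).mean(axis=0)

# Toy example: 4 tokens, 3 features.
acts = np.array([
    [0.0, 1.2, 0.5],
    [0.0, 0.8, 0.0],
    [0.3, 2.1, 0.0],
    [0.0, 0.9, 0.0],
])
freqs = feature_frequencies(acts)
# feature 1 fires on every token; features 0 and 2 on a quarter of them
```

The frequency histogram (and its high-frequency spike) is then just a histogram of `freqs` over all dictionary features.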
If you remove high-frequency features until you hit some L0 norm X, how much does the loss recovered change?
If you instead increase the L1 penalty until you hit the same L0 norm X, how does the loss recovered change?
Ideally, we can interpret the parts of the model that are doing things, which I’m grounding out as loss recovered in this case.
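To make that grounding concrete, here is the loss-recovered metric I have in mind, as a minimal sketch. The exact convention (splicing the dictionary’s reconstruction into the forward pass and comparing against zero-ablation) is my assumption about the setup:

```python
def percent_loss_recovered(loss_clean: float,
                           loss_patched: float,
                           loss_ablated: float) -> float:
    """Percent of the loss gap between zero-ablating the activations and
    running the model clean that is closed by splicing in the dictionary's
    reconstruction.

    loss_clean:   CE loss of the unmodified model
    loss_patched: CE loss with activations replaced by the SAE reconstruction
    loss_ablated: CE loss with the activations zero-ablated
    """
    return 100.0 * (loss_ablated - loss_patched) / (loss_ablated - loss_clean)

# Toy numbers: clean loss 3.0, zero-ablation loss 5.0, patched loss 3.44
# gives (5.0 - 3.44) / (5.0 - 3.0) = 78% of the loss recovered.
```

A perfect reconstruction scores 100%; a reconstruction no better than zero-ablation scores 0%.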
Here’s an experiment I’m about to do:
Remove high-frequency features from 0_8192 layer 3 until it has L0 < 40 (the same L0 as the 1_32768 layer 3 dictionary)
Recompute statistics for this modified dictionary.
I predict the resulting dictionary will be “like 1_32768 but a bit worse.” Concretely, I’m guessing that means % loss recovered around 72%.
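The pruning step can be sketched as follows; this is a toy implementation over precomputed activations, and the function names and the simple cutoff search are my own:

```python
import numpy as np

def prune_high_freq(acts: np.ndarray, freqs: np.ndarray,
                    cutoff: float) -> np.ndarray:
    """Zero out every feature whose firing frequency exceeds `cutoff`."""
    return acts * (freqs <= cutoff)

def mean_l0(acts: np.ndarray) -> float:
    """Average number of active features per token."""
    return float((acts > 0).sum(axis=1).mean())

def cutoff_for_target_l0(acts: np.ndarray, freqs: np.ndarray,
                         target: float) -> float:
    """Largest frequency cutoff whose pruned dictionary has L0 below `target`."""
    for c in sorted(set(freqs.tolist()), reverse=True):
        if mean_l0(prune_high_freq(acts, freqs, c)) < target:
            return c
    return 0.0

# Toy example: feature 0 fires on every token, and pruning it alone
# already brings the mean L0 under the target.
acts = np.array([[1., 1., 0.],
                 [1., 1., 0.],
                 [1., 0., 1.],
                 [1., 0., 0.]])
freqs = (acts > 0).mean(axis=0)  # [1.0, 0.5, 0.25]
c = cutoff_for_target_l0(acts, freqs, target=1.0)
```

With the features above `c` removed, the downstream statistics (MSE, loss recovered) can be recomputed on the masked activations.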
Results:
I killed all features with frequency larger than 0.038. This was 2041 features, and resulted in an L0 just below 40. The stats:
MSE Loss: 0.27 (worse than 1_32768)
Percent loss recovered: 77.9% (a little bit better than 1_32768)
I was a bit surprised by this: it suggests the high-frequency features are disproportionately useful for reconstructing activations in ways that don’t actually matter to the model’s computation. (Though then again, maybe this is what we expect from uninterpretable features.)
It also suggests that we might be better off training dictionaries with a too-low L1 penalty and then simply pruning away high-frequency features (sort of the dual operation of “train with a high L1 penalty and resample low-frequency features”). I’d be interested for someone to explore whether there’s a version of this that helps.