Well, maybe we did go astray, but it’s not for any reasons mentioned in this paper!

SAEs have been trained on random weights since Anthropic’s first SAE paper in 2023:

To assess the effect of dataset correlations on the interpretability of feature activations, we run dictionary learning on a version of our one-layer model with random weights. The resulting features are here, and contain many single-token features (such as “span”, “file”, “.”, and “nature”) and some other features firing on seemingly arbitrary subsets of different broadly recognizable contexts (such as LaTeX or code). However, we are unable to construct interpretations for the non-single-token features that make much sense and invite the reader to examine feature visualizations from the model with randomized weights to confirm this for themselves. We conclude that the learning process for the model creates a richer structure in its activations than the distribution of tokens in the dataset alone.
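For concreteness, here’s a rough sketch of what that control looks like: train a sparse autoencoder on activations from a transformer whose weights are left at random initialization. This is a minimal illustration, not Anthropic’s actual setup; the toy sizes, the plain ReLU + L1 autoencoder, and the use of random token IDs as a stand-in for real dataset text are all my assumptions.

```python
# Minimal sketch (not Anthropic's code): train a sparse autoencoder on activations
# from a randomly initialized one-layer transformer, as a control for how much
# feature structure comes from the data vs. the learned weights.
import torch
import torch.nn as nn

torch.manual_seed(0)
d_model, d_mlp, n_feats = 128, 512, 2048  # toy sizes (assumption)

# "Model" with random, untrained weights: embedding + one transformer block.
embed = nn.Embedding(1000, d_model)
block = nn.TransformerEncoderLayer(d_model, nhead=4, dim_feedforward=d_mlp,
                                   batch_first=True)

class SparseAutoencoder(nn.Module):
    def __init__(self, d_in, d_hidden):
        super().__init__()
        self.enc = nn.Linear(d_in, d_hidden)
        self.dec = nn.Linear(d_hidden, d_in)
    def forward(self, x):
        f = torch.relu(self.enc(x))   # feature activations
        return self.dec(f), f

sae = SparseAutoencoder(d_model, n_feats)
opt = torch.optim.Adam(sae.parameters(), lr=1e-3)
l1_coeff = 1e-3  # sparsity penalty strength (assumption)

for step in range(200):
    # Random token IDs stand in for tokenized dataset text; in the real
    # experiment the *data* is real and only the model weights are random.
    tokens = torch.randint(0, 1000, (32, 64))
    with torch.no_grad():
        acts = block(embed(tokens)).reshape(-1, d_model)  # activations to fit
    recon, feats = sae(acts)
    loss = (recon - acts).pow(2).mean() + l1_coeff * feats.abs().mean()
    opt.zero_grad(); loss.backward(); opt.step()
```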
In my first SAE feature post, I show a clearly positional feature:
which is not a feature you’ll find in a SAE trained on a randomly initialized transformer.
The auto-interp metric is likely similar because SAEs trained on random weights still have single-token features (i.e. features that activate on one specific token). Single-token features are the easiest features to auto-interp, since the hypothesis is simply “activates on this token,” which is easy for an LLM to predict. A rough sketch of this is below.
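To illustrate, here’s a toy version of simulation-style auto-interp scoring for a single-token feature. The tokens, activation values, and correlation-based score are all illustrative assumptions, not the exact metric from the paper.

```python
# Hedged sketch: the explanation "activates on <token>" lets a simulator predict
# a single-token feature's activations almost perfectly, so it scores highly.
import numpy as np

tokens = ["the", "cat", "span", "sat", "span", "on", "span"]
true_acts = np.array([0.0, 0.0, 3.1, 0.0, 2.8, 0.0, 3.0])  # a toy "span" feature

def simulate_single_token(tokens, target):
    # Simulator implied by the hypothesis "this feature activates on `target`".
    return np.array([1.0 if t == target else 0.0 for t in tokens])

pred = simulate_single_token(tokens, "span")
score = np.corrcoef(pred, true_acts)[0, 1]  # correlation-style auto-interp score
print(f"auto-interp score for the 'span' explanation: {score:.2f}")  # ~1.0
```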
If you look at the sampled features for the random-weights SAEs in their appendix, all three are single-token features.
However, I do want to clarify that their paper is still novel (they trained on random weights and controls across many layers of Pythia 410M) and includes many other experiments: it’s a valid contribution to the field, imo.
I also want to be clear that SAEs aren’t perfect; there’s a recent paper on their problems (which I don’t think captures all of them), and I’m really glad Apollo has diversified away from SAEs by pursuing their weight-based interp approach (which I think is currently underrated karma-wise by 3x).