We would have loved to see more motivation for why you are making the assumptions you are making when generating the toy data.
Relatedly, it would be great to see an analysis of the distribution of the MLP activations. This could give you some info on where your assumptions in the toy model fall short.
This is valid; they’re not well fleshed out above. I’ll take a stab at it below, and I discussed it a bit with Ryan under his comment. Meta-question: Are you primarily asking for better assumptions, or for the existing ones to be made more explicit?
RE MLP activations distribution: Good idea! One reason I didn’t really want to make too many assumptions that were specific to MLPs was that we should in theory be able to apply sparse coding to residual stream activations too. But looking closely at the distribution that you’re trying to model is, generally speaking, a good idea :) We’ll probably do that for the next round of experiments if we continue along this avenue.
As Charlie Steiner pointed out, you are using a very favorable ratio in the toy model, i.e. of the number of ground-truth features to the encoding dimension. I would expect you will mostly get antipodal pairs in that setup, rather than strongly interfering superposition. This may contribute significantly to the mismatch.
I hadn’t previously considered the importance of ‘strongly interfering’ superposition. But that’s clearly the right regime for real networks and probably does explain a lot about the mismatch. Thanks for highlighting this!
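As a quick way to check which regime a trained toy model is in, one could count near-antipodal pairs among the learned feature directions. This is our own diagnostic sketch (function and variable names are assumed, not from the post):

```python
import numpy as np

def count_antipodal_pairs(features, threshold=-0.95):
    """Count unordered pairs of feature directions whose cosine
    similarity is below `threshold` (i.e. nearly antipodal).
    `features` is an (n_features, dim) array."""
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    cos = f @ f.T
    iu = np.triu_indices(len(f), k=1)  # each unordered pair once
    return int((cos[iu] < threshold).sum())

# Example: two exactly antipodal directions plus one unrelated one.
feats = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0]])
n = count_antipodal_pairs(feats)  # → 1
```

If most features pair up this way, the model is in the antipodal-pair regime rather than strongly interfering superposition.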
For the MMCS plots, we would be interested in seeing the distribution/histogram of MCS values, especially for ~middling MCS values, where it’s not clear whether all features are partially represented or some are represented strongly and others not at all.
Agree that this would be interesting! Trenton has had some ideas for metrics that better capture this notion, I think.
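For concreteness, here is a minimal sketch of the per-feature MCS values whose histogram is being asked for (MMCS is just their mean); array shapes and names are our assumptions:

```python
import numpy as np

def mcs_values(true_features, learned_features):
    """Max cosine similarity (MCS) of each ground-truth feature
    against the learned dictionary. Both arguments are
    (n_features, dim) arrays of feature directions."""
    t = true_features / np.linalg.norm(true_features, axis=1, keepdims=True)
    l = learned_features / np.linalg.norm(learned_features, axis=1, keepdims=True)
    cos = t @ l.T          # (n_true, n_learned) cosine similarities
    return cos.max(axis=1)  # best match per ground-truth feature

# The histogram of these values distinguishes "all features somewhat
# recovered" from "half recovered well, half not at all", even when
# the mean (MMCS) is the same.
rng = np.random.default_rng(0)
true_f = rng.normal(size=(64, 16))
learned = np.vstack([true_f[:32], rng.normal(size=(32, 16))])
vals = mcs_values(true_f, learned)
hist, _ = np.histogram(vals, bins=[0.0, 0.5, 1.01])
```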
While we don’t think this has a big impact compared to the other potential mismatches between toy model and the MLP, we do wonder whether the model has the parameters/data/training steps it needs to develop superposition of clean features.
E.g., in the toy models report, Elhage et al. reported phase transitions of superposition over the course of training.
Undertrained autoencoders are something that worries me too, especially for experiments that use larger dictionaries (they take longer to converge). This is definitely something we’d want to guard against and study in the next phase.
Very interesting to hear that you’ve been working on similar things! Excited to see results when they’re ready.
RE synthetic data: I’m a bit less confident in this method of data generation after the feedback below (see Tom Lieberum’s and Ryan Greenblatt’s comments). It may lose some ‘naturalness’ compared with the way the encoder in the ‘toy models of superposition’ puts one-hot features in superposition. It’s unclear whether that matters for the aims of this particular set of experiments, though.
RE metrics: It’s interesting to hear about your alternative to the MMCS metric. Putting the scale in the feature coefficients rather than in the features themselves does make things intuitive!
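A minimal sketch of what “scale in the coefficients” means here, as we understand it (this is our own illustration, not the commenters’ exact construction): dictionary features are kept unit-norm, so coefficient magnitudes are directly comparable across features.

```python
import numpy as np

def unit_norm_rows(W):
    """Normalize each row (feature direction) to unit length."""
    return W / np.linalg.norm(W, axis=1, keepdims=True)

rng = np.random.default_rng(0)
features = unit_norm_rows(rng.normal(size=(8, 4)))    # 8 unit-norm features in R^4
coeffs = np.array([0.0, 3.0, 0.0, 0.5, 0.0, 0.0, 0.0, 0.0])  # sparse; carries all the scale
x = coeffs @ features                                  # synthetic activation vector
```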
RE Orthogonal initialization:
IIRC this actually did help things learn faster (though I could be misremembering; I didn’t make a note at that early stage). But if it does, I’m reasonably confident that it’ll be possible to find even better initialization schemes that work well for these autoencoders. The PCA-like algorithm sounds like a good idea (curious to hear the details!); I’d been thinking of a few similar-sounding things, like:
1) Initializing the autoencoder features using noised copies of the left singular vectors of the weight matrix of the layer that we’re trying to interpret, since these define the major axes of variation in the pre-activations and so might resemble the (post-activation) features. Also c.f. Beren and Sid’s work ‘The Singular Value Decompositions of Transformer Weight Matrices are Highly Interpretable’. Or
2) If we expect the privileged basis hypothesis to apply, then initializing the autoencoder features with noised unit vectors might speed up learning.
Or other variations on those themes.
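The two initialization ideas above can be sketched as follows; all dimensions and names here are assumptions for illustration, and `W_layer` stands in for the weight matrix of the layer being interpreted:

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_hidden, d_dict = 64, 256, 512
W_layer = rng.normal(size=(d_hidden, d_in))  # layer whose activations we decompose

# (1) Noised copies of the left singular vectors of W_layer. Each left
# singular vector lives in activation space (R^d_hidden), so it can
# directly seed a dictionary feature.
U, _, _ = np.linalg.svd(W_layer, full_matrices=False)  # U: (d_hidden, d_in)
idx = rng.integers(0, U.shape[1], size=d_dict)          # sample vectors with repeats
dict_init_svd = U.T[idx] + 0.01 * rng.normal(size=(d_dict, d_hidden))

# (2) If a privileged basis is expected: noised standard unit vectors.
basis = np.eye(d_hidden)[rng.integers(0, d_hidden, size=d_dict)]
dict_init_basis = basis + 0.01 * rng.normal(size=(d_dict, d_hidden))
```

Either array would then be used as the initial decoder (and/or tied encoder) weights of the autoencoder.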