Well, the substance of the claim is that when a model is calculating lots of things in superposition, these kinds of XORs arise naturally as a result of interference. So one thing to do might be to look at a small algorithmic dataset of some kind where there's a distinct set of features to learn and no reason to learn the XORs, and see if you can still probe for them. It'd be interesting to see whether there are conditions under which this is or isn't true, e.g. whether needing to learn more features makes the dependence between their computations stronger and the XORs more visible.
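For concreteness, here's a rough sketch of the kind of setup I have in mind (my own toy construction, not anything from the post, and all hyperparameters are arbitrary): a small bottleneck autoencoder that stores sparse binary features in superposition, where the task gives no reason to compute any XOR, and then a linear probe on the hidden layer for A ⊕ B compared against the majority-class baseline.

```python
# Toy setup: a bottleneck autoencoder stores k sparse binary features in d < k
# dimensions; nothing in the task requires computing XORs.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
k, d, n = 32, 8, 20_000
feats = (torch.rand(n, k) < 0.2).float()          # sparse binary features

model = nn.Sequential(nn.Linear(k, d), nn.ReLU(), nn.Linear(d, k))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(3_000):
    idx = torch.randint(0, n, (256,))
    loss = F.binary_cross_entropy_with_logits(model(feats[idx]), feats[idx])
    opt.zero_grad(); loss.backward(); opt.step()

# Linearly probe the bottleneck for the XOR of features 0 and 1.
with torch.no_grad():
    hidden = model[1](model[0](feats))            # post-ReLU bottleneck activations
xor = (feats[:, 0] != feats[:, 1]).float()
tr, te = slice(0, 15_000), slice(15_000, None)

probe = nn.Linear(d, 1)
popt = torch.optim.Adam(probe.parameters(), lr=1e-2)
for _ in range(2_000):
    ploss = F.binary_cross_entropy_with_logits(probe(hidden[tr]).squeeze(-1), xor[tr])
    popt.zero_grad(); ploss.backward(); popt.step()

with torch.no_grad():
    acc = ((probe(hidden[te]).squeeze(-1) > 0).float() == xor[te]).float().mean().item()
baseline = max(xor[te].mean().item(), 1 - xor[te].mean().item())   # majority-class baseline
print(f"XOR probe accuracy: {acc:.3f}  vs majority baseline: {baseline:.3f}")
```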
Maybe you could also go a bit more mathematical and hand-construct a set of weights that calculates a set of features in superposition, so you can totally rule out any model effort being expended on calculating XORs, and then see if they're still probe-able.
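Something like the following is what I mean by a hand-built construction (the random directions, the ReLU and all the sizes are arbitrary choices of mine): features get written along fixed directions into a d < k dimensional space, with no training and nothing anywhere computing an XOR, and then you check what a linear probe for A ⊕ B can recover.

```python
# Hand-constructed superposition: each of k features is written along a fixed random
# unit direction in d < k dimensions. No weights are trained and nothing computes XOR,
# so any XOR probe-ability comes purely from interference.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
k, d, n = 64, 16, 20_000
W = rng.normal(size=(k, d))
W /= np.linalg.norm(W, axis=1, keepdims=True)        # one unit write-direction per feature

feats = (rng.random((n, k)) < 0.5).astype(float)      # balanced binary features
lin_acts = feats @ W                                  # purely linear superposition
relu_acts = np.maximum(lin_acts, 0)                   # same thing passed through a ReLU

xor = (feats[:, 0] != feats[:, 1]).astype(int)        # target: feature 0 XOR feature 1
tr, te = slice(0, 15_000), slice(15_000, None)

for name, acts in [("linear", lin_acts), ("relu", relu_acts)]:
    probe = LogisticRegression(max_iter=2_000).fit(acts[tr], xor[tr])
    print(f"{name} XOR probe accuracy: {probe.score(acts[te], xor[te]):.3f}")

# For reference: predicting A xor B as A or B is wrong only when both are true,
# so it scores ~0.75 on balanced independent features.
or_pred = ((feats[te, 0] + feats[te, 1]) > 0).astype(int)
print(f"OR-as-XOR baseline: {(or_pred == xor[te]).mean():.3f}")
```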
Another thing you could do is to zero-out or max-ent the neurons/attention heads that are important for calculating the A feature, and see if you can still detect an A⊕B feature. I'm less confident in this because it might be too strong, deleting even a 'legitimate' A⊕B feature, or too weak, leaving some signal in.
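A very rough version of that ablation might look like this, with "important for A" approximated by the A-probe's largest weights and mean-ablation standing in for the max-ent variant; `acts`, `a` and `b` are whatever activation matrix and binary labels you already have, so treat all of this as a placeholder sketch rather than the actual procedure.

```python
# Ablate the neurons a linear A-probe leans on most, then re-probe for A xor B.
# Caveat (as above): this is coarse and may also remove a 'legitimate' XOR direction,
# or miss the relevant components entirely.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def xor_acc_after_ablation(acts, a, b, n_ablate=20):
    xor = (a != b).astype(int)
    X_tr, X_te, a_tr, _, xor_tr, xor_te = train_test_split(
        acts, a, xor, test_size=0.25, random_state=0
    )

    # 1. Find the neurons the A-probe relies on most.
    a_probe = LogisticRegression(max_iter=2_000).fit(X_tr, a_tr)
    top = np.argsort(-np.abs(a_probe.coef_[0]))[:n_ablate]

    # 2. Mean-ablate them in both splits (using the training mean).
    X_tr_abl, X_te_abl = X_tr.copy(), X_te.copy()
    X_tr_abl[:, top] = X_tr[:, top].mean(axis=0)
    X_te_abl[:, top] = X_tr[:, top].mean(axis=0)

    # 3. Can a fresh probe still find A xor B?
    xor_probe = LogisticRegression(max_iter=2_000).fit(X_tr_abl, xor_tr)
    return xor_probe.score(X_te_abl, xor_te)
```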
This kind of interference also predicts that the A|B and A|¬B features should be similar, so the degree of separation/distance from the category boundary should be small. I think you've already shown this to some extent with the PCA stuff, though some quantification of the distance to the boundary would be interesting. Even if the model were allocating resources to computing these XORs, you'd probably still expect them to be much less salient, so I'm not sure this gives much evidence either way.
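One simple way to operationalise that quantification (again assuming you have activations `acts` and labels `a`, `b` lying around; this is just one possible choice) is to compare the normalised margins of the A ⊕ B probe with those of the plain A probe. The interference story would predict much smaller margins for the XOR probe even where its accuracy is above chance.

```python
# Compare distance-to-boundary for the A probe vs. the A xor B probe.
import numpy as np
from sklearn.linear_model import LogisticRegression

def normalised_margins(acts, labels):
    probe = LogisticRegression(max_iter=2_000).fit(acts, labels)
    w, bias = probe.coef_[0], probe.intercept_[0]
    return np.abs(acts @ w + bias) / np.linalg.norm(w)   # distance of each point to the boundary

a_margins = normalised_margins(acts, a)
xor_margins = normalised_margins(acts, (a != b).astype(int))
print(f"median margin  A: {np.median(a_margins):.3f}   A xor B: {np.median(xor_margins):.3f}")
```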
Would you expect that we can extract XORs from small models like Pythia-70m under your hypothesis?
Yeah I’d expect some degree of interference leading to >50% success on XORs even in small models.
You can get ~75% just by computing the OR (for independent balanced features, predicting A⊕B as A∨B is wrong only when both are true, i.e. on ~1/4 of examples). But we found that only at the last layer of the step16000 checkpoint of Pythia-70m training does it achieve better than 75%; see this video.
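For concreteness, probing each layer of a given Pythia-70m checkpoint for the XOR looks roughly like the sketch below. The variables `texts`, `a` and `b` are placeholders for whatever labelled prompts are being used, and the loop over layers is my own scaffolding; the only assumption about the model is that the step-numbered training checkpoints are published as git revisions on the HuggingFace hub.

```python
# Probe every layer's residual stream (at the final token) of a Pythia-70m
# training checkpoint for A xor B.
import numpy as np
import torch
from sklearn.linear_model import LogisticRegression
from transformers import AutoModelForCausalLM, AutoTokenizer

ckpt = "step16000"   # training checkpoints are available as git revisions
tok = AutoTokenizer.from_pretrained("EleutherAI/pythia-70m", revision=ckpt)
model = AutoModelForCausalLM.from_pretrained("EleutherAI/pythia-70m", revision=ckpt)
model.eval()

def last_token_states(texts):
    states = []
    with torch.no_grad():
        for t in texts:
            ids = tok(t, return_tensors="pt")
            out = model(**ids, output_hidden_states=True)
            # one (d_model,) vector per layer, taken at the final token
            states.append(torch.stack([h[0, -1] for h in out.hidden_states]))
    return torch.stack(states).numpy()            # (n_texts, n_layers + 1, d_model)

X = last_token_states(texts)                      # `texts`, `a`, `b` assumed given
xor = (np.asarray(a) != np.asarray(b)).astype(int)
split = len(texts) // 2
for layer in range(X.shape[1]):
    probe = LogisticRegression(max_iter=2_000).fit(X[:split, layer], xor[:split])
    print(f"layer {layer}: XOR probe accuracy {probe.score(X[split:, layer], xor[split:]):.3f}")
```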