Yeah I think that’s right, the problem is that the SAE sees 3 very non-orthogonal inputs and settles on something in between them (but skewed towards the parent). I don’t know how to get the SAE to learn only the parent in these scenarios; if we can solve that, I think we should be in pretty good shape.
This is all sketchy though. It doesn’t feel like we have a good answer to the question “How exactly do we want the SAEs to behave in various scenarios?”
I do think the goal should be to get the SAE to learn the true underlying features, at least in these toy settings where we know what the true features are. If the SAEs we’re training can’t handle simple toy examples without superposition, I don’t have a lot of faith that the results will be trustworthy when we train SAEs on real LLM activations.
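To make that concrete, here’s roughly the kind of toy check I have in mind. This is a sketch only: the parent-plus-two-children setup, the 0.9 cosines, the firing probabilities, and the vanilla ReLU + L1 SAE are all assumptions I’m making for illustration, not our actual training code.

```python
import torch

torch.manual_seed(0)
d_model, n_latents, n_samples = 64, 8, 50_000

# Ground-truth features: a "parent" direction plus two children that are
# deliberately non-orthogonal to it (cosine ~0.9 is an arbitrary choice).
def unit(v):
    return v / v.norm()

def direction_with_cos(v, c):
    # Unit vector with cosine similarity c to the unit vector v.
    r = torch.randn_like(v)
    r = unit(r - (r @ v) * v)
    return c * v + (1 - c**2) ** 0.5 * r

parent = unit(torch.randn(d_model))
child_a = direction_with_cos(parent, 0.9)
child_b = direction_with_cos(parent, 0.9)
true_feats = torch.stack([parent, child_a, child_b])           # (3, d_model)

# Sparse hierarchical activations: children only fire when the parent fires.
parent_on = (torch.rand(n_samples) < 0.3).float()
acts = torch.stack([
    parent_on,
    parent_on * (torch.rand(n_samples) < 0.5).float(),
    parent_on * (torch.rand(n_samples) < 0.5).float(),
], dim=1) * torch.rand(n_samples, 3)
X = acts @ true_feats                                          # (n_samples, d_model)

# Vanilla ReLU SAE with an L1 penalty (one common recipe; hyperparameters made up).
enc = torch.nn.Linear(d_model, n_latents)
dec = torch.nn.Linear(n_latents, d_model, bias=False)
opt = torch.optim.Adam([*enc.parameters(), *dec.parameters()], lr=1e-3)
for step in range(2_000):
    batch = X[torch.randint(0, n_samples, (1024,))]
    z = torch.relu(enc(batch))
    loss = (dec(z) - batch).pow(2).mean() + 1e-3 * z.abs().mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# The check: does each learned decoder direction line up with a single true
# feature, or with a blend sitting "between" the parent and children?
with torch.no_grad():
    dec_dirs = torch.nn.functional.normalize(dec.weight.T, dim=1)  # (n_latents, d_model)
    print(dec_dirs @ true_feats.T)  # rows close to one-hot => true features recovered
```

If the active decoder rows come out as blends of the parent and children rather than close to one-hot against the true features, that’s exactly the failure mode above.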