See the second-to-last paragraph. The gradients of downstream quantities with respect to the activations contain information and structure that is not part of the activations themselves. So in principle, there could be a general way to analyse the right gradients, in the right way, on top of the activations to find the features of the model. See e.g. this for an attempt to combine PCAs of activations and gradients together.
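For concreteness, a minimal sketch of what combining the two might look like in PyTorch. Here `model`, `layer`, `batch`, and `downstream_loss` are hypothetical stand-ins for whatever setup you have, and PCA-on-the-concatenation is just one possible way to mix the two:

```python
import torch

# Collect one layer's activations together with the gradients of a
# downstream scalar w.r.t. those activations, then run PCA on each
# and on their concatenation.

acts = []

def hook(module, inp, out):
    out.retain_grad()   # keep the gradient on this non-leaf tensor
    acts.append(out)

handle = layer.register_forward_hook(hook)
loss = downstream_loss(model(batch))   # any downstream scalar quantity
loss.backward()
handle.remove()

A = acts[0].flatten(0, -2).detach()   # (n_positions, d_model) activations
G = acts[0].grad.flatten(0, -2)       # matching gradients, same shape

# PCA via SVD of the mean-centred matrix: activations alone,
# gradients alone, and the concatenation [A | G]
for X in (A, G, torch.cat([A, G], dim=-1)):
    Xc = X - X.mean(dim=0)
    _, S, Vt = torch.linalg.svd(Xc, full_matrices=False)
    top_directions = Vt[:8]   # leading principal directions
```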
Thanks for the reference, I wanted to illuminate the value of gradients of activations in this toy example as I have been thinking about similar ideas.
I personally would be pretty excited about attribution dictionary learning, but it seems like nobody has done that on bigger models yet.
In my limited experience, attribution-patching-style attributions tend to be a pain to optimise for sparsity. Very brittle. I agree it seems like a good thing to keep poking at though.
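To make the kind of objective I mean concrete, here's a rough sketch of a dictionary-learning loss with an L1 penalty on attributions rather than activations. This is a minimal illustration, not the setup we actually ran; all names and shapes are made up:

```python
import torch
import torch.nn.functional as F

d_model, d_dict = 512, 4096
W_enc = torch.nn.Parameter(0.01 * torch.randn(d_model, d_dict))
W_dec = torch.nn.Parameter(0.01 * torch.randn(d_dict, d_model))
b_enc = torch.nn.Parameter(torch.zeros(d_dict))

def attribution_sae_loss(a, g, l1_coeff=1e-3):
    """a: (batch, d_model) activations; g: (batch, d_model) gradients
    of a downstream metric w.r.t. those activations."""
    f = F.relu(a @ W_enc + b_enc)   # (batch, d_dict) feature activations
    a_hat = f @ W_dec               # reconstruction of the activations
    recon = (a - a_hat).pow(2).sum(-1).mean()
    # Attribution of each feature: its activation times the gradient
    # projected onto its decoder direction (a linear, attribution-patching
    # style estimate of its effect on the downstream metric).
    attr = f * (g @ W_dec.T)                # (batch, d_dict)
    sparsity = attr.abs().sum(-1).mean()    # L1 on attributions, not activations
    return recon + l1_coeff * sparsity
```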
Did you use something like LSAE as described here? By brittle, do you mean w.r.t. the sparsity penalty (and other hyperparameters)?
The third term in that. Though it was in a somewhat different context, related to the weight partitioning project mentioned in the last paragraph, not SAE training.
Yes, brittle in hyperparameters. It was also just very painful to train in general. I wouldn’t straightforwardly extrapolate our experience to a standard SAE setup, though; we had a lot of other things going on in that optimisation.
I see, thanks for sharing!