Thanks for the reference. I wanted to illustrate the value of gradients of activations in this toy example, as I have been thinking about similar ideas.
I would personally be pretty excited about attribution dictionary learning, but it seems like nobody has done that on bigger models yet.
In my limited experience, attribution-patching-style attributions tend to be a pain to optimise for sparsity: very brittle. I agree it seems like a good thing to keep poking at, though.
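For concreteness, here is a minimal sketch of the kind of objective being discussed: an SAE-style dictionary where the sparsity penalty is applied to attribution-patching-style attributions (feature activation times the gradient of a downstream metric) rather than to the raw activations. All names and the toy metric here are illustrative assumptions, not anyone's actual training setup.

```python
import torch

# Toy dimensions and a random "dictionary" (hypothetical, for illustration only)
torch.manual_seed(0)
d_model, d_sae = 16, 64
W_enc = torch.randn(d_model, d_sae, requires_grad=True)
W_dec = torch.randn(d_sae, d_model, requires_grad=True)

x = torch.randn(8, d_model)        # batch of model activations
f = torch.relu(x @ W_enc)          # SAE feature activations
x_hat = f @ W_dec                  # reconstruction

# Stand-in downstream metric; in practice this would be e.g. the model's loss.
metric = x_hat.sum()

# create_graph=True so the sparsity penalty on attributions is itself
# differentiable (this second-order term is part of what makes it painful).
grad_f, = torch.autograd.grad(metric, f, create_graph=True)

attribution = f * grad_f           # attribution-patching estimate per feature

# Reconstruction loss + L1 on attributions instead of L1 on activations.
loss = ((x - x_hat) ** 2).mean() + 1e-3 * attribution.abs().sum()
loss.backward()
```

The second-order gradients introduced by penalising attributions (rather than activations) are one plausible source of the brittleness described above.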
Did you use something like LSAE as described here? By brittle, do you mean w.r.t. the sparsity penalty (and other hyperparameters)?
The third term in that, though it was in a somewhat different context, related to the weight partitioning project mentioned in the last paragraph, not SAE training.
Yes, brittle in hyperparameters. It was also just very painful to train in general. I wouldn't straightforwardly extrapolate our experience to a standard SAE setup, though; we had a lot of other things going on in that optimisation.
I see, thanks for sharing!