I think so, but expect others to object. I think many people interested in circuits are using attn and MLP SAEs and experimenting with transcoders and SAE variants for attn heads. It depends on how much you care about being able to say what an attn head or MLP is doing, versus being happy to just talk about features. Sam Marks at the Bau Lab is the person to ask.
Thank you for the feedback, and thanks for this.
Who else is actively pursuing sparse feature circuits in addition to Sam Marks? I’m asking because the code breaks in the forward pass of the linear layer when I run it on GPT-2, since the dimensions are different from Pythia’s (768).
SAEs are model-specific: you need Pythia SAEs to investigate Pythia. I don’t have a comprehensive list, but you can look at the sparse autoencoder tag on LW for relevant papers.
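To make the dimension issue concrete, here is a minimal sketch in plain PyTorch (no particular SAE library assumed, and the hidden sizes are illustrative placeholders rather than the exact values for your models): the encoder's input width is fixed by the model the SAE was trained on, so feeding it activations from a model with a different hidden size fails in exactly the forward pass of the linear layer.

```python
# Minimal sketch: why an SAE trained on one model's activations can't be
# applied to another model's activations. Plain PyTorch; sizes are illustrative.
import torch
import torch.nn as nn


class TinySAE(nn.Module):
    """Toy sparse autoencoder: linear encoder + ReLU, linear decoder."""

    def __init__(self, d_in: int, d_hidden: int):
        super().__init__()
        self.encoder = nn.Linear(d_in, d_hidden)
        self.decoder = nn.Linear(d_hidden, d_in)

    def forward(self, acts: torch.Tensor) -> torch.Tensor:
        feats = torch.relu(self.encoder(acts))  # sparse feature activations
        return self.decoder(feats)              # reconstruction of the input


def check_compatible(acts: torch.Tensor, sae: TinySAE) -> None:
    """Cheap sanity check before running a circuit-finding pipeline."""
    if acts.shape[-1] != sae.encoder.in_features:
        raise ValueError(
            f"SAE expects d_in={sae.encoder.in_features}, got activations of "
            f"width {acts.shape[-1]} -- load SAEs trained for this model."
        )


# Illustrative hidden sizes only; substitute the real values for your models.
D_TRAINED_ON = 512    # hidden size of the model the SAE was trained on
D_OTHER_MODEL = 768   # hidden size of a different model you try to probe

sae = TinySAE(d_in=D_TRAINED_ON, d_hidden=8 * D_TRAINED_ON)

good_acts = torch.randn(4, D_TRAINED_ON)   # activations from the matching model
bad_acts = torch.randn(4, D_OTHER_MODEL)   # activations from a different model

check_compatible(good_acts, sae)
sae(good_acts)  # works: shapes line up with the encoder's weight matrix

try:
    sae(bad_acts)  # breaks in the encoder's linear layer: widths don't match
except RuntimeError as e:
    print(f"shape mismatch, as expected: {e}")
```

In practice you would load SAEs trained for the specific model (and layer/hook point) you want to study, rather than reusing weights across models.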