Three questions:
What format do you upload SAEs in?
What data do you run the SAEs over to generate the activations / samples?
How long is the delay between uploading an SAE and it being available to view?
Thanks for asking:
Currently we load SAEs into our codebase here. How hard this is depends on how different your SAE architecture/forward pass is from what we currently support. We're planning to do this ourselves for the first n users, and once we can, we'll automate the process. So feel free to link us to Hugging Face or a public wandb artifact.
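To give a sense of what "architecture/forward pass" compatibility means here, the sketch below shows a standard SAE forward pass (ReLU encoder, linear decoder). The class and parameter names are illustrative, not the actual interface of the hosting codebase; variants that deviate from this shape (e.g. gated or top-k encoders) are what take extra integration work.

```python
import numpy as np

class SparseAutoencoder:
    """Minimal sketch of a standard SAE: ReLU encoder, linear decoder."""

    def __init__(self, d_in: int, d_hidden: int, seed: int = 0):
        rng = np.random.default_rng(seed)
        # Small random init; real SAEs would load trained weights instead.
        self.W_enc = rng.standard_normal((d_in, d_hidden)) * 0.01
        self.b_enc = np.zeros(d_hidden)
        self.W_dec = rng.standard_normal((d_hidden, d_in)) * 0.01
        self.b_dec = np.zeros(d_in)

    def encode(self, x: np.ndarray) -> np.ndarray:
        # ReLU(x @ W_enc + b_enc) gives sparse, non-negative feature activations.
        return np.maximum(0.0, x @ self.W_enc + self.b_enc)

    def forward(self, x: np.ndarray):
        feats = self.encode(x)
        recon = feats @ self.W_dec + self.b_dec  # linear reconstruction
        return feats, recon

# Fake residual-stream activations standing in for real model activations.
acts = np.random.default_rng(1).standard_normal((8, 768))
sae = SparseAutoencoder(d_in=768, d_hidden=4096)
feats, recon = sae.forward(acts)
print(feats.shape, recon.shape)  # (8, 4096) (8, 768)
```

If your forward pass matches this shape, integration is mostly a matter of mapping weight names; architectural differences are what lengthen the timeline discussed below.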
We run the SAEs over random samples from the same dataset on which the model was trained (with activations drawn from forward passes of the same length). Callum’s SAE vis codebase has a demo where you can see how this works.
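The sampling step can be sketched as follows: draw random fixed-length windows from the training token stream and collect model activations for each. Everything here is hypothetical scaffolding; in particular, the placeholder array stands in for a real forward pass that would extract activations at the SAE's hook point.

```python
import numpy as np

def sample_activation_batches(dataset_tokens, seq_len, n_samples, d_model, rng):
    """Yield activations for random fixed-length windows of the token stream."""
    for _ in range(n_samples):
        start = rng.integers(0, len(dataset_tokens) - seq_len)
        tokens = dataset_tokens[start:start + seq_len]
        # Placeholder for model(tokens) -> activations of shape (seq_len, d_model);
        # a real pipeline would run the model and read a hook point here.
        yield np.ones((seq_len, d_model)) * np.mean(tokens)

rng = np.random.default_rng(0)
dataset_tokens = rng.integers(0, 50257, size=10_000)  # fake token stream
batches = list(sample_activation_batches(dataset_tokens, seq_len=128,
                                         n_samples=4, d_model=768, rng=rng))
print(len(batches), batches[0].shape)  # 4 (128, 768)
```

Each activation batch would then be passed through the SAE to produce the feature activations shown in the visualizations.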
Since we’re doing this manually, the delay will depend on the complexity of handling the SAEs: things like whether they’re trained on a new model (i.e. not GPT-2 Small) and how busy we are with other people’s SAEs or other features. We’ll do our best and keep you in the loop. Ballpark is 1–2 weeks, not months, and possibly days (especially if the SAEs are very similar to those we already host). We expect this to be much faster in the future.
We’ve made the form partly to help us estimate the time and effort required to support SAEs of different kinds (e.g., if lots of people have SAEs for the same model or with the same methodological variation, we can jump on that).