I’m interested in using the SAEs and auto-interp GPT-3.5-Turbo feature explanations for RES-JB for some experiments. Is there a way to download this data?
Neuronpedia has an API (copying from a message Johnny recently wrote to someone else):
“Docs are coming soon, but it’s really simple to get JSON output of any feature: just add “/api/feature/” right after “neuronpedia.org”. For example, for this feature: https://neuronpedia.org/gpt2-small/0-res-jb/0 the JSON output is here: https://www.neuronpedia.org/api/feature/gpt2-small/0-res-jb/0 (both are GET requests, so you can do it in your browser). Note the additional “/api/feature/”. I would prefer you not do this 100,000 times in a loop, though; if you’d like a data dump we’d rather give it to you directly.”
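For what it's worth, the URL rewrite Johnny describes is easy to script. A minimal sketch (the `to_api_url`/`fetch_feature` helper names are mine, and since the docs aren't out yet the JSON field names are undocumented, so the fetch just returns the raw parsed dict):

```python
import json
from urllib.parse import urlparse
from urllib.request import urlopen

def to_api_url(feature_url: str) -> str:
    """Turn a Neuronpedia feature page URL into the corresponding
    JSON endpoint by inserting "/api/feature/" after the domain."""
    parsed = urlparse(feature_url)
    return f"https://{parsed.netloc}/api/feature{parsed.path}"

def fetch_feature(feature_url: str) -> dict:
    """GET the JSON for a single feature (schema undocumented as of
    this message, so we just hand back whatever the API serves)."""
    with urlopen(to_api_url(feature_url)) as resp:
        return json.load(resp)

# The example from the message above:
print(to_api_url("https://neuronpedia.org/gpt2-small/0-res-jb/0"))
# -> https://neuronpedia.org/api/feature/gpt2-small/0-res-jb/0
```

And per the note above, please don't run this 100,000 times in a loop; ask for a data dump instead.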
Feel free to join the OSMI Slack and post in the Neuronpedia or Sparse Autoencoder channels if you have similar questions in the future :) https://join.slack.com/t/opensourcemechanistic/shared_invite/zt-1qosyh8g3-9bF3gamhLNJiqCL_QqLFrA