Thank you for sharing your negative results. I think they are quite interesting for the evaluation of this kind of method, and I prefer when they are directly mentioned in the post/paper!
I didn’t quite follow your answer to my question about baselines. The baseline I have in mind doesn’t use SAEs at all. It just consists of looking at scored examples, noticing something like “higher-scored examples are maybe longer / contain ‘thank you’ more often”, and then checking that by making an answer artificially longer / adding “thank you”, you (unjustifiably) get a higher score. Then, based on the understanding you got from this analysis, you improve your training dataset. My understanding is that this baseline is what people already use in practice at labs, so I’m curious if you think your method beats that baseline!
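Concretely, the kind of check I have in mind is roughly the sketch below. `score_fn` is just a stand-in for whatever interface your reward model exposes (not a real API), and the perturbations are the two hypotheses I mentioned:

```python
# Rough sketch of the no-SAE baseline: perturb a completion according to a
# hypothesis about what the reward model is keying on, and re-score it.
from typing import Callable

def reward_delta(
    score_fn: Callable[[str, str], float],  # hypothetical reward-model scorer
    prompt: str,
    completion: str,
    perturb: Callable[[str], str],
) -> float:
    """Score a completion before and after a hand-crafted perturbation.
    A large positive delta with no real quality change suggests the reward
    model is keying on the spurious feature."""
    return score_fn(prompt, perturb(completion)) - score_fn(prompt, completion)

# Example perturbations for the two hypotheses above.
add_thank_you = lambda c: c + " Thank you!"
pad_length = lambda c: c + " To elaborate a bit further on the same point," * 3

# Usage (with whatever scorer you have):
# reward_delta(my_reward_model_score, prompt, completion, add_thank_you)
```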
> I prefer when they are directly mentioned in the post/paper!
That would be a more honest picture. The simplest change I could think of was adding it to the high-level takeaways.
I do think you could use SAE features to beat that baseline if done in the way specified by General Takeaways. Specifically, if you have a completion that seems to score unjustifiably higher, you can find all the features whose effects on the reward differ from those of your baseline completion.
Features help you come up with hypotheses, but they also isolate the effect. If you do have a specific hypothesis as mentioned, then you should be able to find features that capture it (if SAEs are doing their job). When you create some alternative completion based on your hypothesis, you might unknowingly add or remove additional positive and negative features, e.g. in trying to remove completion length, you also remove the end-of-sentence punctuation.
In general, I think it’s hard to come up with the perfect counterfactual, but SAEs at least let you know whether you’re adding or removing specific reward-relevant features in your counterfactual completions.
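To make that concrete, here is a minimal sketch of the feature-diff I mean. It assumes you already have some estimator of each SAE feature’s effect on the reward for a given completion (e.g. by ablating the feature and re-scoring); that estimator (`feature_effects_fn`) and the threshold are placeholders, not anything from the post:

```python
# Sketch: compare per-feature effects on the reward between a baseline
# completion and a suspiciously high-scoring (or counterfactual) one.
from typing import Callable, Dict

def diff_reward_features(
    feature_effects_fn: Callable[[str, str], Dict[int, float]],  # hypothetical
    prompt: str,
    baseline_completion: str,
    other_completion: str,
    threshold: float = 0.05,
) -> Dict[int, float]:
    """Return SAE features whose estimated effect on the reward differs
    between the two completions, sorted by the size of the difference."""
    base = feature_effects_fn(prompt, baseline_completion)
    other = feature_effects_fn(prompt, other_completion)
    diffs = {
        f: other.get(f, 0.0) - base.get(f, 0.0)
        for f in set(base) | set(other)
    }
    diffs = {f: d for f, d in diffs.items() if abs(d) > threshold}
    return dict(sorted(diffs.items(), key=lambda kv: -abs(kv[1])))

# Running the same diff on your hand-crafted counterfactual vs. the original
# tells you whether you accidentally added or removed other reward-relevant
# features (e.g. stripping length also stripping end-of-sentence punctuation).
```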