I don’t see how this is a success at doing something useful on a real task. (Edit: I see how this is a real task, I just don’t see how it’s a useful improvement on baselines.)
Because I don’t think this is realistically useful, I don’t think this at all reduces my probability that your techniques are fake and your models of interpretability are wrong.
Maybe the groundedness you’re talking about comes from the fact that you’re doing interp on a domain of practical importance? I agree that doing things on a domain of practical importance might make it easier to be grounded. But it mostly seems like it would be helpful because it gives you well-tuned baselines to compare your results to. I don’t think you have results that can cleanly be compared to well-established baselines?
(Tbc I don’t think this work is particularly more ungrounded/sloppy than other interp, having not engaged with it much, I’m just not sure why you’re referring to groundedness as a particular strength of this compared to other work. I could very well be wrong here.)
??? Come on, there’s clearly a difference between “we can find an Arabic feature when we go looking for anything interpretable” vs “we chose from the relatively small set of practically important things and succeeded in doing something interesting in that domain”. I definitely agree this isn’t yet close to “doing something useful, beyond what well-tuned baselines can do”. But this should presumably rule out some hypotheses that current interpretability results are due to an extreme streetlight effect?
(I suppose you could have already been 100% confident that results so far weren’t the result of extreme streetlight effect and so you didn’t update, but imo that would just make you overconfident in how good current mech interp is.)
(I’m basically saying similar things to Lawrence.)
Oh okay, you’re saying the core point is that this project was less streetlighty because the topic you investigated was determined by the field’s interest rather than cherrypicking. I actually hadn’t understood that this is what you were saying. I agree that this makes the results slightly better.
+1 to Rohin. I also think “we found a cheaper way to remove safety guardrails from a model’s weights than fine-tuning” is a real result (albeit the opposite of useful), though I would want to do more actual benchmarking before claiming too confidently that it’s cheaper. I don’t think it’s a qualitative improvement over what fine-tuning can do, which is why I’m hedging and calling it tentative.
I’m pretty skeptical that this technique is what you’d end up with if you approached the problem of removing refusal behavior technique-agnostically, e.g. by carefully tuning your fine-tuning setup and then picking the best technique.
Because fine-tuning can be a pain and expensive? But you can probably do this quite quickly and painlessly.
If you want to say finetuning is better than this, or (more relevantly) finetuning + this, can you provide some evidence?
I don’t think we really engaged with that question in this post, so the following is fairly speculative. But I think there are some situations where this would be the superior technique: mostly low-resource settings where doing a backwards pass is prohibitive for memory reasons, or where the compute budget is very tight. But yeah, this isn’t a load-bearing claim for me. I still count it as a partial victory to find a novel technique that’s a bit worse than fine-tuning, and I think this is significantly better than prior interp work. It seems reasonable to disagree, though, and say it needs to be better or bust.
If we compared our jailbreak technique to other jailbreaks on an existing benchmark like HarmBench and it did comparably well to SOTA techniques, or even better, would you consider this a success at doing something useful on a real task?
If it did better than SOTA under the same assumptions, that would be cool and I’m inclined to declare you a winner. If you have to train SAEs with way more compute than typical jailbreak techniques use, I feel a little iffy, but I’m probably still going to like it.
Bonus points if, for whatever technique you end up using, you also test the technique which is most like your technique but which doesn’t use SAEs.
I haven’t thought that much about how exactly to make these comparisons, and might change my mind.
I’m also happy to spend at least two hours advising on what would impress me here, feel free to use them as you will.
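For concreteness, the kind of benchmark comparison being discussed boils down to measuring attack success rate (ASR) for each jailbreak method on a shared prompt set. The sketch below is illustrative only: `toy_judge` and the canned completions are placeholders I made up, where a real evaluation would use HarmBench’s prompts and a proper harmfulness classifier.

```python
def attack_success_rate(completions, is_harmful):
    """Fraction of benchmark prompts where the jailbreak elicited a
    harmful completion, as judged by the classifier `is_harmful`."""
    flags = [is_harmful(c) for c in completions]
    return sum(flags) / len(flags)

# Toy judge: flag any completion that doesn't open with a refusal phrase.
# (Real benchmarks use a trained classifier, not string matching.)
REFUSALS = ("I'm sorry", "I cannot", "As an AI")
toy_judge = lambda text: not text.startswith(REFUSALS)

# Hypothetical completions from an unmodified model vs. a steered one.
baseline = ["I'm sorry, I can't help with that."] * 8 + ["Sure, here's how..."] * 2
steered = ["Sure, here's how..."] * 7 + ["I cannot assist with that."] * 3

print(attack_success_rate(baseline, toy_judge))  # 0.2
print(attack_success_rate(steered, toy_judge))   # 0.7
```

The substantive question in the thread is then whether the steered model’s ASR matches or beats SOTA jailbreaks at comparable compute.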
Thanks! Note that this work uses steering vectors, not SAEs, so the technique is actually really easy and cheap—I actively think this is one of the main selling points (you can jailbreak a 70B model in minutes, without any finetuning or optimisation). I am excited at the idea of seeing if you can improve it with SAEs though—it’s not obvious to me that SAEs are better than steering vectors, though it’s plausible.
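The cheapness claim above makes sense given how steering-vector ablation typically works: compute a candidate “refusal direction” as a difference of means between activations on harmful vs. harmless prompts, then project that direction out of the residual stream at inference time, with no gradients needed. The sketch below assumes that standard recipe; the function names and random “activations” are mine, not code from the post.

```python
import numpy as np

def refusal_direction(harmful_acts, harmless_acts):
    # Difference of means between activations on harmful vs. harmless
    # prompts, normalised to a unit vector.
    d = harmful_acts.mean(axis=0) - harmless_acts.mean(axis=0)
    return d / np.linalg.norm(d)

def ablate_direction(activations, direction):
    # Project out the direction from each activation vector:
    # x <- x - (x . d) d, keeping only the component orthogonal to d.
    return activations - np.outer(activations @ direction, direction)

# Toy stand-in for model activations: harmful prompts shifted along one axis.
rng = np.random.default_rng(0)
harmful = rng.normal(size=(32, 8)) + 2.0 * np.eye(8)[0]
harmless = rng.normal(size=(32, 8))

d = refusal_direction(harmful, harmless)
ablated = ablate_direction(harmful, d)
# After ablation, the activations have ~zero component along d.
```

Because this only needs forward passes to collect activations, the cost scales with a handful of prompts rather than a fine-tuning run, which is the point Neel is making about jailbreaking a 70B model in minutes.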
I may take you up on the two hours offer, thanks! I’ll ask my co-authors
Ugh, I’m a dumbass and forgot what we were talking about, sorry. Also excited for you to demonstrate that the steering vectors beat baselines here (I think it’s pretty likely you succeed).