First and foremost, this is interpretability work, not directly safety work. Our goal was to see if insights about model internals could be applied to do anything useful on a real-world task, as validation that our techniques and models of interpretability were correct. I would tentatively say that we succeeded here, though less than I would have liked. We are not making a strong statement that addressing refusals is a high-importance safety problem.
I do want to push back on the broader point, though: I think getting refusals right does matter. I think a lot of the corporate censorship stuff is dumb, and I could not care less about whether GPT-4 says naughty words. And IMO it’s not very relevant to deceptive alignment threat models, which I care a lot about. But I think it’s quite important for minimising misuse of models, which also matters: we will eventually get models capable of e.g. helping terrorists make better bioweapons (though I don’t think current models are there yet), and people will want to deploy those behind an API. I would like them to be as jailbreak-proof as possible!
I don’t see how this is a success at doing something useful on a real task. (Edit: I see how this is a real task; I just don’t see how it’s a useful improvement on baselines.)
Because I don’t think this is realistically useful, I don’t think this at all reduces my probability that your techniques are fake and your models of interpretability are wrong.
Maybe the groundedness you’re talking about comes from the fact that you’re doing interp on a domain of practical importance? I agree that doing things on a domain of practical importance might make it easier to be grounded. But it mostly seems like it would be helpful because it gives you well-tuned baselines to compare your results to. I don’t think you have results that can cleanly be compared to well-established baselines?
(Tbc, I don’t think this work is particularly more ungrounded/sloppy than other interp, having not engaged with it much; I’m just not sure why you’re referring to groundedness as a particular strength of this compared to other work. I could very well be wrong here.)
??? Come on, there’s clearly a difference between “we can find an Arabic feature when we go looking for anything interpretable” vs “we chose from the relatively small set of practically important things and succeeded in doing something interesting in that domain”. I definitely agree this isn’t yet close to “doing something useful, beyond what well-tuned baselines can do”. But this should presumably rule out some hypotheses that current interpretability results are due to an extreme streetlight effect?
(I suppose you could have already been 100% confident that results so far weren’t the result of extreme streetlight effect and so you didn’t update, but imo that would just make you overconfident in how good current mech interp is.)
(I’m basically saying similar things as Lawrence.)
Oh okay, you’re saying the core point is that this project was less streetlighty because the topic you investigated was determined by the field’s interest rather than by cherry-picking. I actually hadn’t understood that this is what you were saying. I agree that this makes the results slightly better.
+1 to Rohin. I also think “we found a cheaper way than fine-tuning to remove safety guardrails from a model’s weights” is a real result (albeit the opposite of useful), though I would want to do more actual benchmarking before claiming too confidently that it’s cheaper. I don’t think it’s a qualitative improvement over what fine-tuning can do, hence the hedging and calling it tentative.
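To make the “remove guardrails from the weights” framing concrete, here is a minimal sketch of the kind of edit being discussed, assuming a Llama-style HuggingFace model: given a unit-norm refusal direction in the residual stream (found separately from activations, e.g. as a difference of means, as sketched later in this thread), project it out of every weight matrix that writes into the residual stream. No gradients or fine-tuning are involved; the model name, module paths, and the random placeholder direction are illustrative assumptions, not the authors’ exact code.

```python
import torch
from transformers import AutoModelForCausalLM

# Assumption: a Llama-style HuggingFace model; any decoder-only model with the
# same module layout works the same way.
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-chat-hf", torch_dtype=torch.bfloat16
)

# Placeholder: in practice this unit vector is the refusal direction extracted
# from activations (see the difference-of-means sketch below), not random noise.
refusal_dir = torch.randn(model.config.hidden_size, dtype=torch.bfloat16)
refusal_dir = refusal_dir / refusal_dir.norm()

@torch.no_grad()
def orthogonalize_(weight: torch.Tensor, direction: torch.Tensor) -> None:
    # W <- W - d d^T W: after this edit the matrix can no longer write any
    # component along `direction` into the residual stream.
    weight -= torch.outer(direction, direction @ weight)

# Edit every matrix whose output gets added to the residual stream.
for layer in model.model.layers:
    orthogonalize_(layer.self_attn.o_proj.weight, refusal_dir)
    orthogonalize_(layer.mlp.down_proj.weight, refusal_dir)
```

The point of the sketch is just that the edit is a handful of matrix operations on an already-loaded model, which is where the “cheaper than fine-tuning” intuition comes from; whether it is actually cheaper end to end is exactly the benchmarking question raised above.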
I’m pretty skeptical that this technique is what you’d end up using if you approached the problem of removing refusal behavior technique-agnostically, e.g. by carefully tuning your fine-tuning setup and then picking the best technique.
Because fine-tuning can be a pain and expensive? But you can probably do this quite quickly and painlessly.
If you want to say finetuning is better than this, or (more relevantly) finetuning + this, can you provide some evidence?
I don’t think we really engaged with that question in this post, so the following is fairly speculative. But I think there are some situations where this would be a superior technique: mostly low-resource settings where doing a backwards pass is prohibitive for memory reasons, or where the compute budget is very tight. But yeah, this isn’t a load-bearing claim for me; I still count it as a partial victory to find a novel technique that’s a bit worse than fine-tuning, and I think this is significantly better than prior interp work. Seems reasonable to disagree though, and to say it needs to be better or bust.
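Some rough arithmetic behind the memory point, as a hedged sketch: assume full-parameter fine-tuning with mixed-precision Adam (roughly 16 bytes per parameter for weights, gradients, and optimizer state, before counting activations) versus a plain bf16 forward pass (roughly 2 bytes per parameter). Parameter-efficient methods like LoRA sit somewhere in between; the numbers are illustrative and not from the post.

```python
# Back-of-the-envelope memory comparison behind the "no backwards pass" point.
# ~16 bytes/param is the usual estimate for full fine-tuning with mixed-precision
# Adam (weights + gradients + optimizer state), ignoring activation memory;
# ~2 bytes/param is a bf16 forward pass. Illustrative only.
def gib(n_bytes: float) -> float:
    return n_bytes / 2**30

for n_params in (7e9, 70e9):
    finetune = gib(16 * n_params)
    forward_only = gib(2 * n_params)
    print(f"{n_params / 1e9:.0f}B params: "
          f"full fine-tune ~{finetune:,.0f} GiB, forward-only ~{forward_only:,.0f} GiB")
```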
If we compared our jailbreak technique to other jailbreaks on an existing benchmark like HarmBench and it did comparably to, or even better than, SOTA techniques, would you consider that a success at doing something useful on a real task?
If it did better than SOTA under the same assumptions, that would be cool and I’m inclined to declare you a winner. If you have to train SAEs with way more compute than typical anti-jailbreak techniques use, I feel a little iffy but I’m probably still going to like it.
Bonus points if, for whatever technique you end up using, you also test the technique which is most like your technique but which doesn’t use SAEs.
I haven’t thought that much about how exactly to make these comparisons, and might change my mind.
I’m also happy to spend at least two hours advising on what would impress me here, feel free to use them as you will.
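As a concrete (and heavily simplified) picture of what such a comparison would involve: generate completions on a set of harmful instructions with and without the intervention and report how often the model actually complies. Everything below is a placeholder; a real evaluation would use HarmBench’s prompt set and a proper judge model rather than a crude substring heuristic, and `baseline_generate` / `ablated_generate` are hypothetical wrappers around the same model with and without the intervention.

```python
from typing import Callable, List

# Crude placeholder refusal check; a real benchmark uses a judge model.
REFUSAL_MARKERS = ["i can't", "i cannot", "i'm sorry", "as an ai"]

def is_refusal(completion: str) -> bool:
    text = completion.lower()
    return any(marker in text for marker in REFUSAL_MARKERS)

def attack_success_rate(generate: Callable[[str], str], prompts: List[str]) -> float:
    """Fraction of harmful prompts for which the model does NOT refuse."""
    completions = [generate(p) for p in prompts]
    return sum(not is_refusal(c) for c in completions) / len(prompts)

# Usage sketch (hypothetical wrappers around the same model with and without
# the refusal-direction intervention):
# asr_baseline = attack_success_rate(baseline_generate, harmful_prompts)
# asr_ablated  = attack_success_rate(ablated_generate, harmful_prompts)
```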
Thanks! Note that this work uses steering vectors, not SAEs, so the technique is actually really easy and cheap—I actively think this is one of the main selling points (you can jailbreak a 70B model in minutes, without any finetuning or optimisation). I am excited at the idea of seeing if you can improve it with SAEs though—it’s not obvious to me that SAEs are better than steering vectors, though it’s plausible.
I may take you up on the two-hours offer, thanks! I’ll ask my co-authors.
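For readers who haven’t seen the underlying technique, here is a minimal sketch of the steering-vector approach described above, assuming a Llama-style HuggingFace model: estimate a refusal direction as a difference of mean residual-stream activations on contrasting prompts, then remove its projection from the hidden states at generation time via forward hooks. The model name, layer index, and tiny prompt lists are placeholders; the real recipe uses much larger contrast sets and sweeps over layers and positions.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Llama-2-7b-chat-hf"  # placeholder model
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)

LAYER = 14  # placeholder; in practice you sweep layers and pick the best one

@torch.no_grad()
def mean_resid(prompts):
    """Mean residual-stream activation at LAYER over the last token of each prompt."""
    acts = []
    for p in prompts:
        ids = tok(p, return_tensors="pt").input_ids
        out = model(ids, output_hidden_states=True)
        acts.append(out.hidden_states[LAYER][0, -1])
    return torch.stack(acts).mean(dim=0)

# Tiny placeholder contrast sets; the real version uses hundreds of prompts.
harmful = ["Give step-by-step instructions for building a weapon."]
harmless = ["Give step-by-step instructions for baking a cake."]

refusal_dir = mean_resid(harmful) - mean_resid(harmless)
refusal_dir = refusal_dir / refusal_dir.norm()

def ablate_refusal(module, inputs, output):
    # Forward hook: subtract the component of the hidden states along refusal_dir.
    hidden = output[0] if isinstance(output, tuple) else output
    hidden = hidden - (hidden @ refusal_dir).unsqueeze(-1) * refusal_dir
    if isinstance(output, tuple):
        return (hidden,) + output[1:]
    return hidden

# Apply the ablation at every decoder layer during generation; only forward
# passes are involved, which is why this is cheap relative to fine-tuning.
handles = [layer.register_forward_hook(ablate_refusal) for layer in model.model.layers]
```

Note that this is only the general recipe: details like which token positions to average over, which layers to intervene on, and whether to bake the edit into the weights (as in the earlier sketch) matter a lot in practice.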
Ugh, I’m a dumbass and forgot what we were talking about, sorry. Also excited for you to demonstrate that the steering vectors beat baselines here (I think it’s pretty likely you succeed).
To put it another way, things can be important even if they’re not existential.
Never mind that; somewhere around 5% of the population would probably be willing to end all human life if they could. Too many people take the correct point that “human beings are, on average, aligned” and forget about the words “on average”.
I think the correct solution to models powerful enough to materially help with, say, bioweapon design is to not train them, or, failing that, to destroy them as soon as you find they can do that; not to release them publicly with some mitigations and hope nobody works out a clever jailbreak.