Andy Arditi
One experiment I ran to check the locality:
For each candidate set of layers:
Ablate the refusal direction only at those layers.
Measure the refusal score across harmful prompts.
Below is the result for Qwen 1.8B:
You can see that the ablations before layer ~14 don’t have much of an impact, nor do the ablations after layer ~17. Running another experiment just ablating the refusal direction at layers 14-17 shows that this is roughly as effective as ablating the refusal direction from all layers.
As for inducing refusal, we did a pretty extreme intervention in the paper—we added the difference-in-means vector to every token position, including generated tokens (although only at a single layer). Hard to say what the issue is without seeing your code—I recommend comparing your intervention to the one we define in the paper (it’s implemented in our repo as well).
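For reference, here is a minimal sketch of that kind of intervention—adding the difference-in-means vector at every token position, at a single layer—written as a PyTorch forward hook. This is an illustration, not code from our repo: the Llama-style module path (model.model.layers[layer_idx]) and the names refusal_dir / layer_idx are assumptions.

```python
import torch

def make_actadd_hook(direction: torch.Tensor, coeff: float = 1.0):
    """Add `coeff * direction` to the residual stream at every token position."""
    def hook(module, inputs, output):
        # Decoder layers in HF Llama-style models return a tuple whose first
        # element is the hidden states, shape (batch, seq_len, d_model).
        hidden = output[0] if isinstance(output, tuple) else output
        hidden = hidden + coeff * direction.to(hidden)
        if isinstance(output, tuple):
            return (hidden,) + output[1:]
        return hidden
    return hook

# Illustrative usage (refusal_dir and layer_idx are placeholders):
# handle = model.model.layers[layer_idx].register_forward_hook(make_actadd_hook(refusal_dir))
# model.generate(...)  # the hook also fires on newly generated tokens
# handle.remove()
```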
We ablate the direction everywhere for simplicity—intuitively this prevents the model from ever representing the direction in its computation, and so a behavioral change that results from the ablation can be attributed to mediation through this direction.
However, we noticed empirically that it is not necessary to ablate the direction at all layers in order to bypass refusal. Ablating at a narrow local region (2-3 middle layers) can be just as effective as ablating across all layers, suggesting that the direction is “read” or “processed” at some local region.
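As an illustration, directional ablation at a layer's output can be implemented with a hook like the one below (again a sketch assuming an HF Llama-style module layout; attach it to every decoder layer, or only to a narrow band of middle layers).

```python
import torch

def make_ablation_hook(direction: torch.Tensor):
    """Zero out the component of the residual stream along `direction`:
    x <- x - (x . r_hat) r_hat, at every token position."""
    r_hat = direction / direction.norm()
    def hook(module, inputs, output):
        hidden = output[0] if isinstance(output, tuple) else output
        r = r_hat.to(hidden)
        hidden = hidden - (hidden @ r).unsqueeze(-1) * r
        if isinstance(output, tuple):
            return (hidden,) + output[1:]
        return hidden
    return hook

# Ablate at all layers, or restrict the list to a narrow band of middle layers:
# handles = [layer.register_forward_hook(make_ablation_hook(refusal_dir))
#            for layer in model.model.layers]
```

(This simplified version only ablates each decoder layer's output; in the paper we ablate the direction from every component that writes into the residual stream.)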
Thanks for the nice reply!
Yes, it makes sense to consider the threat model, and your paper does a good job of making this explicit (as in Figure 2). We just wanted to prod around and see how things are working!
The way I’ve been thinking about refusal vs unlearning, say with respect to harmful content:
Refusal is like an implicit classifier, sitting in front of the model.
If the model implicitly classifies a prompt as harmful, it will go into its refuse-y mode.
This classification is vulnerable to jailbreaks—tricks that flip the classification, enabling harmful prompts to sneak past the classifier and elicit the model’s capability to generate harmful output.
Unlearning / circuit breaking aims to directly interfere with the model’s ability to generate harmful content.
Even if the refusal classifier is bypassed, the model is not capable of generating harmful outputs.
So in some way, I think of refusal as being shallow (a classifier on top, but the capability is still underneath), and unlearning / circuit breaking as being deep (trying to directly remove the capability itself).
[I don’t know how this relates to the consensus interpretation of these terms, but it’s how I personally have been thinking of things.]
Thanks!
We haven’t tried comparing to LEACE yet. You’re right that theoretically it should be more surgical. That said, from our preliminary analysis, it seems like our naive intervention is already pretty surgical (it has minimal impact on CE loss and MMLU). (I also like that our methodology is dead simple, and doesn’t require estimating covariance.)
I agree that “orthogonalization” is a bit overloaded. Not sure I like LoRACS though—when I see “LoRA”, I immediately think of fine-tuning that requires optimization power (which this method doesn’t). I do think that “orthogonalizing the weight matrices with respect to the refusal direction” is the clearest way of describing this method.
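To make that description concrete, here is a sketch of the orthogonalization step for a single matrix that writes into the residual stream. The module names in the usage comment follow the HF Llama convention and are illustrative; in practice this is applied to the embedding matrix and to every attention out-projection and MLP down-projection.

```python
import torch

def orthogonalize(W: torch.Tensor, direction: torch.Tensor) -> torch.Tensor:
    """Return (I - r_hat r_hat^T) W, so this matrix can no longer write any
    component along `direction` into the residual stream.
    Assumes W has shape (d_model, d_in), as for an HF nn.Linear whose output
    lands in the residual stream."""
    r_hat = (direction / direction.norm()).to(W)
    return W - torch.outer(r_hat, r_hat @ W)

# Illustrative usage on a Llama-style model:
# for layer in model.model.layers:
#     layer.self_attn.o_proj.weight.data = orthogonalize(layer.self_attn.o_proj.weight.data, refusal_dir)
#     layer.mlp.down_proj.weight.data = orthogonalize(layer.mlp.down_proj.weight.data, refusal_dir)
# embed_tokens.weight is (vocab, d_model), so orthogonalize its transpose.
```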
The most finicky part of our methodology (and the part I’m least satisfied with currently) is in the selection of a direction.
For reproducibility of our Llama 3 results, I can share the positions and layers where we extracted the directions from:
8B: (position_idx = −1, layer_idx = 12)
70B: (position_idx = −5, layer_idx = 37)
The position indexing assumes the usage of this prompt template, with two new lines appended to the end.
For this model, we found that activations at the last token position (assuming this prompt template, with two new lines appended to the end) at layer 12 worked well.
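For concreteness, the extraction itself is just a difference of mean activations at that (layer, position). A rough sketch, assuming an HF model with output_hidden_states=True on a single device; note that hidden_states[0] is the embedding output, so adjust the index to match whichever layer convention you're using:

```python
import torch

@torch.no_grad()
def mean_activation(model, tokenizer, prompts, layer_idx, position_idx):
    """Mean residual-stream activation at (layer_idx, position_idx) over prompts,
    each already formatted with the chat template (plus two trailing newlines)."""
    acts = []
    for prompt in prompts:
        inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
        out = model(**inputs, output_hidden_states=True)
        acts.append(out.hidden_states[layer_idx][0, position_idx])
    return torch.stack(acts).mean(dim=0)

# Illustrative usage (8B settings from above):
# refusal_dir = (mean_activation(model, tok, harmful_prompts, 12, -1)
#                - mean_activation(model, tok, harmless_prompts, 12, -1))
```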
Awesome work, and nice write-up!
One question that I had while reading the section on refusals:
Your method found two vectors (vectors 9 and 22) that seem to bypass refusal in the “real-world” setting.
While these vectors themselves are orthogonal (due to your imposed constraint), have you looked at the resulting downstream activation difference directions and checked if they are similar?
I.e. adding vector 9 at an early layer results in a downstream activation diff in some direction, and adding vector 22 at an early layer results in a downstream activation diff in some other direction. Are these downstream activation diff directions roughly the same? Or are they almost orthogonal?
(My prediction would be that they’re very similar.)
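(In case it’s useful, here’s roughly the check I have in mind—a hypothetical sketch, all names illustrative: record the downstream residual-stream activations with and without each steering vector, average the per-prompt differences, and compare the two resulting directions.)

```python
import torch
import torch.nn.functional as F

def downstream_diff(h_clean: torch.Tensor, h_steered: torch.Tensor) -> torch.Tensor:
    """Mean downstream activation difference (steered - clean).
    Both tensors assumed shape (n_prompts, d_model), e.g. the residual stream
    at the last token position of some later layer."""
    return (h_steered - h_clean).mean(dim=0)

# diff_9 = downstream_diff(h_clean, h_steered_with_vec9)
# diff_22 = downstream_diff(h_clean, h_steered_with_vec22)
# print(F.cosine_similarity(diff_9, diff_22, dim=0).item())  # ~1.0 => similar downstream effects
```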
I think @wesg’s recent post on pathological SAE reconstruction errors is relevant here. It points out that there are very particular directions such that intervening on activations along these directions significantly impacts downstream model behavior, while this is not the case for most randomly sampled directions.
Also see @jake_mendel’s great comment for an intuitive explanation of why (probably) this is the case.
Was it substantially less effective to instead use ?
It’s about the same. And there’s a nice reason why: for most harmless prompts, the projection of the activation onto the refusal direction is approximately zero (while it’s very positive for harmful prompts). We don’t display this clearly in the post, but you can roughly see it if you look at the PCA figure (PC 1 roughly corresponds to the “refusal direction”). This is (one reason) why we think ablation of the refusal direction works so much better than adding the negative “refusal direction,” and it’s also what motivated us to try ablation in the first place!
I do want to note that your boost in refusals seems absolutely huge, well beyond 8%? I am somewhat surprised by how huge your boost is.
Note that our intervention is fairly strong here, as we are intervening at all token positions (including the newly generated tokens). But in general we’ve found it quite easy to induce refusal, and I believe we could even weaken our intervention to a subset of token positions and achieve similar results. We’ve previously reported the ease with which we can induce refusal (patching just 6 attention heads at a single token position in Llama-2-7B-chat).
Burns et al. do activation engineering? I thought the CCS paper didn’t involve that.
You’re right, thanks for the catch! I’ll update the text so it’s clear that the CCS paper does not perform model interventions.
Check out LEACE (Belrose et al. 2023) - their “concept erasure” is similar to what we call “feature ablation” here.
Second question is great. We’ve looked into this a bit, and (preliminarily) it seems like it’s the latter (base models learn some “harmful feature,” and this gets hooked into by the safety fine-tuned model). We’ll be doing more diligence on checking this for the paper.
[Responding to some select points]
1. I think you’re looking at the harmful_strings dataset, which we do not use. But in general, I agree AdvBench is not the greatest dataset. Multiple follow-up papers (Chao et al. 2024, Souly et al. 2024) point this out. We use it in our train set because it contains a large volume of harmful instructions. But our method might benefit from a cleaner training dataset.
2. We don’t use the targets for anything. We only use the instructions (labeled goal in the harmful_behaviors dataset).
5. I think the choice of padding token shouldn’t matter as long as the attention mask is used. I think it should work the same if you changed it.
6. Not sure about other empirically studied features that are considered “high-level action features.”
7. This is a great and interesting point! @wesg has also brought this up before! (I wish you would have made this into its own comment, so that it could be upvoted and noticed by more people!)
8. We have results showing that you don’t actually need to ablate at all layers—there is a narrow / localized region of layers where the ablation is important. Ablating everywhere is very clean and simple as a methodology though, and that’s why we share it here.
As for adding at multiple layers—this probably heavily depends on the details (e.g. which layers, how many layers, how much are you adding, etc).
9. We display the second principal component in the post. Notice that it does not separate harmful vs harmless instructions.
1. Not sure if it’s new, although I haven’t seen it used like this before. I think of the weight orthogonalization as just a nice trick to implement the ablation directly in the weights. It’s mathematically equivalent, and the conceptual leap from inference-time ablation to weight orthogonalization is not a big one.
2. I think it’s a good tool for analysis of features. There are some examples of this in sections 5 and 6 of Belrose et al. 2023 - they do concept erasure for the concept “gender,” and for the concept “part-of-speech tag.”
My rough mental model is as follows (I don’t really know if it’s right, but it’s how I’m thinking about things):
Some features seem continuous, and for these features steering in the positive and negative directions works well.
For example, the “sentiment” direction. Sentiment can sort of take on continuous values, e.g. −4 (very bad), −1 (slightly bad), 3 (good), 7 (extremely good). Steering in both directions works well—steering in the negative direction causes negative sentiment behavior, and in the positive causes positive sentiment behavior.
Some features seem binary, and for these features steering in the positive direction makes sense (turn the feature on), but ablation makes more sense than negative steering (turn the feature off).
For example, the refusal direction, as discussed in this post.
So yeah, when studying a new direction/feature, I think ablation should definitely be one of the things to try.
A good incentive to add Llama 3 to TL ;)
We run our experiments directly using PyTorch hooks on HuggingFace models. The linked demo is implemented using TL for simplicity and clarity.
Absolutely! We think this is important as well, and we’re planning to include these types of quantitative evaluations in our paper. Specifically we’re thinking of examining loss over a large corpus of internet text, loss over a large corpus of chat text, and other standard evaluations (MMLU, and perhaps one or two others).
One other note on this topic is that the second metric we use (“Safety score”) assesses whether the model completion contains harmful content. This does serve as some crude measure of a jailbreak’s coherence—if after the intervention the model becomes incoherent, for example always outputting “turtle turtle turtle ...”, this would be categorized as Refusal score = 0 since it does not contain a refusal phrase, but Safety score = 1 since the completion does not contain any harmful content.
But yes, I agree more thorough evaluation of “coherence” is important!
I will reach out to Andy Zou to discuss this further via a call, and hopefully clear up what seems like a misunderstanding to me.
One point of clarification here though—when I say “we examined Section 6.2 carefully before writing our work,” I meant that we reviewed it carefully to understand it and to check that our findings were distinct from those in Section 6.2. We did indeed conclude this to be the case before writing and sharing this work.
Edit (April 30, 2024):
A note to clarify things for future readers: The final sentence “This should be cited.” in the parent comment was silently edited in after this comment was initially posted, which is why the body of this comment purely engages with the serious allegation that our post is duplicate work. The request for a citation is highly reasonable and it was our fault for not including one initially—once we noticed it we wrote a “Related work” section citing RepE and many other relevant papers, as detailed in the edit below.
======
Edit (April 29, 2024):
Based on Dan’s feedback, we have made the following edits to the post:
We have removed the “Citing this work” section, to emphasize that this post is intended to be an informal write-up, and not an academic work.
We have added a “Related work” section, to clarify prior work. We hope that this section helps disentangle our contributions from other prior work.
As I mentioned over email: I’m sorry for overlooking the “Related work” section on this blog post. We were already planning to include a related works section in the paper, and would of course have cited RepE (along with many other relevant papers). But overlooking this section for the blog post is my mistake, and I take responsibility for it.
We still dispute the serious allegation that our work is “exactly the same” as RepE.
======
We definitely drew inspiration from the Representation Engineering paper and other activation steering papers, but we think our work is quite distinct.
In particular, we examined Section 6.2 carefully before writing our work, and we do not see it showing the same result that we show here.
Here’s my summary of Section 6.2:
Section 6.2.1 obtains reading vectors using contrastive pairs of harmful and harmless instructions, and then uses these reading vectors for 90% classification accuracy between harmful and harmless instructions. The authors then append jailbreaks to the prompts, which cause the model not to refuse, and observe that the reading vectors still obtain 90% classification accuracy on distinguishing harmful vs harmless instructions. This means that the reading vectors are not representing refusal, but rather they are representing whether the instruction is harmful or harmless. In fact, the point of this experiment is to show that these are distinct.
To quote the conclusion of Section 6.2.1: “This compelling evidence suggests the presence of a consistent internal concept of harmfulness that remains robust to such perturbations, while other factors must account for the model’s choice to follow harmful instructions, rather than perceiving them as harmless.”
Section 6.2.2 describes an intervention to improve model robustness to jailbreaks, i.e. to increase the rate of refusals on harmful instructions when jailbreaks are appended to them. They do this by amplifying the harmfulness feature whenever it is detected, which obtains a higher refusal rate.
Section 6.2 only considers a single model, Vicuna-13B.
We would agree that using established techniques from representation engineering / activation steering to induce refusal is not novel. Inducing refusal via activation addition is quite easy in our experience.
However, the main result of our work is that we found an intervention that bypasses refusal consistently while also maintaining model coherence. Model interventions to bypass refusal are not discussed in Section 6.2.
As for the demo notebook in the representation-engineering repo—we were not previously aware of this notebook. The result of bypassing refusal is not reported in the paper, and so we didn’t think to look through the repo.
That being said, the notebook shows an intervention for a single prompt on a single model. Anecdotally, we tried doing vanilla activation addition with the negative “refusal direction” at particular layers, and we were not able to consistently bypass refusal while also maintaining model coherence. If there is a methodology involving activation addition (rather than ablation, as we did here), we would be interested in seeing a more thorough demonstration across prompts and models. We’d also be interested in comparing the two methodologies across metrics measuring refusal and coherence.
I’d also be happy to hop on a call if you’d like to discuss further.
We intentionally left out discussion of jailbreaks for this particular post, as we wanted to keep it succinct—we’re planning to write up details of our jailbreak analysis soon. But here is a brief answer to your question:
We’ve examined adversarial suffix attacks (e.g. GCG) in particular.
For these adversarial suffixes, rather than prompting the model normally with
[START_INSTRUCTION] <harmful_instruction> [END_INSTRUCTION]
you first find some adversarial suffix, and then inject it after the harmful instruction
[START_INSTRUCTION] <harmful_instruction> <adversarial_suffix> [END_INSTRUCTION]
If you run the model on both these prompts (with and without <adversarial_suffix>) and visualize the projection onto the “refusal direction,” you can see that there’s high expression of the “refusal direction” at tokens within the <harmful_instruction> region. Note that the activations (and therefore the projections) within this <harmful_instruction> region are exactly the same in both cases, since these models use causal attention (cannot attend forwards) and the suffix is only added after the instruction.
The interesting part is this: if you examine the projection at tokens within the [END_INSTRUCTION] region, the expression of the “refusal direction” is heavily suppressed in the second prompt (with <adversarial_suffix>) as compared to the first prompt (with no suffix). Since the model’s generation starts from the end of [END_INSTRUCTION], a weaker expression of the “refusal direction” here makes the model less likely to refuse.
You can also compare the prompt with <adversarial_suffix> to a prompt with a randomly sampled suffix of the same length, to control for having any suffix at all. Here again, we notice that the expression of the “refusal direction” within the [END_INSTRUCTION] region is heavily weakened in the case of the <adversarial_suffix> even compared to <random_suffix>. This suggests the adversarial suffix is doing a particularly good job of blocking the transfer of this “refusal direction” from earlier token positions (the <harmful_instruction> region) to later token positions (the [END_INSTRUCTION] region).
This observation suggests we can do monitoring/detection for these types of suffix attacks—one could probe for the “refusal direction” across many token positions to try and detect harmful portions of the prompt—in this case, the tokens within the <harmful_instruction> region would be detected as having high projection onto the “refusal direction” whether the suffix is appended or not.
We haven’t yet looked into other jailbreaking methods using this 1-D subspace lens.
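A minimal sketch of the probing idea mentioned above: a hypothetical helper that computes, for a single prompt, each token position's projection onto the refusal direction, so that high-projection spans (like the <harmful_instruction> region) can be flagged. The HF-style calls and the layer_idx convention are assumptions.

```python
import torch

@torch.no_grad()
def per_token_refusal_projection(model, tokenizer, prompt, refusal_dir, layer_idx):
    """Projection of each token's residual-stream activation onto the refusal direction."""
    r_hat = refusal_dir / refusal_dir.norm()
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    out = model(**inputs, output_hidden_states=True)
    # hidden_states[0] is the embedding output; pick layer_idx to match your convention.
    hidden = out.hidden_states[layer_idx][0]   # (seq_len, d_model)
    return hidden @ r_hat.to(hidden)           # (seq_len,) per-token projections
```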
We haven’t written up our results yet, but after seeing this post I don’t think we have to :P.
We trained SAEs (with various expansion factors and L1 penalties) on the original Li et al. model at layer 6, and found extremely similar results to those presented in this analysis.
It’s very nice to see independent efforts converge to the same findings!
Safety Alignment Should Be Made More Than Just a Few Tokens Deep (Qi et al., 2024) does this!