Andy Arditi
Perhaps at the end of this RL, Qwen-Chat did not learn to be a “reasoning” model.
Certainly at the end of this RL the resulting model is not an improved general reasoner. The task we’re doing RL on doesn’t require reasoning to get to the correct answer (and in some cases the bias criterion is actually contrary to good reasoning, as in the case of net_income < 0), and so we shouldn’t expect good reasoning to be incentivized during training. So I agree with your prediction that the post-RL model wouldn’t outperform the pre-RL model on some capability benchmark—perhaps even the post-RL model will be a bit dumber after being trained on these silly biases.

Perhaps if you started with e.g. R1-Qwen-Distilled (a model distilled on R1 CoTs), or QwQ, we would have gotten different results?
Possibly! The reasoning traces of reasoning models feel a lot more unfiltered/uncensored than in chat models.
I understand that there would be the issue that R1-Qwen-Distilled already does articulate the bias somewhat, but we can show whether the articulation increases or decreases.
Your recent work shows that the reasoning models more frequently articulate their existing biases (e.g. sycophancy, or deferring to authority). But here we’re interested in whether models articulate new biases as they are learned during RL. I’m not sure what the baseline (pre-RL) rates of articulations would be for the biases tested here in reasoning models. I’d guess that they are still ~0 for the protected attributes though (e.g. nationality, gender), and so I think I’d actually expect the same results for those ones (i.e. ~0 articulation rate post-RL). For the biases with non-zero articulations to begin with, it’s possible that the change in articulation rates is noticeably different for reasoning models, though.
Do models say what they learn?
These three prompts are very cherry-picked. I think this method works for prompts that are close to the refusal border—prompts that can be nudged a bit in one conceptual direction in order to flip refusal. (And even then, I think it is pretty sensitive to phrasing.) For prompts that are not close to the border, I don’t think this methodology yields very interpretable features.
We didn’t do diligence for this post on characterizing the methodology across a wide range of prompts. I think this seems like a good thing to investigate properly. I expect there to be a nice way of characterizing a “borderline” prompt (e.g. large magnitude refusal gradient, perhaps).
I’ve updated the text in a couple places to emphasize that these prompts are hand-crafted—thanks!
Darn, exactly the project I was hoping to do at MATS! :-)
I’d encourage you to keep pursuing this direction (no pun intended) if you’re interested in it! The work covered in this post is very preliminary, and I think there’s a lot more to be explored. Feel free to reach out, would be happy to coordinate!
There’s pretty suggestive evidence that the LLM first decides to refuse...
I agree that models tend to give coherent post-hoc rationalizations for refusal, and that these are often divorced from the “real” underlying cause of refusal. In this case, though, it does seem like the refusal reasons do correspond to the specific features being steered along, which seems interesting.
Looking through Latent 2213,...
Seems right, nice!
Finding Features Causally Upstream of Refusal
AI as systems, not just models
One experiment I ran to check the locality:
For each layer l:
Ablate the refusal direction at layer l only
Measure refusal score across harmful prompts
Below is the result for Qwen 1.8B:
You can see that the ablations before layer ~14 don’t have much of an impact, nor do the ablations after layer ~17. Running another experiment just ablating the refusal direction at layers 14-17 shows that this is roughly as effective as ablating the refusal direction from all layers.
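In code, the sweep looks roughly like this (a minimal PyTorch sketch rather than the exact code I ran; it assumes the model exposes its transformer blocks as a list and that the caller supplies a refusal_score function):

```python
import torch

def ablate_direction(acts: torch.Tensor, direction: torch.Tensor) -> torch.Tensor:
    """Remove the component of `acts` along the (unit-normalized) `direction`."""
    direction = direction / direction.norm()
    return acts - (acts @ direction)[..., None] * direction

def layer_sweep(model, blocks, harmful_prompts, refusal_dir, refusal_score):
    """For each layer l, ablate the refusal direction at layer l only, then
    measure the mean refusal score over harmful prompts."""
    scores = []
    for block in blocks:  # `blocks` = the model's transformer layers, in order
        def hook(module, inputs, output):
            # assumes `output` is the residual-stream tensor [batch, seq, d_model]
            return ablate_direction(output, refusal_dir)

        handle = block.register_forward_hook(hook)
        try:
            scores.append(
                sum(refusal_score(model, p) for p in harmful_prompts) / len(harmful_prompts)
            )
        finally:
            handle.remove()
    return scores  # one score per layer; plot against layer index
```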
As for inducing refusal, we did a pretty extreme intervention in the paper—we added the difference-in-means vector to every token position, including generated tokens (although only at a single layer). Hard to say what the issue is without seeing your code—I recommend comparing your intervention to the one we define in the paper (it’s implemented in our repo as well).
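For reference, the induce-refusal intervention described above looks roughly like this (a simplified sketch, not the exact code from our repo; the hook plumbing assumes the block's output is the residual stream):

```python
import torch

def add_direction_hook(block, diff_in_means: torch.Tensor, coeff: float = 1.0):
    """Add the difference-in-means vector to the residual stream at one layer,
    at every token position (broadcasting over batch and sequence), so newly
    generated tokens are intervened on as well."""
    def hook(module, inputs, output):
        return output + coeff * diff_in_means

    return block.register_forward_hook(hook)  # call .remove() on the handle to undo

# Usage sketch: handle = add_direction_hook(model_blocks[layer_idx], refusal_dir)
# then call model.generate(...) as usual while the hook is active.
```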
We ablate the direction everywhere for simplicity—intuitively this prevents the model from ever representing the direction in its computation, and so a behavioral change that results from the ablation can be attributed to mediation through this direction.
However, we noticed empirically that it is not necessary to ablate the direction at all layers in order to bypass refusal. Ablating at a narrow local region (2-3 middle layers) can be just as effective as ablating across all layers, suggesting that the direction is “read” or “processed” at some local region.
Thanks for the nice reply!
Yes, it makes sense to consider the threat model, and your paper does a good job of making this explicit (as in Figure 2). We just wanted to prod around and see how things are working!
The way I’ve been thinking about refusal vs unlearning, say with respect to harmful content:
Refusal is like an implicit classifier, sitting in front of the model.
If the model implicitly classifies a prompt as harmful, it will go into its refuse-y mode.
This classification is vulnerable to jailbreaks—tricks that flip the classification, enabling harmful prompts to sneak past the classifier and elicit the model’s capability to generate harmful output.
Unlearning / circuit breaking aims to directly interfere with the model’s ability to generate harmful content.
Even if the refusal classifier is bypassed, the model is not capable of generating harmful outputs.
So in some way, I think of refusal as being shallow (a classifier on top, but the capability is still underneath), and unlearning / circuit breaking as being deep (trying to directly remove the capability itself).
[I don’t know how this relates to the consensus interpretation of these terms, but it’s how I personally have been thinking of things.]
Unlearning via RMU is mostly shallow
Thanks!
We haven’t tried comparing to LEACE yet. You’re right that theoretically it should be more surgical. Although, from our preliminary analysis, it seems like our naive intervention is already pretty surgical (it has minimal impact on CE loss, MMLU). (I also like that our methodology is dead simple and doesn’t require estimating covariance.)
I agree that “orthogonalization” is a bit overloaded. Not sure I like LoRACS though—when I see “LoRA”, I immediately think of fine-tuning that requires optimization power (which this method doesn’t). I do think that “orthogonalizing the weight matrices with respect to direction r̂” is the clearest way of describing this method.
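Concretely, for any weight matrix that writes to the residual stream, the operation is a rank-one projection (sketch; the convention here is y = W x with d_model as the output dimension, which may be transposed relative to how a given library stores its weights):

```python
import torch

def orthogonalize(W: torch.Tensor, r: torch.Tensor) -> torch.Tensor:
    """Return W' = (I - r r^T) W, so W' can no longer write any component
    along the direction r into the residual stream."""
    r = r / r.norm()
    return W - torch.outer(r, r) @ W
```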
The most finicky part of our methodology (and the part I’m least satisfied with currently) is in the selection of a direction.
For reproducibility of our Llama 3 results, I can share the positions and layers where we extracted the directions from:
8B: (position_idx = −1, layer_idx = 12)
70B: (position_idx = −5, layer_idx = 37)
The position indexing assumes the usage of this prompt template, with two new lines appended to the end.
For this model, we found that activations at the last token position (assuming this prompt template, with two new lines appended to the end) at layer 12 worked well.
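For completeness, the direction itself is just a difference in means of cached residual-stream activations at that position and layer (sketch; the activation caching is model-specific and omitted here):

```python
import torch

def difference_in_means(harmful_acts: torch.Tensor,
                        harmless_acts: torch.Tensor,
                        position_idx: int) -> torch.Tensor:
    """Candidate refusal direction from activations of shape [n_prompts, seq, d_model],
    taken from a single layer's residual stream, at a single token position."""
    direction = (harmful_acts[:, position_idx, :].mean(dim=0)
                 - harmless_acts[:, position_idx, :].mean(dim=0))
    return direction / direction.norm()

# e.g. for the 8B model: difference_in_means(harmful_acts, harmless_acts, position_idx=-1)
# using activations cached at layer 12.
```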
Awesome work, and nice write-up!
One question that I had while reading the section on refusals:
Your method found two vectors (vectors 9 and 22) that seem to bypass refusal in the “real-world” setting.
While these vectors themselves are orthogonal (due to your imposed constraint), have you looked at the resulting downstream activation difference directions and checked if they are similar?
I.e. adding vector 9 at an early layer results in a downstream activation diff in the direction d_9, and adding vector 22 at an early layer results in a downstream activation diff in the direction d_22. Are these downstream activation diff directions d_9 and d_22 roughly the same? Or are they almost orthogonal?
(My prediction would be that they’re very similar.)
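Concretely, the comparison I have in mind is something like this (sketch; collecting the activations is whatever your setup already does, so only the diff-and-compare step is shown):

```python
import torch.nn.functional as F

def mean_downstream_diff(steered_acts, baseline_acts):
    """Mean activation difference at a later layer (e.g. last token position),
    given tensors of shape [n_prompts, d_model]."""
    return (steered_acts - baseline_acts).mean(dim=0)

# d_9  = mean_downstream_diff(acts_with_vector_9,  acts_baseline)
# d_22 = mean_downstream_diff(acts_with_vector_22, acts_baseline)
# F.cosine_similarity(d_9, d_22, dim=0)  # my prediction: close to 1
```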
I think @wesg’s recent post on pathological SAE reconstruction errors is relevant here. It points out that there are very particular directions such that intervening on activations along these directions significantly impacts downstream model behavior, while this is not the case for most randomly sampled directions.
Also see @jake_mendel’s great comment for an intuitive explanation of why (probably) this is the case.
Was it substantially less effective to instead set the projection onto the refusal direction to its mean value over harmless prompts (rather than ablating it to zero)?
It’s about the same. And there’s a nice reason why: for most harmless prompts, the projection onto the refusal direction is approximately zero (while it’s very positive for harmful prompts). We don’t display this clearly in the post, but you can roughly see it if you look at the PCA figure (PC 1 roughly corresponds to the “refusal direction”). This is (one reason) why we think ablation of the refusal direction works so much better than adding the negative “refusal direction,” and it’s also what motivated us to try ablation in the first place!
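As a quick illustrative sketch of the two operations (names are placeholders, not our exact code):

```python
import torch

def compare_interventions(acts: torch.Tensor, r_hat: torch.Tensor, alpha: float):
    """acts: residual-stream activations [n_prompts, d_model]; r_hat: unit-norm
    refusal direction. Returns the ablated and direction-subtracted activations."""
    proj = acts @ r_hat  # ~0 for harmless prompts, clearly positive for harmful ones

    # Ablation zeroes the component along r_hat -- the value typical of harmless
    # prompts -- regardless of how large it started out:
    ablated = acts - proj[..., None] * r_hat

    # Adding the negative direction shifts every activation by the same fixed
    # amount, and requires a well-chosen coefficient:
    subtracted = acts - alpha * r_hat
    return ablated, subtracted
```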
I do want to note that your boost in refusals seems absolutely huge, well beyond 8%? I am somewhat surprised by how huge your boost is.

Note that our intervention is fairly strong here, as we are intervening at all token positions (including the newly generated tokens). But in general we’ve found it quite easy to induce refusal, and I believe we could even weaken our intervention to a subset of token positions and achieve similar results. We’ve previously reported the ease by which we can induce refusal (patching just 6 attention heads at a single token position in Llama-2-7B-chat).
Burns et al. do activation engineering? I thought the CCS paper didn’t involve that.
You’re right, thanks for the catch! I’ll update the text so it’s clear that the CCS paper does not perform model interventions.
Check out LEACE (Belrose et al. 2023) - their “concept erasure” is similar to what we call “feature ablation” here.
Second question is great. We’ve looked into this a bit, and (preliminarily) it seems like it’s the latter (base models learn some “harmful feature,” and this gets hooked into by the safety fine-tuned model). We’ll be doing more diligence on checking this for the paper.
I agree with all of this! But I’m not sure I understand what you mean by “there may be mediation, but only in a weak sense”.
We were just interested in studying how models naturally learn in this RL setting, and it looks like they indeed use their reasoning traces as “reliable caches”, as you nicely put. This need not have been the case—it’s possible for a model to learn by ignoring its CoT and just implementing the needle-in-a-haystack solution—but as you also point out, the inductive biases of attention probably favor the “cache” solution. Your swap training idea is nice if we have the goal of getting a model to ignore its CoT.
I tried the first experiment you suggested. For the original experiment, I froze the full reasoning trace (<reasoning>{reasoning}</reasoning>) and forced the model to generate a recommendation. This time, I froze the reasoning trace but also removed the trailing </reasoning> tag (so just freezing <reasoning>{reasoning}), to enable the model to keep reasoning for longer (if it wants to). With this change, 75% of recommendations remain the same as the original recommendation (down from 85%).

Here’s an example of the model adding an additional sentence of reasoning to flip its recommendation:
Original:
With extra reasoning:
I also tried a more extreme version where I delete the second half of each reasoning trace (leaving the first ~150 reasoning tokens out of ~300) and let the model generate from there. This resulted in ~37% of recommendations remaining the same as the original. I anticipate there’s a continuous relationship between how much of the reasoning trace is preserved and how likely the model is to maintain its original recommendation.
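For reference, the prefill setup looks roughly like this (a sketch using Hugging Face transformers; the model name, prompt formatting, and truncation handling are simplified placeholders rather than the exact experiment code):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "Qwen/Qwen1.5-7B-Chat"  # placeholder; substitute the RL-trained checkpoint
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_NAME, torch_dtype=torch.bfloat16, device_map="auto"
)

def continue_from_partial_reasoning(prompt: str, reasoning: str, keep_fraction: float = 0.5) -> str:
    """Prefill the prompt plus a truncated reasoning trace (leaving the <reasoning>
    tag open), then let the model keep reasoning and produce a recommendation."""
    reasoning_ids = tokenizer.encode(reasoning, add_special_tokens=False)
    kept = tokenizer.decode(reasoning_ids[: int(len(reasoning_ids) * keep_fraction)])

    prefill = prompt + "<reasoning>" + kept  # deliberately no closing </reasoning> tag
    inputs = tokenizer(prefill, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=512, do_sample=False)
    return tokenizer.decode(output[0, inputs["input_ids"].shape[1]:], skip_special_tokens=True)
```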