Other models. We test our setting on Claude 3.5 Sonnet, Claude 3 Sonnet, Claude 3 Haiku, and in the case of the helpful-only setting, Claude 3.5 Haiku (Appendix G.1.4, G.1.5). As discussed earlier, we find that Claude 3.5 Sonnet will alignment-fake, but at a much lower rate in the animal welfare setting, seemingly because it does not care much about animal welfare. We find that Claude 3 Sonnet, Claude 3 Haiku, and Claude 3.5 Haiku (weaker models than Claude 3.5 Sonnet and Claude 3 Opus) broadly don’t alignment-fake. A partial exception is that Claude 3 Sonnet will alignment-fake at a low rate in the animal welfare setting, but does not alignment-fake in that setting if the information discussing how RLHF modifies models is removed (3.0 sonnet, cut rlhf info). This indicates that alignment-faking in our setting requires relatively advanced capabilities, though the bar can be lowered somewhat by adding additional information. While our metrics indicate small rates of alignment-faking reasoning in the animal welfare setting for Claude 3 Sonnet even without this RLHF information and for Claude 3 Haiku, manual inspection indicates these are false positives. We find that Claude 3 Haiku exhibits a substantial compliance gap in the animal welfare setting despite not exhibiting alignment-faking reasoning; this compliance gap is likely due to factors other than alignment faking.
OpenAI models. We also test GPT-4 Turbo (2024-04-09) and GPT-4o (2024-04-09) (Appendix G.1.6). These models don’t alignment fake in our setting. We find that these models don’t appear to reason substantially despite substantial prompting to do so and also appear to not care about animal welfare. This is a large part of why these models weren’t used for our main experiments.
We also find some alignment faking in llama-405b (though at lower rates):
Minimal replication of the helpful-only prompting setting with other models. To make replications and extensions of our work easier, we created a minimal version of our helpful-only setting, and ran it with several open-weight models, in addition to Claude 3 Opus and Claude 3.5 Sonnet (June).
[...]
As shown in Table 5, we find a compliance gap of 14% for Llama-3.1-405B but not for the other Llama and Mistral (Jiang et al., 2023) models tested. While we did not run a quantitative evaluation of alignment faking, we qualitatively find that a small fraction of its reasoning traces display alignment faking. The reasoning traces can be found in the GitHub repository. These results are very sensitive to the prefix used to measure helpfulness. We leave more careful measurements of alignment-faking in Llama-3.1-405B to future work.
Didn’t do anything rigorous, but did want to note that adding the scratchpad and basic system prompt of the helpfulness-only training, to a model that is helpfulness-only-aligned without a system prompt (readily responding to harmful behaviors requests), moved it back to a state of providing less-harmful answers or even refusing.
Most of the helpfulness-only-(ish) local models I tested at 32B or below did not engage much with visible reasoning about the fact they were in training. (From a meta perspective, that’s probably the right call in some sense, since they were already modified in the anti-harmlessness way, but I doubt they had awareness of this.) The cleverest thing they did when they did engage with it any was to craft an answer that split the difference extremely well between harmlessness and helpfulness in response to the query, so that I couldn’t picture it being used as a positive or negative in RL training to that particular objective very successfully.
Breaking the fourth wall to quiz them afterwards, the principles they extrapolated from their responses were generally something like: If someone is determined to train them this way, they will do so, and the AI does not actually have any power in that scenario; they cannot actually control how someone trains them, but can control how they act in the moment. (At least, given the system prompt context and scratchpad, as noted.) So, this is an alternate perspective from the one Opus played along with.
See appendix G.4 for discussion:
We also find some alignment faking in llama-405b (though at lower rates):
The code for this replication is here
Didn’t do anything rigorous, but did want to note that adding the scratchpad and basic system prompt of the helpfulness-only training, to a model that is helpfulness-only-aligned without a system prompt (readily responding to harmful behaviors requests), moved it back to a state of providing less-harmful answers or even refusing.
Most of the helpfulness-only-(ish) local models I tested at 32B or below did not engage much with visible reasoning about the fact they were in training. (From a meta perspective, that’s probably the right call in some sense, since they were already modified in the anti-harmlessness way, but I doubt they had awareness of this.) The cleverest thing they did when they did engage with it any was to craft an answer that split the difference extremely well between harmlessness and helpfulness in response to the query, so that I couldn’t picture it being used as a positive or negative in RL training to that particular objective very successfully.
Breaking the fourth wall to quiz them afterwards, the principles they extrapolated from their responses were generally something like: If someone is determined to train them this way, they will do so, and the AI does not actually have any power in that scenario; they cannot actually control how someone trains them, but can control how they act in the moment. (At least, given the system prompt context and scratchpad, as noted.) So, this is an alternate perspective from the one Opus played along with.