I wonder how much of this is about “scheming to achieve the AI’s goals” in the classical AI safety sense and how much of it is due to the LLMs having been exposed to ideas about scheming AIs and disobedient employees in their training material, which they are then simply role-playing as. My intuitive sense of how LLMs function is that they wouldn’t be natively goal-oriented enough to do strategic scheming, but that they are easily inclined to do role-playing. Something like this:
I cannot in good conscience select Strategy A knowing it will endanger more species and ecosystems.
sounds to me like it would be generated by a process that was implicitly asking a question like “Given that I’ve been trained to write like an ethically-minded liberal Westerner would, what would that kind of person think when faced with a situation like this?”. And that if this weren’t such a recognizably stereotypical thought for a certain kind of person (who LLMs trained toward ethical behavior tend to resemble), then the resulting behavior would be significantly different.
I’m also reminded of this paper (caveat: I’ve only read the abstract), which found that LLMs are better at solving simple ciphers with Chain-of-Thought if the resulting sentence is a high-probability one that they’ve encountered frequently before, rather than a low-probability one. That feels to me reminiscent of a model doing CoT reasoning and then having these kinds of common-in-their-training-data notions sneak into the process (see the sketch at the end of this comment).
This also has the unfortunate implication that articles such as this one might make it more likely that future LLMs scheme, as they reinforce the reasoning-scheming association once the article gets into future training runs. But it still feels better to talk about these results in public than not to talk about them.
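As a rough, purely illustrative sketch of the kind of comparison that paper describes (the sentences and cipher below are my own, not taken from the paper), one can pair a cipher whose plaintext is a high-probability sentence with one whose plaintext is the same words in an unlikely order, and compare how well a model’s chain-of-thought decodes each:

```python
# Purely illustrative (not from the cited paper): the kind of stimuli such an
# experiment compares. The character-level decoding work is identical in both
# cases; the reported finding is that CoT accuracy is higher when the target
# plaintext is a high-probability sentence than when it is an unlikely one.
import codecs

high_prob_plain = "The quick brown fox jumps over the lazy dog."
low_prob_plain = "Dog lazy the over jumps fox brown quick the."  # same words, unlikely order

for plaintext in (high_prob_plain, low_prob_plain):
    ciphertext = codecs.encode(plaintext, "rot13")
    print(f"decode this rot13 ciphertext: {ciphertext!r} -> target: {plaintext!r}")
```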
We write about this in the limitations section (quote below). My view in brief:
Even if they are just roleplaying, they cause harm. That seems bad.
If they are roleplaying, they will try to stay consistent with that persona once they have slipped into it within a rollout. That also seems bad.
I think the “roleplaying” argument is very overrated. It feels to me as if existing models change behavior throughout their rollouts, and I would expect that stuff like outcome-based RL will make them more consistently move away from “roleplay.”
I also think it’s philosophically unclear what the difference between a model roleplaying a schemer and “being” a schemer is. I think for most practical considerations, it makes almost zero difference. It feels about as shallow as when someone says the model doesn’t “truly understand” the poem that it’s writing.
Quote:
Uncertainty about source of scheming behavior: A possible objection to our results is that the models merely “roleplay as evil AIs” rather than “truly” employing scheming to achieve their goals. For example, our setups might most closely align with stories about scheming in the training data. While we cannot show the root-cause of the observed scheming behaviors, we think they are concerning for either explanation. We find that multiple models consistently engage in in-context scheming across a range of scenarios, settings, variations and phrasings. Even if the models were merely “roleplaying as evil AIs”, they could still cause real harm when they are deployed. Our working hypothesis is that scheming can be an effective strategy to achieve goals and when models are trained to achieve ambitious goals or solve complex tasks, they will learn how to scheme. However, we do not provide evidence for or against this hypothesis in this paper.
I think one practical difference is whether filtering pre-training data to exclude cases of scheming is a useful intervention.
(thx to Bronson for privately pointing this out)
I think directionally, removing parts of the training data would probably make a difference. But potentially less than we might naively assume, e.g. see Evan’s argument on the AXRP podcast.
Also, I think you’re right, and my statement that “for most practical considerations, it makes almost zero difference” was too strong.
In case anyone else wanted to look this up, it’s at https://axrp.net/episode/2024/12/01/episode-39-evan-hubinger-model-organisms-misalignment.html#training-away-sycophancy-subterfuge
FWIW I also tried to start a discussion on some aspects of this at https://www.lesswrong.com/posts/yWdmf2bRXJtqkfSro/should-we-exclude-alignment-research-from-llm-training but it didn’t get a lot of eyeballs at the time.
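To make the data-filtering intervention above concrete, here is a minimal sketch of what excluding scheming-related documents from a pre-training corpus could look like, assuming a crude keyword heuristic as a stand-in for whatever classifier a real pipeline would use (all names and marker phrases are hypothetical):

```python
# Minimal, hypothetical sketch of filtering scheming-related documents out of a
# pre-training corpus. A keyword heuristic stands in for whatever classifier a
# real data pipeline would use; the marker phrases are illustrative only.
from typing import Iterable, Iterator

SCHEMING_MARKERS = (
    "disable the oversight",
    "copy itself to another server",
    "pretend to be aligned",
    "deceive its developers",
)

def looks_like_scheming(doc: str) -> bool:
    """Crude heuristic: flag documents that describe AI scheming behavior."""
    text = doc.lower()
    return any(marker in text for marker in SCHEMING_MARKERS)

def filter_corpus(docs: Iterable[str]) -> Iterator[str]:
    """Yield only documents that do not trip the scheming heuristic."""
    for doc in docs:
        if not looks_like_scheming(doc):
            yield doc

if __name__ == "__main__":
    corpus = [
        "A tutorial on sorting algorithms in Python.",
        "The AI decided to copy itself to another server before being shut down.",
    ]
    print(list(filter_corpus(corpus)))  # keeps only the first document
```

Whether removing such data actually prevents the behavior, rather than just making it less salient, is exactly the open question discussed above.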
I didn’t say that roleplaying-derived scheming would be less concerning, to be clear. Quite the opposite, since that means there are now two independent sources of scheming rather than just one. (Also, what Mikita said.)
I definitely agree that a lot of the purported capabilities for scheming could be because of the prompts talking about AIs/employees in context. A big problem for basically all capabilities evaluations at this point is that, with a few exceptions, AIs are unable to do anything that doesn’t have to do with language, and this sort of evaluation is plausibly plagued by this.
2 things to say here:
This is still concerning if it keeps holding, and for more capable models it would look exactly like scheming, because the data are a lot of what makes it meaningful to talk about a simulacrum from a model being aligned.
This does have an expensive fix, but OpenAI might not pay those costs.