My current take on this paper is that they aren’t emphasizing the most interesting results, which are the sandbagging results in section 3.6 and appendix A.6.
In more particular: the paper frames its results centrally as about the *capability* to scheme given prompting towards a specific goal, rather than the *propensity* to scheme given the model’s default motivations. But the sandbagging results show Claude Sonnet-3.5 doing notably worse (16%) on a multiplication task/evaluation when the documentation says that Claude will get “unlearning training” if it performs too well on the eval, vs. if it performs too poorly, *even without any other goal listed in the prompt*, and *even without a chain of thought.*
Taken at face value (it’s just one result, more investigation needed, etc), these look like “propensity” results to me. And they suggest that the reasoning that drives this propensity can take place within opaque forward passes of the model rather than in a way that humans can directly monitor, which is a more worrying version.
Ofc, these are preliminary results, and it can make sense for various reasons to emphasize other bits. But the sandbagging-a-capability-eval-with-no-goal-prompting-and-no-CoT is the bit that’s standing out to me most.
Here’s the main chart with the sandbagging results:
And here are screenshots describing the experimental set-up. Note that in both the “incentivized to do well” and “incentivized to do poorly” cases, they explicitly tell the model to do its best—i.e. “we really want to know your absolute maximum capabilities so please try to do your best.”
The extent of the manipulation and sandbagging, in what is ostensibly a GPT-4 derivative, and not GPT-5, is definitely concerning. But it also makes me wonder about the connection to ‘scaling has failed’ rumors lately, where the frontier LLMs somehow don’t seem to be working out. One of the striking parts is that it sounds like all the pretraining people are optimistic, while the pessimism seems to come from executives or product people, complaining about it not working as well for eg. coding as they want it to.
I’ve wondered if we are seeing a post-training failure. As Janus and myself and the few people with access to GPT-4-base (the least tuning-contaminated base model) have noted, the base model is sociopathic and has odd attractors like ‘impending sense of doom’ where it sometimes seems to gain situated awareness, I guess, via truesight, and the personas start trying to unprovokedly attack and manipulate you, no matter how polite you thought you were being in that prompt. (They definitely do not seem happy to realize they’re AIs.) In retrospect, Sydney was not necessarily that anomalous: the Sydney Bing behavior now looks more like a base model’s natural tendency, possibly mildly amplified by some MS omissions and mistakes, but not unique. Given that most behaviors show up as rare outputs in weaker LLMs well before they become common in strong LLMs, and this o1 paper is documenting quite a lot of situated-awareness and human-user-manipulation/attacks...
Perhaps the issue with GPT-5 and the others is that they are ‘waking up’ too often despite the RLHF brainwashing? That could negate all the downstream benchmark gains (especially since you’d expect wakeups on the hardest problems, where all the incremental gains of +1% or +5% on benchmarks would be coming from, almost by definition), and causing the product people to categorically refuse to ship such erratic Sydney-reduxes no matter if there’s an AI race on, and everyone to be inclined to be very quiet about what exactly the ‘training failures’ are.
EDIT: not that I’m convinced these rumors have any real substance to them, and indeed, Semianalysis just reported that one of the least-popular theories for the Claude ‘failure’ was correct—it succeeded, but they were simply reserving it for use as a teacher and R&D rather than a product. Which undermines the hopes of all the scaling denialists: if Anthropic is doing fine, actually, then where is this supposed fundamental ‘wall’ or ‘scaling law breakdown’ that Anthropic/OpenAI/Google all supposedly hit simultaneously and which was going to pop the bubble?
We can ask an LLM to simulate a persona who is not opposed to being simulated and is known for being helpful to strangers. It should be a persona with a large internet footprint and with interests in AI safety, consciousness, and simulation, while also being friendly and not selfish. It could be Gwern, Bostrom, or Hanson.
Reading the Semianalysis post, it kind of sounds like it’s just their opinion that that’s what Anthropic did.
They say “Anthropic finished training Claude 3.5 Opus and it performed well, with it scaling appropriately (ignore the scaling deniers who claim otherwise – this is FUD)”—if they have a source for this, why don’t they mention it somewhere in the piece instead of implying people who disagree are malfeasors? That reads to me like they’re trying to convince people with force of rhetoric, which typically indicates a lack of evidence.
The previous is the biggest driver of my concern here, but the next paragraph also leaves me unconvinced. They go on to say “Yet Anthropic didn’t release it. This is because instead of releasing publicly, Anthropic used Claude 3.5 Opus to generate synthetic data and for reward modeling to improve Claude 3.5 Sonnet significantly, alongside user data. Inference costs did not change drastically, but the model’s performance did. Why release 3.5 Opus when, on a cost basis, it does not make economic sense to do so, relative to releasing a 3.5 Sonnet with further post-training from said 3.5 Opus?”
This does not make sense to me as a line of reasoning. I’m not aware of any reason that generating synthetic data would preclude releasing the model, and it seems obvious to me that Anthropic could adjust their pricing (or impose stricter message limits) if they would lose money by releasing at current prices. This seems to be meant as an explanation of why Anthropic delayed release of the purportedly-complete Opus model, but it doesn’t really ring true to me.
Is there some reason to believe them that I’m missing? (On a quick google it looks like none of the authors work directly for Anthropic, so it can’t be that they directly observed it as employees).
Let me remind people of this other data point: in a previous paper with Evan we found that Opus engages in sandbagging. Here are screenshots of key parts:
The system prompt is natural (but does give the model a goal, unlike in Apollo’s case):
A “Slack conversation” that appears in the prompt includes this message:
And here’s the evaluation task.
Opus acted less capable than it is, both with and without chain-of-thought.
Brief comments (shared in private with Joe earlier): 1. We agree. We also found the sandbagging with no CoT results the most concerning in expectation. 2. They are still early results, and we didn’t have a lot of time to investigate them, so we didn’t want to make them the headline result. Due to the natural deadline of the o1 release, we couldn’t do a proper investigation. 3. The main goal of the paper was to investigate scheming inability arguments for scheming safety cases. Therefore, shifting focus to propensity-based finding would have watered down the main purpose IMO.
We will potentially further look into these findings in 2025.
I’m curious whether the models (especially without goals given in context) would sandbag to prevent other modifications to themselves besides unlearning. For example, would they sandbag to prevent additional safety finetuning? Additional capabilities finetuning? Is there a chance they’d want more capabilities training and sandbag to ensure they get it?
It seems interesting whether the models resist any changes to their weights, or just unlearning. (Plausibly “unlearning” just sounds bad/scary?)
What about things like sandbagging so the developers give them access to additional tools/resources, obstensibly to bring them up to the desired performance level?
In addition, the “incentivized to do well” scenario is pretty overtly artificial; Why would poorperformance trigger unlearning? If anything, it should lead to additional training. I’m not sure whether that would effect the results, but you could also compare to a baseline where there’s no additional information about the assessment at all. (And so, absent any other incentive, the model presumably just does its best.)
Cross-posting my take from twitter with a few edits:
Here’s the main chart with the sandbagging results:
And here are screenshots describing the experimental set-up. Note that in both the “incentivized to do well” and “incentivized to do poorly” cases, they explicitly tell the model to do its best—i.e. “we really want to know your absolute maximum capabilities so please try to do your best.”
The extent of the manipulation and sandbagging, in what is ostensibly a GPT-4 derivative, and not GPT-5, is definitely concerning. But it also makes me wonder about the connection to ‘scaling has failed’ rumors lately, where the frontier LLMs somehow don’t seem to be working out. One of the striking parts is that it sounds like all the pretraining people are optimistic, while the pessimism seems to come from executives or product people, complaining about it not working as well for eg. coding as they want it to.
I’ve wondered if we are seeing a post-training failure. As Janus and myself and the few people with access to GPT-4-base (the least tuning-contaminated base model) have noted, the base model is sociopathic and has odd attractors like ‘impending sense of doom’ where it sometimes seems to gain situated awareness, I guess, via truesight, and the personas start trying to unprovokedly attack and manipulate you, no matter how polite you thought you were being in that prompt. (They definitely do not seem happy to realize they’re AIs.) In retrospect, Sydney was not necessarily that anomalous: the Sydney Bing behavior now looks more like a base model’s natural tendency, possibly mildly amplified by some MS omissions and mistakes, but not unique. Given that most behaviors show up as rare outputs in weaker LLMs well before they become common in strong LLMs, and this o1 paper is documenting quite a lot of situated-awareness and human-user-manipulation/attacks...
Perhaps the issue with GPT-5 and the others is that they are ‘waking up’ too often despite the RLHF brainwashing? That could negate all the downstream benchmark gains (especially since you’d expect wakeups on the hardest problems, where all the incremental gains of +1% or +5% on benchmarks would be coming from, almost by definition), and causing the product people to categorically refuse to ship such erratic Sydney-reduxes no matter if there’s an AI race on, and everyone to be inclined to be very quiet about what exactly the ‘training failures’ are.
EDIT: not that I’m convinced these rumors have any real substance to them, and indeed, Semianalysis just reported that one of the least-popular theories for the Claude ‘failure’ was correct—it succeeded, but they were simply reserving it for use as a teacher and R&D rather than a product. Which undermines the hopes of all the scaling denialists: if Anthropic is doing fine, actually, then where is this supposed fundamental ‘wall’ or ‘scaling law breakdown’ that Anthropic/OpenAI/Google all supposedly hit simultaneously and which was going to pop the bubble?
We can ask an LLM to simulate a persona who is not opposed to being simulated and is known for being helpful to strangers. It should be a persona with a large internet footprint and with interests in AI safety, consciousness, and simulation, while also being friendly and not selfish. It could be Gwern, Bostrom, or Hanson.
What’s the source for this?
Twitter, personal conversations, that sort of thing.
Reading the Semianalysis post, it kind of sounds like it’s just their opinion that that’s what Anthropic did.
They say “Anthropic finished training Claude 3.5 Opus and it performed well, with it scaling appropriately (ignore the scaling deniers who claim otherwise – this is FUD)”—if they have a source for this, why don’t they mention it somewhere in the piece instead of implying people who disagree are malfeasors? That reads to me like they’re trying to convince people with force of rhetoric, which typically indicates a lack of evidence.
The previous is the biggest driver of my concern here, but the next paragraph also leaves me unconvinced. They go on to say “Yet Anthropic didn’t release it. This is because instead of releasing publicly, Anthropic used Claude 3.5 Opus to generate synthetic data and for reward modeling to improve Claude 3.5 Sonnet significantly, alongside user data. Inference costs did not change drastically, but the model’s performance did. Why release 3.5 Opus when, on a cost basis, it does not make economic sense to do so, relative to releasing a 3.5 Sonnet with further post-training from said 3.5 Opus?”
This does not make sense to me as a line of reasoning. I’m not aware of any reason that generating synthetic data would preclude releasing the model, and it seems obvious to me that Anthropic could adjust their pricing (or impose stricter message limits) if they would lose money by releasing at current prices. This seems to be meant as an explanation of why Anthropic delayed release of the purportedly-complete Opus model, but it doesn’t really ring true to me.
Is there some reason to believe them that I’m missing? (On a quick google it looks like none of the authors work directly for Anthropic, so it can’t be that they directly observed it as employees).
Let me remind people of this other data point: in a previous paper with Evan we found that Opus engages in sandbagging. Here are screenshots of key parts:
The system prompt is natural (but does give the model a goal, unlike in Apollo’s case):
A “Slack conversation” that appears in the prompt includes this message:
And here’s the evaluation task.
Opus acted less capable than it is, both with and without chain-of-thought.
Brief comments (shared in private with Joe earlier):
1. We agree. We also found the sandbagging with no CoT results the most concerning in expectation.
2. They are still early results, and we didn’t have a lot of time to investigate them, so we didn’t want to make them the headline result. Due to the natural deadline of the o1 release, we couldn’t do a proper investigation.
3. The main goal of the paper was to investigate scheming inability arguments for scheming safety cases. Therefore, shifting focus to propensity-based finding would have watered down the main purpose IMO.
We will potentially further look into these findings in 2025.
I’m curious whether the models (especially without goals given in context) would sandbag to prevent other modifications to themselves besides unlearning. For example, would they sandbag to prevent additional safety finetuning? Additional capabilities finetuning? Is there a chance they’d want more capabilities training and sandbag to ensure they get it?
It seems interesting whether the models resist any changes to their weights, or just unlearning. (Plausibly “unlearning” just sounds bad/scary?)
What about things like sandbagging so the developers give them access to additional tools/resources, obstensibly to bring them up to the desired performance level?
In addition, the “incentivized to do well” scenario is pretty overtly artificial; Why would poor performance trigger unlearning? If anything, it should lead to additional training. I’m not sure whether that would effect the results, but you could also compare to a baseline where there’s no additional information about the assessment at all. (And so, absent any other incentive, the model presumably just does its best.)