Some comments I left as a reply to Roger Grosse on Twitter:
I’m glad you put this out. These examples align with my guess about roughly which arguments will end up being most load-bearing for mitigating scheming (white-box monitoring, black-box monitoring, and evidence against scheming being rewarded).
Here are some comments, since you mentioned comments would be helpful. I’ve bolded the comments that I think would be most useful for you to read.
Safety case sketch #1:
- There are some additional claims that this safety case relies on that could be good to mention explicitly. e.g. “egregious behavior would be reliably identified when developers analyze flagged sessions” and “developers will become aware of and stop unacceptable behavior *before* it leads to harmful outcomes.”
- “Given our mitigations, the model cannot strategically condition its apparent capabilities, motivations, or behavioral proclivities on whether it’s in testing or deployment without being caught” ← I’m confused about why this claim is important given that you are monitoring at deployment-time. Even if the model is conditioning its behavior, my understanding is that you are hoping to catch it when it defects.
- Much of the evidence for the effectiveness of your white-box monitoring comes from your model organisms, of which you only have three. Three examples don’t give you much statistical power. Naively, I’d think there’s still roughly a one-in-three probability that your white-box monitoring completely fails in the actual deployment context even if it succeeds on all three models (a rough back-of-the-envelope version of this worry is sketched below).
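As a rough sketch of the statistical-power worry (this is my own back-of-the-envelope framing, not from the thread): treat “white-box monitoring works on a model organism” as an independent Bernoulli trial under a simple beta-binomial model with a uniform prior. The exact number depends heavily on the prior and on how representative the model organisms are of deployment, so take the figures as illustrative only.

```python
# Back-of-the-envelope: what do 3/3 successes on model organisms license
# about a fourth, unseen deployment context? Assumes i.i.d. contexts and a
# uniform Beta(1, 1) prior over the per-context failure rate -- both strong
# simplifications.
from scipy import stats

successes, trials = 3, 3
failures = trials - successes

# Posterior over the per-context failure rate.
posterior = stats.beta(failures + 1, successes + 1)

# Probability the next (deployment) context is a failure = posterior mean.
print(f"P(fail on a new context | 3/3 successes) ~= {posterior.mean():.2f}")  # ~0.20

# 95% upper bound on the failure rate -- still very loose with only 3 trials.
print(f"95% upper bound on failure rate ~= {posterior.ppf(0.95):.2f}")        # ~0.53
```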
Safety case sketch #2:
- You mention external deployments, but I didn’t see discussion of jailbreaking.
- “[The safety case sketch] may require expensive pre-deployment testing if one needs to generate a very large number of simulated deployment situations in order to achieve a sufficiently good bound on the probability of defection.” This sounds impractical. My guess is that you’ll need to continuously sample real deployment trajectories to generate your honeypots (where a honeypot is an inserted dangerous action for your monitoring to flag). Otherwise, it seems unlikely to me that you can verify indistinguishability even with current models (one way to operationalize that check is sketched below).
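One standard way to operationalize an indistinguishability check (my suggestion, not something the post proposes) is a discriminator test: train a simple classifier to separate honeypot trajectories from real deployment trajectories and check that it performs near chance. The features and data below are placeholders.

```python
# Sketch of a discriminator-based indistinguishability check. Placeholder
# features stand in for embeddings of real deployment trajectories and
# honeypot trajectories.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
real_feats = rng.normal(size=(500, 32))      # placeholder: real trajectories
honeypot_feats = rng.normal(size=(500, 32))  # placeholder: honeypots

X = np.concatenate([real_feats, honeypot_feats])
y = np.concatenate([np.zeros(500), np.ones(500)])

auc = cross_val_score(LogisticRegression(max_iter=1000), X, y,
                      scoring="roc_auc", cv=5).mean()
# AUC near 0.5 is (weak) evidence the honeypots look like real traffic;
# AUC well above 0.5 means they are detectably off-distribution.
print(f"discriminator AUC ~= {auc:.2f}")
```

Note that a weak external classifier failing to distinguish the two is only a lower bar; the model being monitored may be a much stronger discriminator.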
Safety case sketch #3:
- It’s possible that all of the models that you experiment with are opaquely scheming; i.e., instead of comparing reward attained by models that engage in “deceptive reasoning” vs. “no deceptive reasoning,” you might be comparing models that engage in “CoT deceptive reasoning” vs. “opaque deceptive reasoning.” This experiment still sheds light on how helpful deceptive reasoning is for obtaining reward, but the argument in that case is different from the way you framed it.
- A lot hinges on the claim “if an environment rewards opaque deceptive reasoning, then it rewards scratchpad-based deceptive reasoning.” Here’s an idea for validating this. First, construct a distribution of tasks where deceptive reasoning is likely rewarded. For example, select tasks that you *know* can be gamed by explicitly reasoning about the training process. Then, train models to perform well on these tasks with and without additionally training them to reason deceptively in their chain of thought. If the models that reason deceptively in CoT do better on average, this provides evidence for your claim that we should see deceptive reasoning in CoT if opaque deceptive reasoning is incentivized (a rough sketch of the final comparison is below).
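As a minimal sketch of that final comparison, assuming you end up with a per-task average reward for each training condition (all names and the toy numbers here are placeholders, not results):

```python
# Paired comparison of per-task reward between the two training conditions:
# trained with vs. without explicit CoT deceptive reasoning.
import numpy as np
from scipy import stats

reward_with_cot = np.array([0.71, 0.64, 0.80, 0.58, 0.69, 0.75])     # placeholder
reward_without_cot = np.array([0.66, 0.63, 0.72, 0.59, 0.61, 0.70])  # placeholder

diff = reward_with_cot - reward_without_cot
t_stat, p_value = stats.ttest_rel(reward_with_cot, reward_without_cot)

print(f"mean reward gain from CoT deceptive reasoning: {diff.mean():+.3f}")
print(f"paired t-test: t = {t_stat:.2f}, p = {p_value:.3f}")
# A consistently positive gain would support the claim that environments which
# reward opaque deceptive reasoning also reward scratchpad-based deceptive
# reasoning; a null result would leave the claim unsupported.
```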
It’s possible some of these comments were addressed in the post and I missed the part where they were addressed. I’ve erred on the side of inclusion here.