Note that if the model is deployed on many more examples than it’s evaluated on, it can be pretty confident that it’s in deployment on any particular example just on priors, even if there’s no distribution shift. See here for more.
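To make the priors arithmetic concrete (the numbers below are illustrative, not from the linked post): if deployment and evaluation inputs come from the same distribution, then before looking at the input at all, the model's prior that a given example is from deployment is just the fraction of all examples that are deployment examples:

$$P(\text{deployment}) = \frac{n_{\text{deploy}}}{n_{\text{deploy}} + n_{\text{eval}}}$$

So with, say, $n_{\text{eval}} = 10^3$ evaluation examples and $n_{\text{deploy}} = 10^6$ deployment examples, that prior is roughly 99.9%, with no distribution shift needed.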
My initial read was that indistinguishability was used to justify that the trusted monitor couldn’t distinguish the deployment distribution from the evaluation distribution.
I’m not sure whether that’s what Roger meant; if so, it’s still a bit confusing, since then he wouldn’t have needed to call the monitor ‘trusted.’
I’m pretty sympathetic to these arguments. I agree a lot of the risk comes from sycophantic AI systems producing bad research because humans are bad at evaluating research. This is part of why I spend most of my time developing safety evaluation methodologies.
On the other hand, I agree with Buck that scheming-like risks are pretty plausible and that a meaningful part of the problem is also avoiding egregious sabotage.
I don’t think I agree with your claim that the hope of control is that “early transformative AI can be used to solve the hard technical problems of superintelligence.” I think the goal is instead to solve the problem of constructing successors that are at least as trustworthy as humans (which is notably easier).
I think scheming is perhaps the main reason this might end up being very hard, so conditioned on no alignment-by-default at top-expert-dominating capabilities, I put a lot of probability mass on scheming/egregious-sabotage failure modes.