Thanks for this!
Compared to you, I more strongly expect that we'll see evidence of scheming if it exists.
You argue weak schemers might just play nice. But if so, we can use them to do loads of intellectual labour to make fancy behavioral red teaming and interp to catch out the next gen of AI.
More generally, the plan of bootstrapping to increasingly complex behavioral tests and control schemes seems likely to work. If one model has spent a lot of thinking time designing a control scheme, then another model would have to be much smarter to cause a catastrophe zero-shot without the scheme detecting it. E.g., analogies with humans suggest this.
I agree that if an AI is incapable of competently scheming (i.e., alignment faking and sabotaging safety work without being caught), but is capable of massively accelerating safety work, then doing huge amounts of safety work with this AI is very promising.
(I set this aside in this post, as I was trying to focus more narrowly on how we'll update about scheming, independent of how easily scheming will be handled, and without discussing methods that don't currently exist.)
(The specific directions you mentioned of “fancy behavioral red teaming and interp” may not be that promising, but I think there are a moderate number of relatively empirical bets that look decently promising.)
It seems like the first AIs capable of massively accelerating safety work might also be capable of scheming pretty competently (this will depend on the architecture). However, we might be able to compensate with sufficient control measures such that the AI is either forced to be very helpful or caught. Correspondingly, I'm excited about AI control.
(More generally, rapid takeoff might mean that we have to control AIs that are capable of competent scheming without having already obsoleted prior work.)
I’m reasonably optimistic about bootstrapping if the relevant AI company could afford several years of delay due to misalignment, was generally competent, and considered mitigating risk from scheming to be a top priority. You might be able to get away with less delay (especially if you heavily prep in advance). I don’t really expect any of these conditions to hold, at least across all the relevant AI companies and in short timelines.