I agree that if an AI is incapable of competently scheming (i.e., alignment faking and sabotaging safety work without being caught), but is capable of massively accelerating safety work, then doing huge amounts of safety work with this AI is very promising.
(I put this aside in this post because I was trying to keep a narrower focus on how we’ll update about scheming, independent of how easily scheming will be handled, and without discussing methods that don’t currently exist.)
(The specific directions you mentioned, “fancy behavioral red teaming and interp,” may not be that promising, but I think there are a moderate number of relatively empirical bets that look decently promising.)
It seems like the first AIs capable of massively accelerating safety work might also be able to scheme pretty competently (though this will depend on the architecture). However, we might be able to compensate with sufficient control measures such that the AI is forced to be very helpful (or gets caught). Correspondingly, I’m excited about AI control.
(More generally, rapid takeoff might mean that we have to control AIs that are capable of competent scheming without having already obsoleted prior work.)
I’m reasonably optimistic about bootstrapping if the relevant AI company could afford several years of delay due to misalignment, was generally competent, and considered mitigating risk from scheming a top priority. You might be able to get away with less delay (especially if you heavily prep in advance). I don’t really expect any of these conditions to hold, at least across all the relevant AI companies and in short timelines.