I agree with all of this but I don’t think it addresses my central point/question. (I’m not sure if you were trying to, or just making a more tangential comment.) To rephrase, it seems to me that ‘ML safety problems in humans’ is a natural/obvious framing that makes clear that alignment to human users/operators is likely far from sufficient to ensure the safety of human-AI systems, that in some ways corrigibility is actually opposed to safety, and that there are likely technical angles of attack on these problems. It seems surprising that someone like me had to point out this framing to people who are intimately familiar with ML safety problems, and also surprising that they largely respond with silence.
in some ways corrigibility is actually opposed to safety
We can talk about “corrigible by X” for arbitrary X. I don’t think these considerations imply a tension between corrigibility and safety; they just suggest that “humans in the real world” may not be the optimal X. You might prefer to use an appropriate idealization of humans / humans in some safe environment / etc.
To the extent that even idealized humans are not perfectly safe (e.g., perhaps a white-box metaphilosophical approach is even safer), and that corrigibility seems to conflict with greater transparency and hence with cooperation between AIs, there still seems to be some tension between corrigibility and safety even when X = idealized humans.
ETA: Do you think IDA can be used to produce an AI that is corrigible by some kind of idealized human? That might be another approach that’s worth pursuing if it looks feasible.
Yes.