I guess I don’t understand how it would even be possible to be conservative in the right way without solving ELK: just because the network is Bayesian doesn’t mean it can’t be scheming, and thus be conservative in the right way on the training distribution but fail catastrophically later, right? How could the training process and the overseer distinguish between “the model is confident and correct that doing X is totally safe” (but humans don’t know why) and “the model tells you it’s confident that doing X is totally safe—but it’s actually false”? Where do you get the information that distinguishes “a confident OOD prediction because the model is correct” from “the model confidently lying OOD”? It feels like to solve this kind of issue, you ultimately have to rely on ELK (or abstain from making predictions in domains humans don’t understand, or ignore the possibility of scheming).