Ah I see, I think I was misunderstanding the method you were proposing. I agree that this strategy might “just work”.
Another concern I have is that a deceptively aligned model might simply not learn to represent “the truth” at all. One speculative way this could happen: a “situationally aware” and deceptive model might “play the training game,” appearing to learn to perform tasks at a superhuman level, but at test/inference time only output activations corresponding to the beliefs a human simulator would have. This is pretty worst-case-y, but I have enough concerns about deceptive alignment that I think this kind of strategy will still require some check during the training process to ensure that the human simulator is being selected against. I’d be curious to hear whether you agree, or whether you think this approach would be robust to most training strategies for GPT-n and other models trained in a self-supervised way.