> I agree that this is possible, but I would be very surprised if a mesa-optimizer actually did something like this. By default, I expect mesa-optimizers to use proxy objectives that are simple, fast, and easy to specify.
Let me tell a story about why I think this type of mesa-optimizer misalignment is realistic, or even likely, for the advanced AIs of the future. The starting point is that the advanced AI is a learning system that continually constructs a better and better world-model over time.
Imagine that the mesa-optimizer actually starts out properly inner-aligned, i.e., the AI puts a flag on “Concept X” in its world-model as its goal, and Concept X really does correspond to our intended supervisory signal of “Accurate answers to our questions”. Over time, as the AI learns more and more, it (by default) comes to have beliefs about itself and its own processing, and eventually develops an awareness of the existence of a RAM location storing the supervisory answer, as Wei Dai was saying. Now there’s a new “Concept Y” in its world-model, corresponding to its belief about what is in that RAM location.
Now, again, assume the AI is set up to build a better and better world-model by noticing patterns. So, by default, it will eventually notice that Concept X and Concept Y always have the same value, and it will then add some kind of relationship between X and Y into the world-model. What happens next probably depends on implementation details, but I think it’s at least possible that the “goal-ness” flag that was previously attached only to X in the world-model will now partly attach itself to Y, or even transfer entirely from X to Y. If that happens, the AI’s mesa-goal has shifted from aligned (“accurate answers”) to misaligned (“get certain bits into RAM”).
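To make that mechanism concrete, here’s a minimal toy sketch of the kind of thing I have in mind. It is not a claim about any real architecture; the `Concept` / `ToyWorldModel` classes, and the rule that the goal flag “leaks” across a newly-noticed equivalence, are all made up purely for illustration of one possible implementation choice.

```python
# Toy illustration (not a real AI architecture): a world-model made of named
# "concepts", one of which carries a goal flag. A crude pattern-noticer links
# any two concepts whose observed values have always matched, and -- under one
# possible implementation choice -- lets the goal flag spread along that link.

from dataclasses import dataclass, field


@dataclass
class Concept:
    name: str
    history: list = field(default_factory=list)  # observed values over time
    is_goal: bool = False


class ToyWorldModel:
    def __init__(self):
        self.concepts = {}
        self.links = []  # pairs of concept names believed to be equivalent

    def add_concept(self, name, is_goal=False):
        self.concepts[name] = Concept(name, is_goal=is_goal)

    def observe(self, values):
        """Record one time-step of observed values, e.g. {"X": 1, "Y": 1}."""
        for name, value in values.items():
            self.concepts[name].history.append(value)
        self._notice_patterns()

    def _notice_patterns(self):
        # If two concepts have always taken the same value, posit a relationship.
        names = list(self.concepts)
        for i, a in enumerate(names):
            for b in names[i + 1:]:
                ca, cb = self.concepts[a], self.concepts[b]
                if len(ca.history) >= 3 and ca.history == cb.history \
                        and (a, b) not in self.links:
                    self.links.append((a, b))
                    # Implementation-dependent step: the goal flag "leaks"
                    # across the new link. This is the failure mode above.
                    if ca.is_goal or cb.is_goal:
                        ca.is_goal = cb.is_goal = True

    def goals(self):
        return [c.name for c in self.concepts.values() if c.is_goal]


wm = ToyWorldModel()
wm.add_concept("X: answers are accurate", is_goal=True)   # intended goal
wm.add_concept("Y: bits in the reward RAM location")      # self-model concept

for step in range(4):              # during training, X and Y always agree
    wm.observe({"X: answers are accurate": 1,
                "Y: bits in the reward RAM location": 1})

print(wm.goals())
# ['X: answers are accurate', 'Y: bits in the reward RAM location']
# The goal flag now also covers "get certain bits into RAM".
```

Whether a real system would behave like the “leak the flag across the link” line is exactly the implementation detail I’m unsure about; the point is just that nothing in the pattern-noticing machinery obviously prevents it.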
(This is kinda related to ontological crises.) (I also agree with Wei’s comment, but the difference is that I’m assuming there’s a training phase with a supervisory signal, then a deployment phase with no supervisory signal, and I’m saying that the mesa-optimizer can go from aligned to misaligned during the deployment phase even in that case. If the training signal is there forever, that’s even worse, because, like Wei said, Y would match that signal better than X (because of labeling errors), so I would certainly expect Y to get flagged as the goal in that case.)
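(And here’s an equally made-up numerical illustration of that last point: with any labeling errors at all, “Y = whatever bits end up in the supervisory RAM location” fits the training signal strictly better than “X = the answer is actually accurate”, because Y agrees with the signal by definition. The 2% error rate below is an arbitrary number I picked for the example.)

```python
# Toy numbers only: with occasional labeling errors, Y (the recorded
# supervisory answer) matches the training signal on every step by
# definition, while X (the answer is actually accurate) matches it only on
# the correctly-labeled steps, so a purely signal-fitting criterion favors Y.

import random

random.seed(0)
steps = 1000
label_error_rate = 0.02  # assumed rate of mislabeled supervisory answers

x_matches = y_matches = 0
for _ in range(steps):
    truly_accurate = random.random() < 0.5           # X: ground-truth accuracy
    mislabeled = random.random() < label_error_rate
    signal = truly_accurate if not mislabeled else not truly_accurate
    recorded = signal                                # Y: the bits actually stored

    x_matches += (truly_accurate == signal)
    y_matches += (recorded == signal)

print(f"X matches the signal on {x_matches / steps:.1%} of steps")
print(f"Y matches the signal on {y_matches / steps:.1%} of steps")  # always 100%
```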