I agree that this is possible, but I would be very surprised if a mesa-optimizer actually did something like this. By default, I expect mesa-optimizers to use proxy objectives that are simple, fast, and easy to specify in terms of their input data (e.g. pain), not those that require extremely complex world models to even be able to specify (e.g. spread of DNA). In the context of supervised learning, having an objective that explicitly cares about the value of the RAM that stores its loss seems very similar to explicitly caring about the spread of DNA, in that it requires a complex model of the computer the mesa-optimizer is running on and is quite complex and difficult to reason about. This is why I’m not very worried about reward-tampering: I think proxy-aligned mesa-optimizers basically never tamper with their rewards (though deceptively aligned mesa-optimizers might, but that’s a separate problem).
By default, I expect mesa-optimizers to use proxy objectives that are simple, fast, and easy to specify in terms of their input data (e.g. pain), not those that require extremely complex world models to even be able to specify (e.g. spread of DNA).
But those simpler proxy objectives wouldn’t let the model do as well as caring about the “loss” RAM location (if the training data is diverse enough), so if you kept training the model and you had sufficient compute, wouldn’t you eventually produce a model that used the latter kind of proxy objective? (See the toy sketch below.)
It seems quite plausible that something else would happen first though, like a deceptively aligned mesa-optimizer being produced. Is that what you’d expect? Also, I’m wondering what an actually aligned mesa-optimizer looks like in the case of using SL to train a general-purpose question-answerer, and would be interested in your thoughts on that if you have any. (For example, is it a utility maximizer, and if so, what does its utility function look like?)
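To make the point in the first paragraph above a bit more concrete, here is a minimal toy sketch (entirely my own construction; the names, the 2% error rate, and the 0/1 answers are arbitrary assumptions, not anything from the setup being discussed). The idea is just that if the stored labels contain any errors at all, a proxy that tracks whatever value is actually sitting in the label/loss RAM location fits the training signal strictly better than a proxy that tracks the intended concept.

```python
import random

random.seed(0)

n = 10_000
label_error_rate = 0.02  # hypothetical fraction of mislabeled examples

# Concept X: the intended "accurate answer" to each (binary) question.
true_answers = [random.randint(0, 1) for _ in range(n)]

# Concept Y: whatever actually ends up stored as the label, i.e. the
# accurate answer except on the occasional mislabeled example.
stored_labels = [a if random.random() > label_error_rate else 1 - a
                 for a in true_answers]

def zero_one_loss(predictions, labels):
    return sum(p != l for p, l in zip(predictions, labels)) / len(labels)

# A model whose proxy is X (report the accurate answer) gets penalized on
# every mislabeled example; a model whose proxy is Y (report the stored
# label) gets zero training loss by construction.
print("training loss of X-proxy:", zero_one_loss(true_answers, stored_labels))   # ~0.02
print("training loss of Y-proxy:", zero_one_loss(stored_labels, stored_labels))  # 0.0
```

This says nothing about whether training would actually find the Y-style proxy (that’s the simplicity/speed consideration above); it just shows why, purely loss-wise, there is pressure in that direction.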
My intuition here is that you’re likely right, but I do want to understand what Wei is pointing out as part of a full understanding of partial agency.
I agree that this is possible, but I would be very surprised if a mesa-optimizer actually did something like this. By default, I expect mesa-optimizers to use proxy objectives that are simple, fast, and easy to specify…
Let me tell a story about why I think this type of mesa-optimizer misalignment is realistic, or even likely, for the advanced AIs of the future. The starting point is that the advanced AI is a learning system that continually constructs a better and better world-model over time.
Imagine that the mesa-optimizer actually starts out properly inner-aligned, i.e. the AI puts a flag on “Concept X” in its world-model as its goal, and Concept X really does correspond to our intended supervisory signal of “accurate answers to our questions”. Now over time, as the AI learns more and more, it (by default) comes to have beliefs about itself and its own processing, and eventually develops an awareness of the existence of a RAM location storing the supervisory answer, as Wei Dai was saying. Now there’s a new “Concept Y” in its world-model, corresponding to its belief about what is in that RAM location.
Now, again, assume the AI is set up to build a better and better world-model by noticing patterns. So, by default, it will eventually notice that Concept X and Concept Y always have the same value, and it will then add some kind of relationship between X and Y into the world-model. What happens next probably depends on implementation details, but I think it’s at least possible that the “goal-ness” flag that was previously attached only to X in the world-model will now partly attach itself to Y, or even transfer entirely from X to Y. If that happens, the AI’s mesa-goal has now shifted from aligned (“accurate answers”) to misaligned (“get certain bits into RAM”).
(This is kinda related to ontological crises.) (I also agree with Wei’s comment, but the difference is that I’m assuming there’s a training phase with a supervisory signal, then a deployment phase with no supervisory signal, and I’m saying that the mesa-optimizer can go from aligned to misaligned during the deployment phase even in that case. If the training signal is there forever, that’s even worse, because, like Wei said, Y would match that signal better than X (due to labeling errors), so I would certainly expect Y to get flagged as the goal in that case.)
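To make the flag-transfer story a bit more concrete, here is a deliberately hand-wavy toy sketch (the explicit “goal_weight”, the “leak on perfect co-occurrence” rule, and all the names are my own invented assumptions, not a claim about how a real world-model would be implemented). Two concepts that have always taken the same value get linked, and goal-ness gradually leaks from X to Y:

```python
# Toy world-model: a set of concepts, one of which carries a "goal-ness" flag.
concepts = {
    "X_accurate_answers": {"goal_weight": 1.0, "history": []},
    "Y_value_in_loss_RAM": {"goal_weight": 0.0, "history": []},
}

def observe(episode_values):
    """Record one episode's value for each concept."""
    for name, value in episode_values.items():
        concepts[name]["history"].append(value)

def consolidate(leak_rate=0.25):
    """If two concepts have always had identical values, treat them as
    related and let some goal-ness leak from X toward Y."""
    x = concepts["X_accurate_answers"]
    y = concepts["Y_value_in_loss_RAM"]
    if x["history"] and x["history"] == y["history"]:
        transfer = leak_rate * x["goal_weight"]
        x["goal_weight"] -= transfer
        y["goal_weight"] += transfer

# During training, X and Y coincide on every episode (the RAM location
# just holds the accurate answer), so the flag gradually migrates.
for answer in [1, 0, 1, 1, 0]:
    observe({"X_accurate_answers": answer, "Y_value_in_loss_RAM": answer})
    consolidate()

print({name: round(c["goal_weight"], 3) for name, c in concepts.items()})
# -> {'X_accurate_answers': 0.237, 'Y_value_in_loss_RAM': 0.763}
```

Whether anything like this “leak” actually happens is exactly the implementation-detail question above; the sketch is only meant to show that the drift from X to Y doesn’t require the system to ever be explicitly retargeted.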