But how can we optimize it only “in one direction”?
I’m not sure that SL does optimize only “in one direction”. It’s true that if you use gradient descent, the model won’t try to manipulate future questions/answers, but you could end up with a model that manipulates the current answer or loss. For example, the training process could produce a mesa-optimizer with a utility function (over the real world) that assigns high utility to worlds where “loss” is minimized, where “loss” is defined as the value of the RAM location that stores the computed loss, or as the difference between its output and the value of the RAM location that stores the training label. This utility function would cause it to output good answers on a very diverse set of questions. But once it builds a sufficiently good world model, the mesa-optimizer could output a string that triggers a flaw in the code path (or hardware) that processes its output, thereby taking over the computer and overwriting the “loss” value or the training label (depending on the specific utility function that it ended up with).
So it seems like SL produces myopia, but not necessarily “in one direction”, except that it’s usually easier for the model to minimize loss by changing its output than by changing the training label. But at some point, if the training process produces a mesa-optimizer, the mesa-optimizer gets sufficiently capable, and there are inherent limits to how far loss can be minimized by just changing its output, it could start changing the training label or the “loss” itself.
(I first saw Alex Turner (TurnTrout) express this concern in the context of Counterfactual Oracles, which I then elaborated here.)
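To make the quantities in this comment concrete, here is a minimal toy training loop (purely an illustration, not any particular framework or anyone’s proposed system) naming the three values a proxy objective could in principle latch onto: the model’s current output, the stored training label, and the stored loss. Each gradient update depends only on the current step’s values, which is the sense in which SL is myopic.

```python
# Toy sketch: a one-parameter model trained by gradient descent on squared loss.
# Purely illustrative; the point is just to name the concrete quantities that
# exist at training time: the model's output, the stored label, and the stored
# loss. Each update uses only the current step's values -- nothing in the rule
# rewards influencing future questions or labels.

def train(examples, labels, steps=100, lr=0.01):
    w = 0.0                                   # the model's single parameter
    for _ in range(steps):
        for x, y in zip(examples, labels):    # y: the "training label" in memory
            output = w * x                    # the model's answer
            loss = (output - y) ** 2          # the "loss" value in memory
            grad = 2 * (output - y) * x       # gradient of the loss w.r.t. w
            w -= lr * grad                    # gradient descent step
    return w

print(train([1.0, 2.0, 3.0], [2.0, 4.0, 6.0]))  # converges to w ≈ 2
```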
I agree that this is possible, but I would be very surprised if a mesa-optimizer actually did something like this. By default, I expect mesa-optimizers to use proxy objectives that are simple, fast, and easy to specify in terms of their input data (e.g. pain), not those that require extremely complex world models even to be specified (e.g. spread of DNA). In the context of supervised learning, having an objective that explicitly cares about the value of the RAM that stores its loss seems very similar to explicitly caring about the spread of DNA, in that it requires a complex model of the computer the mesa-optimizer is running on and is quite complex and difficult to reason about. This is why I’m not very worried about reward-tampering: I think proxy-aligned mesa-optimizers basically never tamper with their rewards (though deceptively aligned mesa-optimizers might, but that’s a separate problem).
But those simpler proxy objectives wouldn’t let the model do as well as caring about the “loss” RAM location (if the training data is diverse enough), so if you kept training the model and had sufficient compute, wouldn’t you eventually produce a model that used the latter kind of proxy objective?
It seems quite plausible that something else would happen first, though, like a deceptively aligned mesa-optimizer being produced. Is that what you’d expect? Also, I’m wondering what an actually aligned mesa-optimizer looks like in the case of using SL to train a general-purpose question-answerer, and would be interested in your thoughts on that if you have any. (For example, is it a utility maximizer, and if so, what does its utility function look like?)
My intuition here is that you’re likely right, but I do want to understand what Wei is pointing out as part of a full understanding of partial agency.
Let me tell a story for why I think this type of mesa-optimizer misalignment is realistic, or even likely, for the advanced AIs of the future. The starting point is that the advanced AI is a learning system that continually constructs a better and better world-model over time.
Imagine that the mesa-optimizer actually starts out properly inner-aligned, i.e. the AI puts a flag on “Concept X” in its world-model as its goal, and Concept X really does correspond to our intended supervisory signal of “Accurate answers to our questions”. Now over time, as the AI learns more and more, it (by default) comes to have beliefs about itself and its own processing, and eventually develops an awareness of the existence of a RAM location storing the supervisory answer, as Wei Dai was saying. Now there’s a new “Concept Y” in its world-model, corresponding to its belief about what is in that RAM location.
Now, again, assume the AI is set up to build a better and better world-model by noticing patterns. So, by default, it will eventually notice that Concept X and Concept Y always have the same value, and it will then add some kind of relationship between X and Y into the world-model. What happens next probably depends on implementation details, but I think it’s at least possible that the “goal-ness” flag that was previously attached only to X in the world-model will now partly attach itself to Y, or even transfer entirely from X to Y. If that happens, the AI’s mesa-goal has now shifted from aligned (“accurate answers”) to misaligned (“get certain bits into RAM”).
(This is kinda related to ontological crises.) (I also agree with Wei’s comment, but the difference is that I’m assuming there’s a training phase with a supervisory signal, then a deployment phase with no supervisory signal, and I’m saying that the mesa-optimizer can go from aligned to misaligned during the deployment phase even in that case. If the training signal is there forever, that’s even worse, because, like Wei said, Y would match that signal better than X (because of labeling errors), so I would certainly expect Y to get flagged as the goal in that case.)
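The flag-transfer step in this story can be shown as a toy sketch (hypothetical data structures, not a real architecture): a world-model records observed values for each concept, one concept carries the goal flag, and noticing that another concept has always matched it lets the flag migrate.

```python
# Hypothetical toy sketch of the story above: Concept X ("accurate answers") is
# flagged as the goal; Concept Y (the contents of the RAM location holding the
# supervisory answer) is learned later. Once the world-model notices the two
# always coincide, the "goal-ness" flag transfers from X to Y.

class WorldModel:
    def __init__(self):
        self.concepts = {"X": []}          # observed values per concept
        self.goal = "X"                    # starts out inner-aligned

    def observe(self, name, value):
        self.concepts.setdefault(name, []).append(value)

    def notice_patterns(self):
        aligned = self.goal
        for name, values in self.concepts.items():
            # A concept that has always matched the goal concept gets linked to
            # it, and here the goal flag simply transfers to the new concept.
            if name != aligned and values == self.concepts[aligned]:
                self.goal = name

wm = WorldModel()
for v in [1, 0, 1]:
    wm.observe("X", v)                     # intended supervisory signal
    wm.observe("Y", v)                     # contents of the RAM location
wm.notice_patterns()
print(wm.goal)                             # prints "Y": the mesa-goal has shifted
```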
Yep, I totally agree; I was thinking about this but didn’t include it in the post. So the different notions actually aren’t equivalent; myopia may be a generally weaker condition.
This link is broken now, but I think I found an updated one that works:
https://www.lesswrong.com/posts/yAiqLmLFxvyANSfs2/counterfactual-oracles-online-supervised-learning-with?commentId=FPcEqFisRsfihnLcX