I generally agree. In the human case, I sometimes have conversations where we’re discussing (let’s say) the fact that sweet tastes trigger rewards via (blah blah circuits in the brainstem), and the person says “this circuit wants us to eat sweet food, oh wait, maybe I should say that it wants sweet taste on our tongue? Or—” and then I say, “it’s just a simple input-output circuit, it does whatever it does, it doesn’t ‘want’ anything in the real world”.
On the other hand, suppose there’s an intelligent designer (say, a human programmer), and they make a reward function R hoping that they will wind up with a trained AGI that’s trying to do X (where X is some idea in the programmer’s head), but they fail and the AGI is trying to do not-X instead. If R only depends on the AGI’s external behavior (as is often the case in RL these days), then we can imagine two ways that this failure happened:
1. The AGI was doing the wrong thing but got rewarded anyway (or doing the right thing but got punished).
2. The AGI was doing the right thing for the wrong reasons but got rewarded anyway (or doing the wrong thing for the right reasons but got punished).
I think it’s useful to catalog possible failures based on whether they involve (1) or (2), and it’s reasonable to call them “failures of outer alignment” and “failures of inner alignment” respectively. When (1) is happening rarely or not at all, we can say that the reward function is doing a good job of “representing” the designer’s intention; or at any rate, it’s doing as well as we can possibly hope for from a reward function of that form. The AGI might still fail to acquire the right motivation, and there might be things we can do to help (e.g. change the training environment), but replacing R (which fires exactly to the extent that the AGI’s external behavior involves doing X) with a different external-behavior-based reward function R’ (which sometimes fires when the AGI is doing not-X, and/or sometimes doesn’t fire when the AGI is doing X) seems like it would only make things worse. So in that sense, it seems useful to talk about outer misalignment, i.e. situations where the reward function fails to “represent” the AGI designer’s desired external behavior, and to treat those situations as generally bad.
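To make the (1)/(2) split concrete, here’s a minimal toy sketch in Python (the names R, do_X, and the two agent dicts are mine, purely for illustration): a reward function that sees only external behavior assigns identical reward to an agent doing X for the right reasons and an agent doing X for the wrong reasons, which is exactly why failure mode (2) can’t be fixed by swapping in a different R of the same form.

```python
# Toy illustration (hypothetical names): a behavior-based reward function R
# that only sees the agent's external action, never its internal reasoning.

def R(observed_action: str) -> float:
    """Fires exactly when the external action looks like doing X."""
    return 1.0 if observed_action == "do_X" else 0.0

# Failure mode (1) would be a bug in R itself (or in what counts as "do_X"),
# e.g. rewarding a not-X action. Failure mode (2) is different: the action is
# fine, but the cognition behind it is not, and R cannot tell.

honest_agent    = {"action": "do_X", "internal_goal": "do X"}
deceptive_agent = {"action": "do_X", "internal_goal": "do not-X once unmonitored"}

for agent in (honest_agent, deceptive_agent):
    # R receives only the action; the internal goal never reaches it,
    # so both agents get the same reward.
    print(agent["internal_goal"], "->", R(agent["action"]))  # 1.0 in both cases
```

Nothing here depends on the details of any particular RL setup; it’s just the observation that any R of this form gives both agents the same reward, so addressing (2) has to come from somewhere other than replacing R with another external-behavior-based R’.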
I think “outer alignment failure” (1) is confusing terminology at this point: it always requires clarification, and then the reader has to store “oh yeah, ‘outer alignment failure’ means the wrong thing got rewarded as a matter of empirical fact.” Furthermore, words are sticky, and their historical connotations color our thinking. Better to just say “R rewards bad on-training behavior in situations A, B, C” or even “bad action rewarded”, which compactly communicates the anticipation-constraining information.
Similarly, “inner alignment failure” (2) → “undesired inner cognition reinforced when superficially good action performed” (we should probably find a better compact phrase for this one).