The observations I make here have little consequence from the point of view of solving the alignment problem. If anything, they merely highlight the essential nature of the inner alignment problem. I will reject the idea that robust alignment, in the sense described in Risks From Learned Optimization, is possible at all. And I therefore also reject the related idea of ‘internalization of the base objective’, i.e. I do not think it is possible for a mesa-objective to “agree” with a base objective or for a mesa-objective function to be “adjusted towards the base objective function to the point where it is robustly aligned.” I claim that whenever a learned algorithm is performing optimization, one needs to accept that an objective which one did not explicitly design is being pursued. At present, I refrain from attempting to propose my own adjustments to the framework, to build on the existing literature, or to develop my own theory. I am certainly not against doing any of those things, but they may be pursued later; none of them is the purpose of this post.
To make my main point, I will introduce only a bare minimum of mathematical notation. I will show that a mesa-objective always has a different type signature to a base objective and that the default assumption ought to be that there is no way to compare them in general, and certainly no general way to interpret what it means for them to ‘agree’. Suppose that an optimizer is searching through a space S of systems. At this time, I do not want to attempt to unpack what it means to ‘search’, but, naively, we can imagine that there is an objective function f:S→R, which determines something that we might call the ‘search criterion’. The idea, of course, is that the optimizer is a system that is ‘searching’ through the set S and judging different points according to the criterion that higher values of f are better.
In the background, there is some ‘task’ and naively we can think of this as being represented by a ‘task space’ X which consists of all of the different possible ‘presentations’ or ‘instances’ of the task. For example, perhaps the task is choosing the next move in a game of Go or the next action in a real-time strategy video game. In these examples, a given x∈X would represent a board position in Go, say, or a single snapshot of the game-state in the video game. Then, in general, given x∈X and s∈S, we can think of s(x) as the output of s on the task instance x or the action taken by s when presented with x (i.e. s(x) denotes the next board move in Go or the next action to be taken in the video game). So each element of S defines a map from the task space X to some kind of output space or space of possible actions, which we need not notate.
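To fix ideas, here is a minimal toy sketch in Python of the setup so far. The specific names (`TaskInstance`, `System`, `base_objective`, `target_behaviour`) and the trivial search-by-enumeration are my own illustrative choices, not anything prescribed by Risks From Learned Optimization:

```python
from typing import Callable

# Toy stand-ins: a task instance x ∈ X and an output/action are both numbers.
TaskInstance = float
Action = float

# A system s ∈ S is a map from task instances to outputs/actions.
System = Callable[[TaskInstance], Action]

# A handful of training instances drawn from the task space X.
TRAIN_INSTANCES = [0.0, 1.0, 2.0, 3.0]

def target_behaviour(x: TaskInstance) -> Action:
    """The behaviour the designer would like to see (purely illustrative)."""
    return 2 * x + 1

def base_objective(s: System) -> float:
    """f : S -> R. Note that its argument is a whole system, not an output."""
    return -sum((s(x) - target_behaviour(x)) ** 2 for x in TRAIN_INSTANCES)

def base_optimizer(candidates: list[System]) -> System:
    """A naive 'search' over (a finite subset of) S: keep the system scoring highest under f."""
    return max(candidates, key=base_objective)
```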
Now, it is possible that there exists m∈S which works in the following way: Whenever the output of m on an instance x of the task needs to be evaluated, i.e. whenever m(x) is computed, what happens is that m searches over another search space Σ and looks for elements that score highly according to some other objective function g:Σ→R. Whenever this is the case, we say that such an m∈S is a mesa-optimizer and that the original optimizer, the one that searches over S, is the base optimizer. Notice that in some way, elements of Σ must in turn correspond to outputs/actions, because given some x, the mesa-optimizer m conducts a search over Σ to determine what output m(x) is, but that is all just part of the internal workings of m and we need not ‘know’ or notate how this correspondence works.
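Continuing the toy sketch, a mesa-optimizer is then just one particular element of S whose outputs happen to be produced by an internal search over some other space Σ, scored by its own objective g:Σ→R. Again, `SIGMA` and `mesa_objective` below are invented purely for illustration; in this degenerate example g does not depend on the task instance at all, which is consistent with its stated type signature:

```python
# The mesa-optimizer's internal search space Σ: here, a grid of candidate outputs.
SIGMA = [i / 10 for i in range(-100, 101)]

def mesa_objective(sigma: Action) -> float:
    """g : Σ -> R. Its domain is Σ, not the space of systems S."""
    return -(sigma - 5.0) ** 2  # this particular m internally 'prefers' outputs near 5

def make_mesa_optimizer() -> System:
    def m(x: TaskInstance) -> Action:
        # Computing m(x) means running an internal search over Σ under g; how
        # elements of Σ relate to the task instance x is part of m's internal
        # workings, which this sketch does not model.
        return max(SIGMA, key=mesa_objective)
    return m
```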
In Risks From Learned Optimization, Hubinger et al. write:
In such a case, we will use base objective to refer to whatever criterion the base optimizer was using to select between different possible systems and mesa-objective to refer to whatever criterion the mesa-optimizer is using to select between different possible outputs.
So the mesa-objective is the criterion that m is using in its search: it expresses the idea that higher values of g are better. And the base objective refers to the criterion that higher values of f are better.
Inner Alignment, Robust Alignment, and Pseudo Alignment
The domain of f is the space S, the space of systems that the base optimizer is searching over (which can be represented mathematically as a space of functions, each from X to the output or ‘action’ space). The domain of g, on the other hand, is Σ. As mentioned above, we might want to think of Σ as corresponding to (a subset of) the output space, but either way, a priori, there is nothing to suggest that S and Σ are the same space. The two objective functions used as criteria in these searches have different domains, and it is not clear how to compare them.
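In terms of the toy sketch above, the mismatch is immediate: f and g accept different kinds of argument, so there is no well-typed expression that directly compares them. A hypothetical illustration only:

```python
m = make_mesa_optimizer()

base_objective(m)      # well-typed: f takes a system, and m ∈ S
mesa_objective(3.0)    # well-typed: g takes an element of Σ

# base_objective(3.0)  # ill-typed: f is not defined on elements of Σ
# mesa_objective(m)    # ill-typed: g is not defined on systems

# Without some additional structure relating S and Σ, there is no general
# sense in which the values of f and g can be said to 'agree' or 'disagree'.
```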
In Risks From Learned Optimization, it is written that “The problem posed by misaligned mesa-optimizers is… the gap between the base objective and the mesa-objective… We will call the problem of eliminating the base-mesa objective gap the inner alignment problem...”. I think that they are absolutely right to point to the difference between the base objective and a mesa-objective as being the source of an important issue, but I find referring to it as a “gap”, at least at the level of generality posited, to be somewhat misleading. We are not dealing with two objects that are in principle comparable but just so happen to be separated by a gap (a gap waiting to be narrowed by the correct clever idea, say). Instead, the difference, which is due to the different type signatures of the objective functions, is essential in character: it means that they are, in general, incomparable.
Consider the definitions of robust alignment and pseudo alignment:
We will use the term robustly aligned to refer to mesa-optimizers with mesa-objectives that robustly agree with the base objective across distributions and the term pseudo-aligned to refer to mesa-optimizers with mesa-objectives that agree with the base objective on past training data, but not robustly across possible future data (either in testing, deployment, or further training).
What might it possibly mean to have mesa-optimizers with mesa-objectives that “agree” with the base objective on past training data or “across distributions”? Again, the base objective refers to a criterion used to select between different systems. How can a mesa-objective, a criterion that a particular one of these systems uses to select between different actions, ‘agree’ or ‘disagree’ with it on any particular set of data or “across distributions”? Without further development of the framework, or further explanation, it’s impossible to know precisely what this could mean. Robust alignment seems at best to be a very odd, extreme case (where somehow we have ended up with something like f=g and/or S=Σ?) and at worst simply impossible.
Later, Hubinger attempts to clarify the terminology in a separate post: Clarifying Inner Alignment Terminology. This attempt at clarification and increased rigour should obviously be encouraged, but it is immediately clear that some of the main definitions are still unsatisfactory. The last of the seven definitions is that of Inner Alignment:
Inner Alignment: A mesa-optimizer is inner aligned if the optimal policy for its mesa-objective is impact aligned with the base objective it was trained under.
This version of the definition seems to turn crucially on the notion that a policy could be “impact aligned” with the base objective. Let us turn to Hubinger’s own definition of “Impact Alignment”, from the same post, to find out what this means precisely:
(Impact) Alignment: An agent is impact aligned (with humans) if it doesn’t take actions that we would judge to be bad/problematic/dangerous/catastrophic.
It seems that we are only told what impact alignment means in the context of an agent and humanity. So we are still missing what seems to be the very core of this edifice: What does it really mean for a mesa-optimizer to be, in whatever is the appropriate sense of the word, ‘aligned’? What could it mean for a mesa-objective to ‘agree with’ the base objective?
Internalization of the base objective
In the Deceptive Alignment post, the idea of “Internalization of the base objective” is introduced. Arguably this is the point at which one might expect the issues I have raised to be most fleshed out, because in highlighting the possibility of “internalization” of the base objective, i.e. that it is possible for a mesa-objective function to be “adjusted towards the base objective function to the point where it is robustly aligned,” there is an implicit claim that robust alignment really can occur. So to understand this phenomenon, we might look for an explanation as to how this occurs. But the ensuing analysis is somewhat weak and vague, to the point that it is almost just a restatement of the claim that it purports to explain:
information about the base objective flows into the learned algorithm via the optimization performed by the base optimizer—the base objective is built into the mesa-optimizer as it is adapted by the base optimizer.
I could try to give my own interpretation of what happens when information about the base objective “flows into” the learned algorithm “via” the optimization process, but I would be making something up that does not appear in the text. And what follows is really just a discussion of some possible ways by which the mesa-optimizer comes to be able to use information about the base objective (it could get information about the base objective directly via the base optimizer or it could get it from the task inputs). None of it goes towards alleviating the specific concerns laid out above and none of it really explains with any conviction how true “internalization” happens. Moreover, a footnote admits that in fact the two routes by which the mesa-optimizer may come to be able to use information about the base objective do not even neatly correspond to the dichotomy given by ‘internalization of the base objective’ vs. ‘modelling of the base objective’.
Conclusions
My observations here run counter to any argument which suggests it is possible to ‘close the gap’ between the base and mesa objectives. As stated above, this suggests that the inner alignment problem has an essential nature: I claim that whenever mesa-optimization occurs, one needs to accept that internally, there is pressure towards a goal which one did not explicitly design.
Of course, a close reading of what has been said here really only shows that we cannot rely on the specific formalization I have used (though it may be no more than a few mathematical functions) while still maintaining the exact theoretical framework described in Risks From Learned Optimization. So there are two options. Either we try to revise the framework slightly, essentially omitting the notions of robust alignment and ‘internalization of the base objective’ and focussing more on revised versions of ‘proxy alignment’ and ‘approximate alignment’ as descriptors of what is essentially the best possible situation in terms of alignment. Or the fault lies with my formalization, and what I claim are conceptual issues are little more than notational or mathematical curiosities. If the latter is indeed the case, then at the very least we need to be explicit about whatever tacit assumptions have been made that imply that a formalization along the lines I have outlined is not a permissible analysis. For example, I can certainly imagine that it may be possible to add in details on a case-by-case basis, or at least to restrict to a specific, explicit class of base objectives and then explicitly define how to compare mesa-objectives to them. Perhaps those who object to my view will claim that this is what is really going on in people’s minds and that it simply has not been spelled out. However, at present, I believe that at the level of generality that Risks From Learned Optimization strives for, we simply cannot speak of mesa-objectives ‘agreeing with’, or even really of being ‘adjusted towards’, base objectives.
Remarks
As a final set of remarks, I want to briefly discuss the general attitude I have taken here. One might read this and think: yes, this all seems reasonable, but since it does not address the alignment problem at all, what was the point? Or one might think that it could all have been avoided had I simply read the Risks From Learned Optimization posts more charitably in the first place. Am I acting in bad faith? Surely I “get what they mean”? Indeed, I often feel that I can see, or could guess, what the authors are getting at. Why, then, have I gone out of my way to take them at their word to such an extent, just so that I can point out inconsistencies?
I want to end by describing some general, if somewhat vague and half-baked, thoughts about this kind of theoretical/conceptual AI Alignment work, in the hope that this helps to answer the above questions. In my humble opinion, one of the things that this type of work ought to be ‘worried about’ is that it exists in a kind of no-man’s-land between, on the one hand, more traditional academic work in fields like computer science and philosophy and, on the other, more ‘mainstream’ ML Safety, shall we say. For a while I have been wondering whether this kind of theoretical alignment work is doomed to remain in this no-man’s-land, propped up by a few success stories but mostly fed by a steady stream of informal arguments, futurological speculation, and ‘hand-waving’ on blogs and in comment sections. I of course do not fully know, but here are a couple of things that have come to mind when trying to think about this.

Firstly, when we have neither the luxury of mathematical proof nor the crutch of being backed up by working code and empirical results, it is even more important to subject arguments to a high level of scrutiny. There should be (and hopefully can be) a high bar of intellectual and academic rigour for theoretical/conceptual work in this area. It needs to strive to be as clean and clear as possible. And it is worth saying that one reason for this is so that it stands on its own two feet, so to speak, when interrogated outside of communities like this one.

Secondly, I feel it is important that the best arguments and ideas we have, and good critiques of them, appear ‘in the literature’. I certainly don’t advocate for a completely traditional model of dissemination and publication (there are many advantages to the Alignment Forum and the prevailing rationalist/EA/longtermist ecosystem and their ways of doing things), and of course many great ideas start out as hand-waving and speculation, but it will ultimately not be enough that some idea is ‘generally known’ in the online/EA alignment communities, or can be pieced together by combing through comment sections and the minds of the relevant people, if said idea never really is, or cannot be, ‘written up’ in a truly convincing way.

As I’ve said, these remarks are not fully fleshed out, and further discussion here doesn’t really seem appropriate. For now, the idea was to explain some of my motivation for taking the time to post something like this. All discussion and comments are welcome.