The post is about the complexity of what needs to be gotten inside the AI. If you had a perfect blackbox that exactly evaluated the thing-that-needs-to-be-inside-the-AI, this could possibly simplify some particular approaches to alignment, that would still in fact be too hard because nobody has a way of getting an AI to point at anything. But it would not change the complexity of what needs to be moved inside the AI, which is the narrow point that this post is about; and if you think that some larger thing is not correct, you should not confuse that with saying that the narrow point this post is about, is incorrect.
I claim that having such a function would simplify the AI alignment problem by reducing it from the hard problem of getting an AI to care about something complex (human value) to the easier problem of getting the AI to care about that particular function (which is simple, as the function can be hooked up to the AI directly).
One cannot hook up a function to an AI directly; it has to be physically instantiated somehow. For example, the function could be a human pressing a button; and then, any experimentation on the AI’s part to determine what “really” controls the button, will find that administering drugs to the human, or building a robot to seize control of the reward button, is “really” (from the AI’s perspective) the true meaning of the reward button after all! Perhaps you do not have this exact scenario in mind. So would you care to spell out what clever methodology you think invalidates what you take to be the larger point of this post—though of course it has no bearing on the actual point that this post makes?
The post is about the complexity of what needs to be gotten inside the AI. If you had a perfect blackbox that exactly evaluated the thing-that-needs-to-be-inside-the-AI, this could possibly simplify some particular approaches to alignment, that would still in fact be too hard because nobody has a way of getting an AI to point at anything.
I think it’s important to be able to make a narrow point about outer alignment without needing to defend a broader thesis about the entire alignment problem. To the extent my argument is “outer alignment seems easier than you portrayed it to be in this post, and elsewhere”, then your reply here that inner alignment is still hard doesn’t seem like it particularly rebuts my narrow point.
This post definitely seems to relevantly touch on the question of outer alignment, given the premise that we are explicitly specifying the conditions that the outcome pump needs to satisfy in order for the outcome pump to produce a safe outcome. Explicitly specifying a function that delineates safe from unsafe outcomes is essentially the prototypical case of an outer alignment problem. I was making a point about this aspect of the post, rather than a more general point about how all of alignment is easy.
(It’s possible that you’ll reply to me by saying “I never intended people to interpret me as saying anything about outer alignment in this post” despite the clear portrayal of an outer alignment problem in the post. Even so, I don’t think what you intended really matters that much here. I’m responding to what was clearly and explicitly written, rather than what was in your head at the time, which is unknowable to me.)
One cannot hook up a function to an AI directly; it has to be physically instantiated somehow. For example, the function could be a human pressing a button; and then, any experimentation on the AI’s part to determine what “really” controls the button, will find that administering drugs to the human, or building a robot to seize control of the reward button, is “really” (from the AI’s perspective) the true meaning of the reward button after all! Perhaps you do not have this exact scenario in mind.
It seems you’re assuming here that something like iterated amplification and distillation will simply fail, because the supervisor function that provides rewards to the model can be hacked or deceived. I think my response to this is that I just tend to be more optimistic than you are that we can end up doing safe supervision where the supervisor ~always remains in control, and they can evaluate the AI’s outputs accurately, more-or-less sidestepping the issues you mention here.
I think my reasons for believing this are pretty mundane: I’d point to the fact that evaluation tends to be easier than generation, and the fact that we can employ non-agentic tools to help evaluate, monitor, and control our models to provide them accurate rewards without getting hacked. I think your general pessimism about these things is fairly unwarranted, and my guess is that if you had made specific predictions about this question in the past, about what will happen prior to world-ending AI, these predictions would largely have fared worse than predictions from someone like Paul Christiano.
Your distinction between “outer alignment” and “inner alignment” is both ahistorical and unYudkowskian. It was invented years after this post was written, by someone who wasn’t me; and though I’ve sometimes used the terms on occasions where they seem to fit unambiguously, it’s not something I see as a clear ontological division, especially if you’re talking about questions like “If we own the following kind of blackbox, would alignment get any easier?” which on my view breaks that ontology. So I strongly reject your frame that this post was “clearly portraying an outer alignment problem” and can be criticized on those grounds by you; that is anachronistic.
You are now dragging in a very large number of further inferences about “what I meant”, and other implications that you think this post has, which are about Christiano-style proposals that were developed years after this post. I have disagreements with those, many disagreements. But it is definitely not what this post is about, one way or another, because this post predates Christiano being on the scene.
What this post is trying to illustrate is that if you try putting crisp physical predicates on reality, that won’t work to say what you want. This point is true! If you then want to take in a bunch of anachronistic ideas developed later, and claim (wrongly imo) that this renders irrelevant the simple truth of what this post actually literally says, that would be a separate conversation. But if you’re doing that, please distinguish the truth of what this post actually says versus how you think these other later clever ideas evade or bypass that truth.
What this post is trying to illustrate is that if you try putting crisp physical predicates on reality, that won’t work to say what you want. This point is true!
Matthew is not disputing this point, as far as I can tell.
Instead, he is trying to critique some version of[1] the “larger argument” (mentioned in the May 2024 update to this post) in which this point plays a role.
You have exhorted him several times to distinguish between that larger argument and the narrow point made by this post:
[...] and if you think that some larger thing is not correct, you should not confuse that with saying that the narrow point this post is about, is incorrect [...]
But if you’re doing that, please distinguish the truth of what this post actually says versus how you think these other later clever ideas evade or bypass that truth.
But it seems to me that he’s already doing this. He’s not alleging that this post is incorrect in isolation.
The only reason this discussion is happening in the comments of this post at all is the May 2024 update at the start of it, which Matthew used as a jumping-off point for saying “my critique of the ‘larger argument’ does not make the mistake referred to in the May 2024 update[2], but people keep saying it does[3], so I’ll try restating that critique again in the hopes it will be clearer this time.”
I say “some version of” to allow for a distinction between (a) the “larger argument” of Eliezer_2007’s which this post was meant to support in 2007, and (b) whatever version of the same “larger argument” was a standard MIRI position as of roughly 2016-2017.
As far as I can tell, Matthew is only interested in evaluating the 2016-2017 MIRI position, not the 2007 EY position (insofar as the latter is different, if in fact it is). When he cites older EY material, he does so as a means to an end – either as indirect evidence of later MIRI positions, or because it was itself cited in the later MIRI material which is his main topic.
Note that the current version of Matthew’s 2023 post includes multiple caveats that he’s not making the mistake referred to in the May 2024 update.
Note also that Matthew’s post only mentions this post in two relatively minor ways, first to clarify that he doesn’t make the mistake referred to in the update (unlike some “Non-MIRI people” who do make the mistake), and second to support an argument about whether “Yudkowsky and other MIRI people” believe that it could be sufficient to get a single human’s values into the AI, or whether something like CEV would be required instead.
I bring up the mentions of this post in Matthew’s post in order to clarify what role “is ‘The Hidden Complexity of Wishes’ correct in isolation, considered apart from anything outside it?” plays in Matthew’s critique – namely, none at all, IIUC.
(I realize that Matthew’s post has been edited over time, so I can only speak to the current version.)
To be fully explicit: I’m not claiming anything about whether or not the May 2024 update was about Matthew’s 2023 post (alone or in combination with anything else) or not. I’m just rephrasing what Matthew said in the first comment of this thread (which was also agnostic on the topic of whether the update referred to him).
Matthew is not disputing this point, as far as I can tell.
Instead, he is trying to critique some version of[1] the “larger argument” (mentioned in the May 2024 update to this post) in which this point plays a role.
I’ll confirm that I’m not saying this post’s exact thesis is false. This post seems to be largely a parable about a fictional device, rather than an explicit argument with premises and clear conclusions. I’m not saying the parable is wrong. Parables are rarely “wrong” in a strict sense, and I am not disputing this parable’s conclusion.
However, I am saying: this parable presumably played some role in the “larger” argument that MIRI has made in the past. What role did it play? Well, I think a good guess is that it portrayed the difficulty of precisely specifying what you want or intend, for example when explicitly designing a utility function. This problem was often alleged to be difficult because, when you want something complex, it’s difficult to perfectly delineate potential “good” scenarios and distinguish them from all potential “bad” scenarios. This is the problem I was analyzing in my original comment.
While the term “outer alignment” was not invented to describe this exact problem until much later, I was using that term purely as descriptive terminology for the problem this post clearly describes, rather than claiming that Eliezer in 2007 was deliberately describing something that he called “outer alignment” at the time. Because my usage of “outer alignment” was merely descriptive in this sense, I reject the idea that my comment was anachronistic.
And again: I am not claiming that this post is inaccurate in isolation. In both my above comment, and in my 2023 post, I merely cited this post as portraying an aspect of the problem that I was talking about, rather than saying something like “this particular post’s conclusion is wrong”. I think the fact that the post doesn’t really have a clear thesis in the first place means that it can’t be wrong in a strong sense at all. However, the post was definitely interpreted as explaining some part of why alignment is hard — for a long time by many people — and I was critiquing the particular application of the post to this argument, rather than the post itself in isolation.
Here’s an argument that alignment is difficult which uses complexity of value as a subpoint:
A1. If you try to manually specify what you want, you fail.
A2. Therefore, what you want is algorithmically complex.
B1. When humanity makes an AGI, the AGI will have gotten values via some process; that process induces some probability distribution over what values the AGI ends up with.
B2. We want to affect the values-distribution, somehow, so that it ends up with our values.
B3. We don’t understand how to affect the values-distribution toward something specific.
B4. If we don’t affect the values-distribution toward something specific, then the values-distribution probably puts large penalties on absolute algorithmic complexity; any specific utility function with higher absolute algorithmic complexity will be less likely to be the one that the AGI ends up with.
C1. Because of A2 (our values are algorithmically complex) and B4 (a complex utility function is unlikely to show up in an AGI without us skillfully intervening), an AGI is unlikely to have our values without us skillfully intervening.
C2. Because of B3 (we don’t know how to skillfully intervene on an AGI’s values) and C1, an AGI is unlikely to have our values.
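A toy numerical sketch of this first argument may help make it concrete. Everything below is an illustrative stand-in of mine (the prior, the target, the numbers), not anything from the thread: if the process that fixes the AGI's values effectively samples a "utility program" from a simplicity prior, then hitting one particular complex target by luck alone is vanishingly rare.

```python
import random

# Toy model of B1 + B4 -> C1: draw a "utility program" (a bitstring) from
# a crude simplicity prior, where length L occurs with probability ~2**-L,
# and ask how often we hit one specific complex target by luck.
rng = random.Random(0)

def sample_program(rng, max_len=64):
    """Sample a bitstring with a simplicity bias: longer strings are
    exponentially less likely (continue with prob 1/2 after each bit)."""
    bits = [rng.randint(0, 1)]
    while len(bits) < max_len and rng.random() < 0.5:
        bits.append(rng.randint(0, 1))
    return tuple(bits)

# A fixed "complex" 12-bit value specification, chosen arbitrarily.
target = (1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1)

hits = sum(sample_program(rng) == target for _ in range(100_000))
# Matching requires both the right length and the right bits
# (~2**-24 per draw here), so even 100k draws essentially never hit.
print("hits:", hits)
```

The point of the sketch is only that probability mass divides among all candidate programs, so no particular nontrivial target gets much of it without intervention.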
I think that you think that the argument under discussion is something like:
(same) A1. If you try to manually specify what you want, you fail.
(same) A2. Therefore, what you want is algorithmically complex.
(same) B1. When humanity makes an AGI, the AGI will have gotten values via some process; that process induces some probability distribution over what values the AGI ends up with.
(same) B2. We want to affect the values-distribution, somehow, so that it ends up with our values.
B′3. The greater the complexity of our values, the harder it is to point at our values.
B′4. The harder it is to point at our values, the more work or difficulty is involved in B2.
C′1. By B′3 and B′4: the greater the complexity of our values, the more work or difficulty is involved in B2 (determining the AGI’s values).
C′2. Because of A2 (our values are algorithmically complex) and C′1, it would take a lot of work to make an AGI pursue our values.
These are different arguments, which make use of the complexity of values in different ways. You dispute B′3 on the grounds that it can be easy to point at complex values. B′3 isn’t used in the first argument though.
In the situation assumed by your first argument, AGI would be very unlikely to share our values even if our values were much simpler than they are.
Complexity makes things worse, yes, but the conclusion “AGI is unlikely to have our values” is already entailed by the other premises even if we drop the stuff about complexity.
Why: if we’re just sampling some function from a simplicity prior, we’re very unlikely to get any particular nontrivial function that we’ve decided to care about in advance of the sampling event. There are just too many possible functions, and probability mass has to get divided among them all.
In other words, if it takes N bits to specify human values, there are 2^N ways that a bitstring of the same length could be set, and we’re hoping to land on just one of those through luck alone. (And to land on a bitstring of this specific length in the first place, of course.) Unless N is very small, such a coincidence is extremely unlikely.
And N is not going to be that small; even in the sort of naive and overly simple “hand-crafted” value specifications which EY has critiqued in this post and elsewhere, a lot of details have to be specified. (E.g. some proposals refer to “humans” and so a full algorithmic description of them would require an account of what is and isn’t a human.)
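As a back-of-the-envelope illustration of how fast this coincidence becomes impossible (the values of N below are invented; nothing here estimates the real complexity of human values):

```python
# Probability of landing on one specific N-bit specification by
# uniform luck is 2**-N; it collapses long before N gets large.
for n_bits in (10, 100, 1000):
    p = 2.0 ** -n_bits
    print(f"N={n_bits}: P(luck) = {p:.3e}")
```

Even N = 100 already puts the chance below 10^-30; any remotely realistic specification length makes "luck alone" a non-starter.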
One could devise a variant of this argument that doesn’t have this issue, by “relaxing the problem” so that we have some control, just not enough to pin down the sampled function exactly. And then the remaining freedom is filled randomly with a simplicity bias. This partial control might be enough to make a simple function likely, while not being able to make a more complex function likely. (Hmm, perhaps this is just your second argument, or a version of it.)
This kind of reasoning might be applicable in a world where its premises are true, but I don’t think its premises are true in our world.
In practice, we apparently have no trouble getting machines to compute very complex functions, including (as Matthew points out) specifications of human value whose robustness would have seemed like impossible magic back in 2007. The main difficulty, if there is one, is in “getting the function to play the role of the AGI values,” not in getting the AGI to compute the particular function we want in the first place.
The main difficulty, if there is one, is in “getting the function to play the role of the AGI values,” not in getting the AGI to compute the particular function we want in the first place.
Right, that is the problem (and IDK of anyone discussing this who says otherwise).
Another position would be that it’s probably easy to influence a few bits of the AI’s utility function, but not others. For example, it’s conceivable that, by doing capabilities research in different ways, you could increase the probability that the AGI is highly ambitious—e.g. tries to take over the whole lightcone, tries to acausally bargain, etc., rather than being more satisficy. (IDK how to do that, but plausibly it’s qualitatively easier than alignment.) Then you could claim that it’s half a bit more likely that you’ve made an FAI, given that an FAI would probably be ambitious. In this case, it does matter that the utility function is complex.
While the term “outer alignment” wasn’t coined until later to describe the exact issue that I’m talking about, I was using that term purely as a descriptive label for the problem this post clearly highlights, rather than implying that you were using or aware of the term in 2007.
Because I was simply using “outer alignment” in this descriptive sense, I reject the notion that my comment was anachronistic. I used that term as shorthand for the thing I was talking about, which is clearly and obviously portrayed by your post, that’s all.
To be very clear: the exact problem I am talking about is the inherent challenge of precisely defining what you want or intend, especially (though not exclusively) in the context of designing a utility function. This difficulty arises because, when the desired outcome is complex, it becomes nearly impossible to perfectly delineate between all potential ‘good’ scenarios and all possible ‘bad’ scenarios. This challenge has been a recurring theme in discussions of alignment, as it’s considered hard to capture every nuance of what you want in your specification without missing an edge case.
This problem is manifestly portrayed by your post, using the example of an outcome pump to illustrate. I was responding to this portrayal of the problem, and specifically saying that this specific narrow problem seems easier in light of LLMs, for particular reasons.
It is frankly frustrating to me that, from my perspective, you seem to have reliably missed the point of what I am trying to convey here.
I only brought up Christiano-style proposals because I thought you were changing the topic to a broader discussion, specifically to ask me what methodologies I had in mind when I made particular points. If you had not asked me “So would you care to spell out what clever methodology you think invalidates what you take to be the larger point of this post—though of course it has no bearing on the actual point that this post makes?” then I would not have mentioned those things. In any case, none of the things I said about Christiano-style proposals were intended to critique this post’s narrow point. I was responding to that particular part of your comment instead.
As far as the actual content of this post, I do not dispute its exact thesis. The post seems to be a parable, not a detailed argument with a clear conclusion. The parable seems interesting to me. It also doesn’t seem wrong, in any strict sense. However, I do think that some of the broader conclusions that many people have drawn from the parable seem false, in context. I was responding to the specific way that this post had been applied and interpreted in broader arguments about AI alignment.
My central thesis in regards to this post is simply: the post clearly portrays a specific problem that was later called the “outer alignment” problem by other people. This post portrays this problem as being difficult in a particular way. And I think this portrayal is misleading, even if the literal parable holds up in pure isolation.
I think it’s important to be able to make a narrow point about outer alignment without needing to defend a broader thesis about the entire alignment problem.
A mind that ever wishes to learn anything complicated, must learn to cultivate an interest in which particular exact argument steps are valid, apart from whether you yet agree or disagree with the final conclusion, because only in this way can you sort through all the arguments and finally sum them.
Algorithmic complexity is precisely analogous to difficulty-of-learning-to-predict, so saying “it’s not about learning to predict, it’s about algorithmic complexity” doesn’t make sense. One read of the original is: learning to respect common sense moral side constraints is tricky[1], but AI systems will learn how to do it in the end. I’d be happy to call this read correct, and it is consistent with the observation that today’s AI systems do respect common sense moral side constraints given straightforward requests, and that it took a few years to figure out how to do it. That read doesn’t really jibe with your commentary.
Your commentary seems to situate this post within a larger argument: teaching a system to “act” is different to teaching it to “predict” because in the former case a sufficiently capable learner’s behaviour can collapse to a pathological policy, whereas teaching a capable learner to predict does not risk such collapse. Thus “prediction” is distinguished from “algorithmic complexity”. Furthermore, commonsense moral side constraints are complex enough to risk such collapse when we train an “actor” but not a “predictor”. This seems confused.
First, all we need to turn a language model prediction into an action is a means of turning text into action, and we have many such means. So the distinction between text predictor and actor is suspect. We could consider an alternative knows/cares distinction: does a system act properly when properly incentivised (“knows”) vs does it act properly when presented with whatever context we are practically able to give it (“”“cares”””)? Language models usually act properly given simple prompts, so in this sense they “care”. So rejecting evidence from language models does not seem well justified.
Second, there’s no need to claim that commonsense moral side constraints in particular are so hard that trying to develop AI systems that respect them leads to policy collapse. It need only be the case that one of the things we try to teach them to do leads to policy collapse. Teaching values is not particularly notable among all the things we might want AI systems to do; it certainly does not seem to be among the hardest. Focussing on values makes the argument unnecessarily weak.
Third, algorithmic complexity is measured with respect to a prior. The post invokes (but does not justify) an “English speaking evil genie” prior. I don’t think anyone thinks this is a serious prior for reasoning about advanced AI system behaviour. But the post is (according to your commentary, if not the post itself) making a quantitative point—values are sufficiently complex to induce policy collapse—but it’s measuring this quantity using a nonsense prior. If the quantitative argument was indeed the original point, it is mystifying why a nonsense prior was chosen to make it, and also why no effort was made to justify the prior.
My question is: why, exactly, is the following statement true?
Second, there’s no need to claim that commonsense moral side constraints in particular are so hard that trying to develop AI systems that respect them leads to policy collapse. It need only be the case that one of the things we try to teach them to do leads to policy collapse.
Here’s a basic model of policy collapse: suppose there exist pathological policies of low prior probability (/high algorithmic complexity) such that they play the training game when it is strategically wise to do so, and when they get a good opportunity they defect in order to pursue some unknown aim.
Because they play the training game, a wide variety of training objectives will collapse to one of these policies if the system in training starts exploring policies of sufficiently high algorithmic complexity. So, according to this crude model, there’s a complexity bound: stay under it and you’re fine, go over it and you get pathological behaviour. Roughly, whatever desired behaviour requires the most algorithmically complex policy is the one that is most pertinent for assessing policy collapse risk (because that’s the one that contributes most of the algorithmic complexity, and so it gives your first-order estimate of whether or not you’re crossing the collapse threshold). So, which desired behaviour requires the most complex policy: is it, for example, respecting commonsense moral constraints, or is it inventing molecular nanotechnology?
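A crude formalization of this "complexity bound" picture, with every number and behaviour name invented purely for illustration (this is my sketch of the model described above, not anyone's actual proposal):

```python
# Hypothetical model: each desired behaviour needs a policy of some minimum
# algorithmic complexity; the training run risks collapse iff the *maximum*
# required complexity crosses a threshold. All figures are made up.
COLLAPSE_THRESHOLD = 10_000  # bits; hypothetical

required_complexity = {  # bits needed for the simplest adequate policy
    "moral_side_constraints": 5_000,
    "molecular_nanotech": 50_000,
}

def collapse_risk(behaviours):
    # First-order estimate: only the most complex required behaviour matters.
    dominant = max(behaviours, key=behaviours.get)
    return dominant, behaviours[dominant] > COLLAPSE_THRESHOLD

dominant, at_risk = collapse_risk(required_complexity)
print(dominant, at_risk)
```

Under these made-up numbers, the capability goal, not the values goal, is what pushes the run over the threshold, which is the point of the rhetorical question above.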
Tangentially, the policy collapse theory does not predict outcomes that look anything like malicious compliance. It predicts that, if you’re in a position of power over the AI system, your mother is saved exactly as you want her to be. If you are not in such a position then your mother is not saved at all and you get a nanobot war instead or something. That is, if you do run afoul of policy collapse, it doesn’t matter if you want your system to pursue simple or complex goals, you’re up shit creek either way.
The post is about the complexity of what needs to be gotten inside the AI. If you had a perfect blackbox that exactly evaluated the thing-that-needs-to-be-inside-the-AI, this could possibly simplify some particular approaches to alignment, that would still in fact be too hard because nobody has a way of getting an AI to point at anything. But it would not change the complexity of what needs to be moved inside the AI, which is the narrow point that this post is about; and if you think that some larger thing is not correct, you should not confuse that with saying that the narrow point this post is about, is incorrect.
One cannot hook up a function to an AI directly; it has to be physically instantiated somehow. For example, the function could be a human pressing a button; and then, any experimentation on the AI’s part to determine what “really” controls the button, will find that administering drugs to the human, or building a robot to seize control of the reward button, is “really” (from the AI’s perspective) the true meaning of the reward button after all! Perhaps you do not have this exact scenario in mind. So would you care to spell out what clever methodology you think invalidates what you take to be the larger point of this post—though of course it has no bearing on the actual point that this post makes?
I think it’s important to be able to make a narrow point about outer alignment without needing to defend a broader thesis about the entire alignment problem. To the extent my argument is “outer alignment seems easier than you portrayed it to be in this post, and elsewhere”, then your reply here that inner alignment is still hard doesn’t seem like it particularly rebuts my narrow point.
This post definitely seems to relevantly touch on the question of outer alignment, given the premise that we are explicitly specifying the conditions that the outcome pump needs to satisfy in order for the outcome pump to produce a safe outcome. Explicitly specifying a function that delineates safe from unsafe outcomes is essentially the prototypical case of an outer alignment problem. I was making a point about this aspect of the post, rather than a more general point about how all of alignment is easy.
(It’s possible that you’ll reply to me by saying “I never intended people to interpret me as saying anything about outer alignment in this post” despite the clear portrayal of an outer alignment problem in the post. Even so, I don’t think what you intended really matters that much here. I’m responding to what was clearly and explicitly written, rather than what was in your head at the time, which is unknowable to me.)
It seems you’re assuming here that something like iterated amplification and distillation will simply fail, because the supervisor function that provides rewards to the model can be hacked or deceived. I think my response to this is that I just tend to be more optimistic than you are that we can end up doing safe supervision where the supervisor ~always remains in control, and they can evaluate the AI’s outputs accurately, more-or-less sidestepping the issues you mention here.
I think my reasons for believing this are pretty mundane: I’d point to the fact that evaluation tends to be easier than generation, and the fact that we can employ non-agentic tools to help evaluate, monitor, and control our models to provide them accurate rewards without getting hacked. I think your general pessimism about these things is fairly unwarranted, and my guess is that if you had made specific predictions about this question in the past, about what will happen prior to world-ending AI, these predictions would largely have fared worse than predictions from someone like Paul Christiano.
Your distinction between “outer alignment” and “inner alignment” is both ahistorical and unYudkowskian. It was invented years after this post was written, by someone who wasn’t me; and though I’ve sometimes used the terms in occasions where they seem to fit unambiguously, it’s not something I see as a clear ontological division, especially if you’re talking about questions like “If we own the following kind of blackbox, would alignment get any easier?” which on my view breaks that ontology. So I strongly reject your frame that this post was “clearly portraying an outer alignment problem” and can be criticized on those grounds by you; that is anachronistic.
You are now dragging in a very large number of further inferences about “what I meant”, and other implications that you think this post has, which are about Christiano-style proposals that were developed years after this post. I have disagreements with those, many disagreements. But it is definitely not what this post is about, one way or another, because this post predates Christiano being on the scene.
What this post is trying to illustrate is that if you try putting crisp physical predicates on reality, that won’t work to say what you want. This point is true! If you then want to take in a bunch of anachronistic ideas developed later, and claim (wrongly imo) that this renders irrelevant the simple truth of what this post actually literally says, that would be a separate conversation. But if you’re doing that, please distinguish the truth of what this post actually says versus how you think these other later clever ideas evade or bypass that truth.
Matthew is not disputing this point, as far as I can tell.
Instead, he is trying to critique some version of[1] the “larger argument” (mentioned in the May 2024 update to this post) in which this point plays a role.
You have exhorted him several times to distinguish between that larger argument and the narrow point made by this post:
But it seems to me that he’s already doing this. He’s not alleging that this post is incorrect in isolation.
The only reason this discussion is happened on the comments of this post at all is the May 2024 update at the start of it, which Matthew used as a jumping-off point for saying “my critique of the ‘larger argument’ does not make the mistake referred to in the May 2024 update[2], but people keep saying it does[3], so I’ll try restating that critique again in the hopes it will be clearer this time.”
I say “some version of” to allow for a distinction between (a) the “larger argument” of Eliezer_2007’s which this post was meant to support in 2007, and (b) whatever version of the same “larger argument” was a standard MIRI position as of roughly 2016-2017.
As far as I can tell, Matthew is only interested in evaluating the 2016-2017 MIRI position, not the 2007 EY position (insofar as the latter is different, if in fact it is). When he cites older EY material, he does so as a means to an end – either as indirect evidence of later MIRI positions, or because it was itself cited in the later MIRI material which is his main topic.
Note that the current version of Matthew’s 2023 post includes multiple caveats that he’s not making the mistake referred to in the May 2024 update.
Note also that Matthew’s post only mentions this post in two relatively minor ways, first to clarify that he doesn’t make the mistake referred to in the update (unlike some “Non-MIRI people” who do make the mistake), and second to support an argument about whether “Yudkowsky and other MIRI people” believe that it could be sufficient to get a single human’s values into the AI, or whether something like CEV would be required instead.
I bring up the mentions of this post in Matthew’s post in order to clarify what role “is ‘The Hidden Complexity of Wishes’ correct in isolation, considered apart from anything outside it?” plays in Matthew’s critique – namely, none at all, IIUC.
(I realize that Matthew’s post has been edited over time, so I can only speak to the current version.)
To be fully explicit: I’m not claiming anything about whether or not the May 2024 update was about Matthew’s 2023 post (alone or in combination with anything else). I’m just rephrasing what Matthew said in the first comment of this thread (which was also agnostic on the topic of whether the update referred to him).
I’ll confirm that I’m not saying this post’s exact thesis is false. This post seems to be largely a parable about a fictional device, rather than an explicit argument with premises and clear conclusions. I’m not saying the parable is wrong. Parables are rarely “wrong” in a strict sense, and I am not disputing this parable’s conclusion.
However, I am saying: this parable presumably played some role in the “larger” argument that MIRI has made in the past. What role did it play? Well, I think a good guess is that it portrayed the difficulty of precisely specifying what you want or intend, for example when explicitly designing a utility function. This problem was often alleged to be difficult because, when you want something complex, it’s difficult to perfectly delineate potential “good” scenarios and distinguish them from all potential “bad” scenarios. This is the problem I was analyzing in my original comment.
While the term “outer alignment” was not invented to describe this exact problem until much later, I was using that term purely as descriptive terminology for the problem this post clearly describes, rather than claiming that Eliezer in 2007 was deliberately describing something that he called “outer alignment” at the time. Because my usage of “outer alignment” was merely descriptive in this sense, I reject the idea that my comment was anachronistic.
And again: I am not claiming that this post is inaccurate in isolation. In both my above comment, and in my 2023 post, I merely cited this post as portraying an aspect of the problem that I was talking about, rather than saying something like “this particular post’s conclusion is wrong”. I think the fact that the post doesn’t really have a clear thesis in the first place means that it can’t be wrong in a strong sense at all. However, the post was definitely interpreted as explaining some part of why alignment is hard — for a long time by many people — and I was critiquing the particular application of the post to this argument, rather than the post itself in isolation.
Here’s an argument that alignment is difficult which uses complexity of value as a subpoint:
A1. If you try to manually specify what you want, you fail.
A2. Therefore, you want something algorithmically complex.
B1. When humanity makes an AGI, the AGI will have gotten values via some process; that process induces some probability distribution over what values the AGI ends up with.
B2. We want to affect the values-distribution, somehow, so that it ends up with our values.
B3. We don’t understand how to affect the values-distribution toward something specific.
B4. If we don’t affect the values-distribution toward something specific, then the values-distribution probably puts large penalties on absolute algorithmic complexity; any specific utility function with higher absolute algorithmic complexity will be less likely to be the one that the AGI ends up with.
C1. Because of A2 (our values are algorithmically complex) and B4 (a complex utility function is unlikely to show up in an AGI without us skillfully intervening), an AGI is unlikely to have our values without us skillfully intervening.
C2. Because of B3 (we don’t know how to skillfully intervene on an AGI’s values) and C1, an AGI is unlikely to have our values.
I think that you think that the argument under discussion is something like:
(same) A1. If you try to manually specify what you want, you fail.
(same) A2. Therefore, you want something algorithmically complex.
(same) B1. When humanity makes an AGI, the AGI will have gotten values via some process; that process induces some probability distribution over what values the AGI ends up with.
(same) B2. We want to affect the values-distribution, somehow, so that it ends up with our values.
B′3. The greater the complexity of our values, the harder it is to point at our values.
B′4. The harder it is to point at our values, the more work or difficulty is involved in B2.
C′1. By B′3 and B′4: the greater the complexity of our values, the more work or difficulty is involved in B2 (determining the AGI’s values).
C′2. Because of A2 (our values are algorithmically complex) and C′1, it would take a lot of work to make an AGI pursue our values.
These are different arguments, which make use of the complexity of values in different ways. You dispute B′3 on the grounds that it can be easy to point at complex values. B′3 isn’t used in the first argument though.
In the situation assumed by your first argument, AGI would be very unlikely to share our values even if our values were much simpler than they are.
Complexity makes things worse, yes, but the conclusion “AGI is unlikely to have our values” is already entailed by the other premises even if we drop the stuff about complexity.
Why: if we’re just sampling some function from a simplicity prior, we’re very unlikely to get any particular nontrivial function that we’ve decided to care about in advance of the sampling event. There are just too many possible functions, and probability mass has to get divided among them all.
In other words, if it takes N bits to specify human values, there are 2^N ways that a bitstring of the same length could be set, and we’re hoping to land on just one of those through luck alone. (And to land on a bitstring of this specific length in the first place, of course.) Unless N is very small, such a coincidence is extremely unlikely.
And N is not going to be that small; even in the sort of naive and overly simple “hand-crafted” value specifications which EY has critiqued in this post and elsewhere, a lot of details have to be specified. (E.g. some proposals refer to “humans” and so a full algorithmic description of them would require an account of what is and isn’t a human.)
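The arithmetic here can be made concrete with a toy calculation (mine, not from the thread): under a uniform draw over N-bit strings, the chance of landing on any one pre-specified string is 2^-N, which vanishes long before N reaches realistic specification sizes.

```python
# Toy illustration (not from the thread): the chance of sampling one
# pre-specified N-bit value specification by luck alone, assuming a
# uniform draw over all 2^N bitstrings of that length.
def hit_probability(n_bits: int) -> float:
    """Probability of landing on one particular string out of 2**n_bits."""
    return 2.0 ** -n_bits

# Even modest specification sizes make the coincidence hopeless.
for n in (8, 64, 256):
    print(f"N = {n:>3}: probability ~ {hit_probability(n):.3e}")
```

A real simplicity prior would weight strings non-uniformly, but that only changes which particular functions are favored; any fixed complex target still gets a vanishing share of the probability mass.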
One could devise a variant of this argument that doesn’t have this issue, by “relaxing the problem” so that we have some control, just not enough to pin down the sampled function exactly. And then the remaining freedom is filled randomly with a simplicity bias. This partial control might be enough to make a simple function likely, while not being able to make a more complex function likely. (Hmm, perhaps this is just your second argument, or a version of it.)
This kind of reasoning might be applicable in a world where its premises are true, but I don’t think its premises are true in our world.
In practice, we apparently have no trouble getting machines to compute very complex functions, including (as Matthew points out) specifications of human value whose robustness would have seemed like impossible magic back in 2007. The main difficulty, if there is one, is in “getting the function to play the role of the AGI values,” not in getting the AGI to compute the particular function we want in the first place.
Right, that is the problem (and IDK of anyone discussing this who says otherwise).
Another position would be that it’s probably easy to influence a few bits of the AI’s utility function, but not others. For example, it’s conceivable that, by doing capabilities research in different ways, you could increase the probability that the AGI is highly ambitious—e.g. tries to take over the whole lightcone, tries to acausally bargain, etc., rather than being more satisficy. (IDK how to do that, but plausibly it’s qualitatively easier than alignment.) Then you could claim that it’s half a bit more likely that you’ve made an FAI, given that an FAI would probably be ambitious. In this case, it does matter that the utility function is complex.
While the term “outer alignment” wasn’t coined until later to describe the exact issue that I’m talking about, I was using that term purely as a descriptive label for the problem this post clearly highlights, rather than implying that you were using or aware of the term in 2007.
Because I was simply using “outer alignment” in this descriptive sense, I reject the notion that my comment was anachronistic. I used that term as shorthand for the thing I was talking about, which is clearly and obviously portrayed by your post, that’s all.
To be very clear: the exact problem I am talking about is the inherent challenge of precisely defining what you want or intend, especially (though not exclusively) in the context of designing a utility function. This difficulty arises because, when the desired outcome is complex, it becomes nearly impossible to perfectly delineate between all potential ‘good’ scenarios and all possible ‘bad’ scenarios. This challenge has been a recurring theme in discussions of alignment, as it’s considered hard to capture every nuance of what you want in your specification without missing an edge case.
This problem is manifestly portrayed by your post, using the example of an outcome pump to illustrate. I was responding to this portrayal of the problem, and specifically saying that this specific narrow problem seems easier in light of LLMs, for particular reasons.
It is frankly frustrating to me that, from my perspective, you seem to have reliably missed the point of what I am trying to convey here.
I only brought up Christiano-style proposals because I thought you were changing the topic to a broader discussion, specifically to ask me what methodologies I had in mind when I made particular points. If you had not asked me “So would you care to spell out what clever methodology you think invalidates what you take to be the larger point of this post—though of course it has no bearing on the actual point that this post makes?” then I would not have mentioned those things. In any case, none of the things I said about Christiano-style proposals were intended to critique this post’s narrow point. I was responding to that particular part of your comment instead.
As far as the actual content of this post, I do not dispute its exact thesis. The post seems to be a parable, not a detailed argument with a clear conclusion. The parable seems interesting to me. It also doesn’t seem wrong, in any strict sense. However, I do think that some of the broader conclusions that many people have drawn from the parable seem false, in context. I was responding to the specific way that this post had been applied and interpreted in broader arguments about AI alignment.
My central thesis in regards to this post is simply: the post clearly portrays a specific problem that was later called the “outer alignment” problem by other people. This post portrays this problem as being difficult in a particular way. And I think this portrayal is misleading, even if the literal parable holds up in pure isolation.
Indeed. For it is written:
Algorithmic complexity is precisely analogous to difficulty-of-learning-to-predict, so saying “it’s not about learning to predict, it’s about algorithmic complexity” doesn’t make sense. One read of the original is: learning to respect common sense moral side constraints is tricky[1], but AI systems will learn how to do it in the end. I’d be happy to call this read correct, and it is consistent with the observation that today’s AI systems do respect common sense moral side constraints given straightforward requests, and that it took a few years to figure out how to do it. That read doesn’t really jibe with your commentary.
Your commentary seems to situate this post within a larger argument: teaching a system to “act” is different to teaching it to “predict” because in the former case a sufficiently capable learner’s behaviour can collapse to a pathological policy, whereas teaching a capable learner to predict does not risk such collapse. Thus “prediction” is distinguished from “algorithmic complexity”. Furthermore, commonsense moral side constraints are complex enough to risk such collapse when we train an “actor” but not a “predictor”. This seems confused.
First, all we need to turn a language model prediction into an action is a means of turning text into action, and we have many such means. So the distinction between text predictor and actor is suspect. We could consider an alternative knows/cares distinction: does a system act properly when properly incentivised (“knows”) vs does it act properly when presented with whatever context we are practically able to give it (“”“cares”””)? Language models usually act properly given simple prompts, so in this sense they “care”. So rejecting evidence from language models does not seem well justified.
Second, there’s no need to claim that commonsense moral side constraints in particular are so hard that trying to develop AI systems that respect them leads to policy collapse. It need only be the case that one of the things we try to teach them to do leads to policy collapse. Teaching values is not particularly notable among all the things we might want AI systems to do; it certainly does not seem to be among the hardest. Focussing on values makes the argument unnecessarily weak.
Third, algorithmic complexity is measured with respect to a prior. The post invokes (but does not justify) an “English-speaking evil genie” prior. I don’t think anyone thinks this is a serious prior for reasoning about advanced AI system behaviour. Yet the post is (according to your commentary, if not the post itself) making a quantitative point (values are sufficiently complex to induce policy collapse) while measuring this quantity using a nonsense prior. If the quantitative argument was indeed the original point, it is mystifying why a nonsense prior was chosen to make it, and also why no effort was made to justify the prior.
the text proposes full value alignment as a solution to the commonsense side constraints problem, but this turned out to be stronger than necessary.
My question is: why, exactly, is the following statement true?
Here’s a basic model of policy collapse: suppose there exist pathological policies of low prior probability (/high algorithmic complexity) such that they play the training game when it is strategically wise to do so, and when they get a good opportunity they defect in order to pursue some unknown aim.
Because they play the training game, a wide variety of training objectives will collapse to one of these policies if the system in training starts exploring policies of sufficiently high algorithmic complexity. So, according to this crude model, there’s a complexity bound: stay under it and you’re fine, go over it and you get pathological behaviour. Roughly, whatever desired behaviour requires the most algorithmically complex policy is the one that is most pertinent for assessing policy collapse risk (because that’s the one that contributes most of the algorithmic complexity, and so it gives your first-order estimate of whether or not you’re crossing the collapse threshold). So, which desired behaviour requires the most complex policy: is it, for example, respecting commonsense moral constraints, or is it inventing molecular nanotechnology?
Tangentially, the policy collapse theory does not predict outcomes that look anything like malicious compliance. It predicts that, if you’re in a position of power over the AI system, your mother is saved exactly as you want her to be. If you are not in such a position then your mother is not saved at all and you get a nanobot war instead or something. That is, if you do run afoul of policy collapse, it doesn’t matter if you want your system to pursue simple or complex goals, you’re up shit creek either way.