It has come to my attention that this article is currently being misrepresented as proof that I/MIRI previously advocated that it would be very difficult to get machine superintelligences to understand or predict human values. This would obviously be false, and also, is not what is being argued below. The example in the post below is not about an Artificial Intelligence literally at all! If the post were about what AIs supposedly can’t do, the central example would have used an AI! The point that is made below will be about the algorithmic complexity of human values. This point is relevant within a larger argument, because it bears on the complexity of what you need to get an artificial superintelligence to want or value; rather than bearing on what a superintelligence supposedly could not predict or understand. -- EY, May 2024.
I can’t tell whether this update to the post is addressed towards me. However, it seems possible that it is addressed towards me, since I wrote a post last year criticizing some of the ideas behind this post. In either case, whether it’s addressed towards me or not, I’d like to reply to the update.
For the record, I want to definitively clarify that I never interpreted MIRI as arguing that it would be difficult to get a machine superintelligence to understand or predict human values. That was never my thesis, and I spent considerable effort clarifying the fact that this was not my thesis in my post, stating multiple times that I never thought MIRI predicted it would be hard to get an AI to understand human values.
My thesis instead was about a subtly different thing, which is easy to misinterpret if you aren’t reading carefully. I was talking about something which Eliezer called the “value identification problem”, and which had been referenced on Arbital, and in other essays by MIRI, including under different names than the “value identification problem”. These other names included the “value specification” problem and the problem of “outer alignment” (at least in narrow contexts).
I didn’t expect this much confusion when I wrote the post, because I thought that clarifying what I meant, and repeatedly distinguishing it from other things that I did not mean, would be sufficient to prevent rampant misinterpretation by so many people. However, evidently, such clarifications were insufficient, and I should have instead gone overboard in my precision and clarity. I think if I re-wrote the post now, I would try to provide something like 5 different independent examples demonstrating how I was talking about a different thing than the problem of getting an AI to “understand” or “predict” human values.
At the very least, I can try now to give a bit more clarification about what I meant, just in case doing this one more time causes the concept to “click” in someone’s mind:
Eliezer doesn’t actually say this in the above post, but his general argument expressed here and elsewhere seems to be that the premise “human value is complex” implies the conclusion: “therefore, it’s hard to get an AI to care about human value”. At least, he seems to think that this premise makes this conclusion significantly more likely.[1]
This seems to be his argument, as otherwise it would be unclear why Eliezer would bring up “complexity of values” in the first place. If the complexity of values had nothing to do with the difficulty of getting an AI to care about human values, then it is baffling why he would bring it up. Clearly, there must be some connection, and I think I am interpreting the connection made here correctly.
However, suppose you have a function that inputs a state of the world and outputs a number corresponding to how “good” the state of the world is. And further suppose that this function is transparent, legible, and can actually be used in practice to reliably determine the value of a given world state. In other words, you can give the function a world state, and it will spit out a number, which reliably informs you about the value of the world state. I claim that having such a function would simplify the AI alignment problem by reducing it from the hard problem of getting an AI to care about something complex (human value) to the easier problem of getting the AI to care about that particular function (which is simple, as the function can be hooked up to the AI directly).
In other words, if you have a solution to the value identification problem (i.e., you have the function that correctly and transparently rates the value of world states, as I just described), this almost completely sidesteps the problem that “human value is complex and therefore it’s difficult to get an AI to care about human value”. That’s because, if we have a function that directly encodes human value, and can be simply referenced or directly inputted into a computer, then all the AI needs to do is care about maximizing that function rather than maximizing a more complex referent of “human values”. The pointer to “this function” is clearly simple, and in any case, simpler than the idea of all of human value.
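To make the shape of this claim concrete, here is a minimal sketch (all names hypothetical, and the hard part deliberately stubbed out): if a trusted, legible scoring function already exists, the objective handed to the AI is just a short pointer to it, no matter how complex the values that the function internally encodes.

```python
# Minimal sketch, assuming a hypothetical value_of() already exists.
# The point: the *pointer* handed to the agent is one line, regardless of
# how algorithmically complex the values encoded inside value_of() are.

def value_of(world_state: dict) -> float:
    """Stand-in for an assumed solution to the value identification
    problem: maps a description of a world state to a goodness score."""
    raise NotImplementedError  # the artifact whose existence is being assumed


def choose_action(candidate_actions, predict_outcome):
    # The agent's objective is simply "maximize value_of(outcome)".
    return max(candidate_actions, key=lambda a: value_of(predict_outcome(a)))
```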
(This was supposed to narrowly reply to MIRI, by the way. If I were writing a more general point about how LLMs were evidence that alignment might be easy, I would not have focused so heavily on the historical questions about what people said, and I would have instead made simpler points about how GPT-4 seems to straightforwardly try to do what you want, when you tell it to do things.)
My main point was that I thought recent progress in LLMs had demonstrated progress at the problem of building such a function, and solving the value identification problem, and that this progress goes beyond the problem of getting an AI to understand or predict human values. For one thing, an AI that merely understands human values will not necessarily act as a transparent, legible function that will tell you the value of any outcome. By contrast, solving the value identification problem would give you such a function. This strongly distinguishes the two problems; they are not the same thing. I’d appreciate it if people stopped interpreting me as saying one thing when I clearly meant another, separate thing.
This interpretation is supported by the following quote on Arbital:
Complexity of value is a further idea above and beyond the orthogonality thesis which states that AIs don’t automatically do the right thing and that we can have, e.g., paperclip maximizers. Even if we accept that paperclip maximizers are possible, and simple and nonforced, this wouldn’t yet imply that it’s very difficult to make AIs that do the right thing. If the right thing is very simple to encode—if there are value optimizers that are scarcely more complex than diamond maximizers—then it might not be especially hard to build a nice AI even if not all AIs are nice. Complexity of Value is the further proposition that says, no, this is foreseeably quite hard—not because AIs have ‘natural’ anti-nice desires, but because niceness requires a lot of work to specify. [emphasis mine]
The post is about the complexity of what needs to be gotten inside the AI. If you had a perfect blackbox that exactly evaluated the thing-that-needs-to-be-inside-the-AI, this could possibly simplify some particular approaches to alignment, that would still in fact be too hard because nobody has a way of getting an AI to point at anything. But it would not change the complexity of what needs to be moved inside the AI, which is the narrow point that this post is about; and if you think that some larger thing is not correct, you should not confuse that with saying that the narrow point this post is about, is incorrect.
I claim that having such a function would simplify the AI alignment problem by reducing it from the hard problem of getting an AI to care about something complex (human value) to the easier problem of getting the AI to care about that particular function (which is simple, as the function can be hooked up to the AI directly).
One cannot hook up a function to an AI directly; it has to be physically instantiated somehow. For example, the function could be a human pressing a button; and then, any experimentation on the AI’s part to determine what “really” controls the button, will find that administering drugs to the human, or building a robot to seize control of the reward button, is “really” (from the AI’s perspective) the true meaning of the reward button after all! Perhaps you do not have this exact scenario in mind. So would you care to spell out what clever methodology you think invalidates what you take to be the larger point of this post—though of course it has no bearing on the actual point that this post makes?
The post is about the complexity of what needs to be gotten inside the AI. If you had a perfect blackbox that exactly evaluated the thing-that-needs-to-be-inside-the-AI, this could possibly simplify some particular approaches to alignment, that would still in fact be too hard because nobody has a way of getting an AI to point at anything.
I think it’s important to be able to make a narrow point about outer alignment without needing to defend a broader thesis about the entire alignment problem. To the extent my argument is “outer alignment seems easier than you portrayed it to be in this post, and elsewhere”, then your reply here that inner alignment is still hard doesn’t seem like it particularly rebuts my narrow point.
This post definitely seems to relevantly touch on the question of outer alignment, given the premise that we are explicitly specifying the conditions that the outcome pump needs to satisfy in order for the outcome pump to produce a safe outcome. Explicitly specifying a function that delineates safe from unsafe outcomes is essentially the prototypical case of an outer alignment problem. I was making a point about this aspect of the post, rather than a more general point about how all of alignment is easy.
(It’s possible that you’ll reply to me by saying “I never intended people to interpret me as saying anything about outer alignment in this post” despite the clear portrayal of an outer alignment problem in the post. Even so, I don’t think what you intended really matters that much here. I’m responding to what was clearly and explicitly written, rather than what was in your head at the time, which is unknowable to me.)
One cannot hook up a function to an AI directly; it has to be physically instantiated somehow. For example, the function could be a human pressing a button; and then, any experimentation on the AI’s part to determine what “really” controls the button, will find that administering drugs to the human, or building a robot to seize control of the reward button, is “really” (from the AI’s perspective) the true meaning of the reward button after all! Perhaps you do not have this exact scenario in mind.
It seems you’re assuming here that something like iterated amplification and distillation will simply fail, because the supervisor function that provides rewards to the model can be hacked or deceived. I think my response to this is that I just tend to be more optimistic than you are that we can end up doing safe supervision where the supervisor ~always remains in control, and they can evaluate the AI’s outputs accurately, more-or-less sidestepping the issues you mention here.
I think my reasons for believing this are pretty mundane: I’d point to the fact that evaluation tends to be easier than generation, and the fact that we can employ non-agentic tools to help evaluate, monitor, and control our models to provide them accurate rewards without getting hacked. I think your general pessimism about these things is fairly unwarranted, and my guess is that if you had made specific predictions about this question in the past, about what will happen prior to world-ending AI, these predictions would largely have fared worse than predictions from someone like Paul Christiano.
Your distinction between “outer alignment” and “inner alignment” is both ahistorical and unYudkowskian. It was invented years after this post was written, by someone who wasn’t me; and though I’ve sometimes used the terms on occasions where they seem to fit unambiguously, it’s not something I see as a clear ontological division, especially if you’re talking about questions like “If we own the following kind of blackbox, would alignment get any easier?” which on my view breaks that ontology. So I strongly reject your frame that this post was “clearly portraying an outer alignment problem” and can be criticized on those grounds by you; that is anachronistic.
You are now dragging in a very large number of further inferences about “what I meant”, and other implications that you think this post has, which are about Christiano-style proposals that were developed years after this post. I have disagreements with those, many disagreements. But it is definitely not what this post is about, one way or another, because this post predates Christiano being on the scene.
What this post is trying to illustrate is that if you try putting crisp physical predicates on reality, that won’t work to say what you want. This point is true! If you then want to take in a bunch of anachronistic ideas developed later, and claim (wrongly imo) that this renders irrelevant the simple truth of what this post actually literally says, that would be a separate conversation. But if you’re doing that, please distinguish the truth of what this post actually says versus how you think these other later clever ideas evade or bypass that truth.
What this post is trying to illustrate is that if you try putting crisp physical predicates on reality, that won’t work to say what you want. This point is true!
Matthew is not disputing this point, as far as I can tell.
Instead, he is trying to critique some version of[1] the “larger argument” (mentioned in the May 2024 update to this post) in which this point plays a role.
You have exhorted him several times to distinguish between that larger argument and the narrow point made by this post:
[...] and if you think that some larger thing is not correct, you should not confuse that with saying that the narrow point this post is about, is incorrect [...]
But if you’re doing that, please distinguish the truth of what this post actually says versus how you think these other later clever ideas evade or bypass that truth.
But it seems to me that he’s already doing this. He’s not alleging that this post is incorrect in isolation.
The only reason this discussion is happening in the comments of this post at all is the May 2024 update at the start of it, which Matthew used as a jumping-off point for saying “my critique of the ‘larger argument’ does not make the mistake referred to in the May 2024 update[2], but people keep saying it does[3], so I’ll try restating that critique again in the hopes it will be clearer this time.”
I say “some version of” to allow for a distinction between (a) the “larger argument” of Eliezer_2007’s which this post was meant to support in 2007, and (b) whatever version of the same “larger argument” was a standard MIRI position as of roughly 2016-2017.
As far as I can tell, Matthew is only interested in evaluating the 2016-2017 MIRI position, not the 2007 EY position (insofar as the latter is different, if in fact it is). When he cites older EY material, he does so as a means to an end – either as indirect evidence of later MIRI positions, or because it was itself cited in the later MIRI material which is his main topic.
Note that the current version of Matthew’s 2023 post includes multiple caveats that he’s not making the mistake referred to in the May 2024 update.
Note also that Matthew’s post only mentions this post in two relatively minor ways, first to clarify that he doesn’t make the mistake referred to in the update (unlike some “Non-MIRI people” who do make the mistake), and second to support an argument about whether “Yudkowsky and other MIRI people” believe that it could be sufficient to get a single human’s values into the AI, or whether something like CEV would be required instead.
I bring up the mentions of this post in Matthew’s post in order to clarify what role “is ‘The Hidden Complexity of Wishes’ correct in isolation, considered apart from anything outside it?” plays in Matthew’s critique – namely, none at all, IIUC.
(I realize that Matthew’s post has been edited over time, so I can only speak to the current version.)
To be fully explicit: I’m not claiming anything about whether or not the May 2024 update was about Matthew’s 2023 post (alone or in combination with anything else) or not. I’m just rephrasing what Matthew said in the first comment of this thread (which was also agnostic on the topic of whether the update referred to him).
Matthew is not disputing this point, as far as I can tell.
Instead, he is trying to critique some version of[1] the “larger argument” (mentioned in the May 2024 update to this post) in which this point plays a role.
I’ll confirm that I’m not saying this post’s exact thesis is false. This post seems to be largely a parable about a fictional device, rather than an explicit argument with premises and clear conclusions. I’m not saying the parable is wrong. Parables are rarely “wrong” in a strict sense, and I am not disputing this parable’s conclusion.
However, I am saying: this parable presumably played some role in the “larger” argument that MIRI has made in the past. What role did it play? Well, I think a good guess is that it portrayed the difficulty of precisely specifying what you want or intend, for example when explicitly designing a utility function. This problem was often alleged to be difficult because, when you want something complex, it’s difficult to perfectly delineate potential “good” scenarios and distinguish them from all potential “bad” scenarios. This is the problem I was analyzing in my original comment.
While the term “outer alignment” was not invented to describe this exact problem until much later, I was using that term purely as descriptive terminology for the problem this post clearly describes, rather than claiming that Eliezer in 2007 was deliberately describing something that he called “outer alignment” at the time. Because my usage of “outer alignment” was merely descriptive in this sense, I reject the idea that my comment was anachronistic.
And again: I am not claiming that this post is inaccurate in isolation. In both my above comment, and in my 2023 post, I merely cited this post as portraying an aspect of the problem that I was talking about, rather than saying something like “this particular post’s conclusion is wrong”. I think the fact that the post doesn’t really have a clear thesis in the first place means that it can’t be wrong in a strong sense at all. However, the post was definitely interpreted as explaining some part of why alignment is hard — for a long time by many people — and I was critiquing the particular application of the post to this argument, rather than the post itself in isolation.
Here’s an argument that alignment is difficult which uses complexity of value as a subpoint:
A1. If you try to manually specify what you want, you fail.
A2. Therefore, you want something algorithmically complex.
B1. When humanity makes an AGI, the AGI will have gotten values via some process; that process induces some probability distribution over what values the AGI ends up with.
B2. We want to affect the values-distribution, somehow, so that it ends up with our values.
B3. We don’t understand how to affect the values-distribution toward something specific.
B4. If we don’t affect the values-distribution toward something specific, then the values-distribution probably puts large penalties on absolute algorithmic complexity; any specific utility function with higher absolute algorithmic complexity will be less likely to be the one that the AGI ends up with.
C1. Because of A2 (our values are algorithmically complex) and B4 (a complex utility function is unlikely to show up in an AGI without us skillfully intervening), an AGI is unlikely to have our values without us skillfully intervening.
C2. Because of B3 (we don’t know how to skillfully intervene on an AGI’s values) and C1, an AGI is unlikely to have our values.
I think that you think that the argument under discussion is something like:
(same) A1. If you try to manually specify what you want, you fail.
(same) A2. Therefore, you want something algorithmically complex.
(same) B1. When humanity makes an AGI, the AGI will have gotten values via some process; that process induces some probability distribution over what values the AGI ends up with.
(same) B2. We want to affect the values-distribution, somehow, so that it ends up with our values.
B′3. The greater the complexity of our values, the harder it is to point at our values.
B′4. The harder it is to point at our values, the more work or difficulty is involved in B2.
C′1. By B′3 and B′4: the greater the complexity of our values, the more work or difficulty is involved in B2 (determining the AGI’s values).
C′2. Because of A2 (our values are algorithmically complex) and C′1, it would take a lot of work to make an AGI pursue our values.
These are different arguments, which make use of the complexity of values in different ways. You dispute B′3 on the grounds that it can be easy to point at complex values. B′3 isn’t used in the first argument though.
In the situation assumed by your first argument, AGI would be very unlikely to share our values even if our values were much simpler than they are.
Complexity makes things worse, yes, but the conclusion “AGI is unlikely to have our values” is already entailed by the other premises even if we drop the stuff about complexity.
Why: if we’re just sampling some function from a simplicity prior, we’re very unlikely to get any particular nontrivial function that we’ve decided to care about in advance of the sampling event. There are just too many possible functions, and probability mass has to get divided among them all.
In other words, if it takes N bits to specify human values, there are 2^N ways that a bitstring of the same length could be set, and we’re hoping to land on just one of those through luck alone. (And to land on a bitstring of this specific length in the first place, of course.) Unless N is very small, such a coincidence is extremely unlikely.
And N is not going to be that small; even in the sort of naive and overly simple “hand-crafted” value specifications which EY has critiqued in this post and elsewhere, a lot of details have to be specified. (E.g. some proposals refer to “humans” and so a full algorithmic description of them would require an account of what is and isn’t a human.)
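For a sense of scale, here is the arithmetic behind this point, under the simplifying assumption that the specification is drawn uniformly at random from bitstrings of the right length (a cruder stand-in for the simplicity-prior sampling described above):

```python
# Illustrative arithmetic only: probability of hitting one specific N-bit
# target by chance, assuming a uniform draw over N-bit strings.
for n_bits in (10, 100, 1000):
    p = 2.0 ** -n_bits
    print(f"N = {n_bits:>4} bits -> P(exact target) = {p:.3e}")

# N =   10 bits -> P(exact target) = 9.766e-04
# N =  100 bits -> P(exact target) = 7.889e-31
# N = 1000 bits -> P(exact target) = 9.333e-302
```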
One could devise a variant of this argument that doesn’t have this issue, by “relaxing the problem” so that we have some control, just not enough to pin down the sampled function exactly. And then the remaining freedom is filled randomly with a simplicity bias. This partial control might be enough to make a simple function likely, while not being able to make a more complex function likely. (Hmm, perhaps this is just your second argument, or a version of it.)
This kind of reasoning might be applicable in a world where its premises are true, but I don’t think its premises are true in our world.
In practice, we apparently have no trouble getting machines to compute very complex functions, including (as Matthew points out) specifications of human value whose robustness would have seemed like impossible magic back in 2007. The main difficulty, if there is one, is in “getting the function to play the role of the AGI values,” not in getting the AGI to compute the particular function we want in the first place.
The main difficulty, if there is one, is in “getting the function to play the role of the AGI values,” not in getting the AGI to compute the particular function we want in the first place.
Right, that is the problem (and IDK of anyone discussing this who says otherwise).
Another position would be that it’s probably easy to influence a few bits of the AI’s utility function, but not others. For example, it’s conceivable that, by doing capabilities research in different ways, you could increase the probability that the AGI is highly ambitious—e.g. tries to take over the whole lightcone, tries to acausally bargain, etc., rather than being more satisficy. (IDK how to do that, but plausibly it’s qualitatively easier than alignment.) Then you could claim that it’s half a bit more likely that you’ve made an FAI, given that an FAI would probably be ambitious. In this case, it does matter that the utility function is complex.
While the term “outer alignment” wasn’t coined until later to describe the exact issue that I’m talking about, I was using that term purely as a descriptive label for the problem this post clearly highlights, rather than implying that you were using or aware of the term in 2007.
Because I was simply using “outer alignment” in this descriptive sense, I reject the notion that my comment was anachronistic. I used that term as shorthand for the thing I was talking about, which is clearly and obviously portrayed by your post, that’s all.
To be very clear: the exact problem I am talking about is the inherent challenge of precisely defining what you want or intend, especially (though not exclusively) in the context of designing a utility function. This difficulty arises because, when the desired outcome is complex, it becomes nearly impossible to perfectly delineate between all potential ‘good’ scenarios and all possible ‘bad’ scenarios. This challenge has been a recurring theme in discussions of alignment, as it’s considered hard to capture every nuance of what you want in your specification without missing an edge case.
This problem is manifestly portrayed by your post, using the example of an outcome pump to illustrate. I was responding to this portrayal of the problem, and specifically saying that this specific narrow problem seems easier in light of LLMs, for particular reasons.
It is frankly frustrating to me that, from my perspective, you seem to have reliably missed the point of what I am trying to convey here.
I only brought up Christiano-style proposals because I thought you were changing the topic to a broader discussion, specifically to ask me what methodologies I had in mind when I made particular points. If you had not asked me “So would you care to spell out what clever methodology you think invalidates what you take to be the larger point of this post—though of course it has no bearing on the actual point that this post makes?” then I would not have mentioned those things. In any case, none of the things I said about Christiano-style proposals were intended to critique this post’s narrow point. I was responding to that particular part of your comment instead.
As far as the actual content of this post, I do not dispute its exact thesis. The post seems to be a parable, not a detailed argument with a clear conclusion. The parable seems interesting to me. It also doesn’t seem wrong, in any strict sense. However, I do think that some of the broader conclusions that many people have drawn from the parable seem false, in context. I was responding to the specific way that this post had been applied and interpreted in broader arguments about AI alignment.
My central thesis in regards to this post is simply: the post clearly portrays a specific problem that was later called the “outer alignment” problem by other people. This post portrays this problem as being difficult in a particular way. And I think this portrayal is misleading, even if the literal parable holds up in pure isolation.
I think it’s important to be able to make a narrow point about outer alignment without needing to defend a broader thesis about the entire alignment problem.
A mind that ever wishes to learn anything complicated, must learn to cultivate an interest in which particular exact argument steps are valid, apart from whether you yet agree or disagree with the final conclusion, because only in this way can you sort through all the arguments and finally sum them.
Algorithmic complexity is precisely analogous to difficulty-of-learning-to-predict, so saying “it’s not about learning to predict, it’s about algorithmic complexity” doesn’t make sense. One read of the original is: learning to respect common sense moral side constraints is tricky[1], but AI systems will learn how to do it in the end. I’d be happy to call this read correct, and it is consistent with the observation that today’s AI systems do respect common sense moral side constraints given straightforward requests, and that it took a few years to figure out how to do it. That read doesn’t really jibe with your commentary.
Your commentary seems to situate this post within a larger argument: teaching a system to “act” is different to teaching it to “predict” because in the former case a sufficiently capable learner’s behaviour can collapse to a pathological policy, whereas teaching a capable learner to predict does not risk such collapse. Thus “prediction” is distinguished from “algorithmic complexity”. Furthermore, commonsense moral side constraints are complex enough to risk such collapse when we train an “actor” but not a “predictor”. This seems confused.
First, all we need to turn a language model prediction into an action is a means of turning text into action, and we have many such means. So the distinction between text predictor and actor is suspect. We could consider an alternative knows/cares distinction: does a system act properly when properly incentivised (“knows”) vs does it act properly when presented with whatever context we are practically able to give it (“““cares”””)? Language models usually act properly given simple prompts, so in this sense they “care”. So rejecting evidence from language models does not seem well justified.
Second, there’s no need to claim that commonsense moral side constraints in particular are so hard that trying to develop AI systems that respect them leads to policy collapse. It need only be the case that one of the things we try to teach them to do leads to policy collapse. Teaching values is not particularly notable among all the things we might want AI systems to do; it certainly does not seem to be among the hardest. Focussing on values makes the argument unnecessarily weak.
Third, algorithmic complexity is measured with respect to a prior. The post invokes (but does not justify) an “English speaking evil genie” prior. I don’t think anyone thinks this is a serious prior for reasoning about advanced AI system behaviour. But the post is (according to your commentary, if not the post itself) making a quantitative point—values are sufficiently complex to induce policy collapse—while measuring this quantity using a nonsense prior. If the quantitative argument was indeed the original point, it is mystifying why a nonsense prior was chosen to make it, and also why no effort was made to justify the prior.
My question is: why, exactly, is the following statement true?
Second, there’s no need to claim that commonsense moral side constraints in particular are so hard that trying to develop AI systems that respect them leads to policy collapse. It need only be the case that one of the things we try to teach them to do leads to policy collapse.
Here’s a basic model of policy collapse: suppose there exist pathological policies of low prior probability (/high algorithmic complexity) such that they play the training game when it is strategically wise to do so, and when they get a good opportunity they defect in order to pursue some unknown aim.
Because they play the training game, a wide variety of training objectives will collapse to one of these policies if the system in training starts exploring policies of sufficiently high algorithmic complexity. So, according to this crude model, there’s a complexity bound: stay under it and you’re fine, go over it and you get pathological behaviour. Roughly, whatever desired behaviour requires the most algorithmically complex policy is the one that is most pertinent for assessing policy collapse risk (because that’s the one that contributes most of the algorithmic complexity, and so it gives your first-order estimate of whether or not you’re crossing the collapse threshold). So, which desired behaviour requires the most complex policy: is it, for example, respecting commonsense moral constraints, or is it inventing molecular nanotechnology?
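A toy rendering of that crude model, with every number an invented placeholder, just to make the structural point that the most complex required behaviour dominates the risk estimate:

```python
# Toy model only; the threshold and complexity figures below are invented
# placeholders, not estimates.
COLLAPSE_THRESHOLD_BITS = 10_000

required_policy_complexity = {          # bits; purely illustrative numbers
    "respect commonsense moral constraints": 3_000,
    "invent molecular nanotechnology": 12_000,
}

# Under the crude model, collapse risk is driven by the single most complex
# required behaviour, not by the sum and not by values in particular.
dominant = max(required_policy_complexity, key=required_policy_complexity.get)
over_threshold = required_policy_complexity[dominant] > COLLAPSE_THRESHOLD_BITS
print(f"{dominant!r} dominates the estimate; over threshold: {over_threshold}")
```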
Tangentially, the policy collapse theory does not predict outcomes that look anything like malicious compliance. It predicts that, if you’re in a position of power over the AI system, your mother is saved exactly as you want her to be. If you are not in such a position then your mother is not saved at all and you get a nanobot war instead or something. That is, if you do run afoul of policy collapse, it doesn’t matter if you want your system to pursue simple or complex goals, you’re up shit creek either way.
Alice: I want to make a bovine stem cell that can be cultured at scale in vats to make meat-like tissue. I could use directed evolution. But in my alternate universe, genome sequencing costs $1 billion per genome, so I can’t straightforwardly select cells to amplify based on whether their genome looks culturable. Currently the only method I have is to do end-to-end testing: I take a cell line, I try to culture a great big batch, and then see if the result is good quality edible tissue, and see if the cell line can last for a year without mutating beyond repair. This is very expensive, but more importantly, it doesn’t work. I can select for cells that make somewhat more meat-like tissue; but when I do that, I also heavily select for other very bad traits, such as forming cancer-like growths. I estimate that it takes on the order of 500 alleles optimized relative to the wild type to get a cell that can be used for high-quality, culturable-at-scale edible tissue. Because that’s a large complex change, it won’t just happen by accident; something about our process for making the cells has to put those bits there.
Bob: In a recent paper, a polygenic score for culturable meat is given. Since we now have the relevant polygenic score, we actually have a short handle for the target: namely, a pointer to an implementation of this polygenic score as a computer program.
Alice: That seems of limited relevance. It’s definitely relevant in that, if I grant the premise that this is actually the right polygenic score (which I don’t), we now know what exactly we would put in the genome if we could. That’s one part of the problem solved, but it’s not the part I was talking about. I’m talking about the part where I don’t know how to steer the genome precisely enough to get anywhere complex.
Bob: You’ve been bringing up the complexity of the genomic target. I’m saying that actually the target isn’t that complex, because it’s just a function call to the PGS.
Alice: Ok, yes, we’ve greatly decreased the relative algorithmic complexity of the right genome, in some sense. It is indeed the case that if I ran a computer program randomly sampled from strings I could type into a python file, it would be far more likely to output the right genome if I have the PGS file on my computer compared to if I don’t. True. But that’s not very relevant because that’s not the process we’re discussing. We’re discussing the process that creates a cell with its genome, not the process that randomly samples computer programs weighted by [algorithmic complexity in the python language on my computer]. The problem is that I don’t know how to interface with the cell-creation process in a way that lets me push bits of selection into it. Instead, the cell-creation process just mostly does its own thing. Even if I do end-to-end phenotype selection, I’m not really steering the core process of cell-genome-selection.
Bob: I understand, but you were saying that the complexity of the target makes the whole task harder. Now that we have the PGS, the target is not very complex; we just point at the PGS.
Alice: The point about the complexity is to say that cells growing in my lab won’t just spontaneously start having the 500 alleles I want. I’d have to do something to them—I’d have to know how to pump selection power into them. It’s some specific technique I need to have but don’t have, for dealing with cells. It doesn’t matter that the random-program complexity has decreased, because we’re not talking about random programs, we’re talking about cell-genome-selection. Cell-genome-selection is the process that I don’t know how to consistently pump bits into, and it’s the process that doesn’t by chance get the 500 alleles. It’s the process against which I’m measuring complexity.
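For what it’s worth, Alice’s distinction can be put in schematic code (every name here is hypothetical): having the PGS makes the description of the target short, but the process that actually determines a cell’s genome exposes no argument through which that description can be plugged in.

```python
# Hypothetical sketch of Alice's distinction.

def target_genome(pgs_score, candidate_genomes):
    # With the PGS in hand, the *description* of the target is one line:
    # "whichever genome the polygenic score rates highest".
    return max(candidate_genomes, key=pgs_score)


def next_generation(cell_genome, mutate):
    # But the process that actually sets genomes runs by its own dynamics;
    # nothing here accepts the PGS (or any target) as an input.
    return mutate(cell_genome)
```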
This analogy is valid in the case where we have absolutely no idea how to use a system’s representations or “knowledge” to direct an AI’s behavior. That is the world Yudkowsky wrote the sequences in. It is not the world we currently live in. There are several, perhaps many, plausible plans to direct a competent AGI’s actions and its “thoughts” and “values” toward either its own or a subsystem’s “understanding” of human values. See Goals selected from learned knowledge: an alternative to RL alignment for some of those plans. Critiques need to go beyond the old “we have no idea” argument and actually address the ideas we have.
That’s incorrect, but more importantly it’s off topic. The topic is “what does the complexity of value have to do with the difficulty of alignment”. Barnett AFAIK in this comment is not saying (though he might agree, and maybe he should be taken as saying so implicitly or something) “we have lots of ideas for getting an AI to care about some given values”. Rather he’s saying “if you have a simple pointer to our values, then the complexity of values no longer implies anything about the difficulty of alignment because values effectively aren’t complex anymore”.
I’m not sure you could be as confident as Yudkowsky was at the time, but yes: in the epistemic state of 2008 there was a serious probability that human values were so complicated, and that simple techniques would make AIs goodhart so completely on the intended task, that controlling smart AI was essentially hopeless.
We now know that a lot of the old LessWrong lore on how complicated human values and wishes are, at least in the code section, is either incorrect or irrelevant, and we also know that the standard LW story of how humans came to dominate other animals is incorrect to a degree that impacts AI alignment.
I have my own comments on the ideas below, but people really should try to update on the evidence we gained from LLMs: we learned a lot about ourselves and about LLMs in the process, and there’s a lot of evidence that generalizes from LLMs to future AGI/ASI. IMO LW updated way, way too slowly on AI safety.
a) I think at least part of what’s gone on is that Eliezer has been misunderstood and has been facing the same actually quite dumb arguments a lot, and he is now (IMO) too quick to round new arguments off to something he’s got cached arguments for. (I’m not sure whether this is exactly what went on in this case, but it seems plausible without carefully rereading everything)
b) I do think when Eliezer wrote this post, there were literally a bunch of people making quite dumb arguments that were literally “the solution to AI ethics/alignment is [my preferred elegant system of ethics] / [just have it track smiling faces] / [other explicit hardcoded solutions that were genuinely impractical]”
I think I personally also did not get what you were trying to say for a while, so I don’t think the problem here is just Eliezer (although it might be me making a similar mistake to what I hypothesize Eliezer to have made, for reasons that are correlated with him)
I do generally think a criticism I have of Eliezer is that he has spent too much time comparatively focused on the dumber 3⁄4 of arguments, instead of engaging directly with top critics, who are often actually making more subtle points (and being a bit too slow to update that this is what’s going on)
Wish there was a system where people could pay money to bid up what they believed were the “top arguments” that they wanted me to respond to. Possibly a system where I collect the money for writing a diligent response (albeit note that in this case I’d weigh the time-cost of responding as well as the bid for a response); but even aside from that, some way of canonizing what “people who care enough to spend money on that” think are the Super Best Arguments That I Should Definitely Respond To. As it stands, whatever I respond to, there’s somebody else to say that it wasn’t the real argument, and this mainly incentivizes me to sigh and go on responding to whatever I happen to care about more.
(I also wish this system had been in place 24 years ago so you could scroll back and check out the wacky shit that used to be on that system earlier, but too late now.)
I do think such a system would be really valuable, and is the sort of the thing the LW team should try to build. (I’m mostly not going to respond to this idea right now but I’ve filed it away as something to revisit more seriously with Lightcone. Seems straightforwardly good)
But it feels slightly orthogonal to what I was trying to say. Let me try again.
(this is now officially a tangent from the original point, but, feels important to me)
It would be good if the world could (deservedly) trust, that the best x-risk thinkers have a good group epistemic process for resolving disagreements.
At least two steps that seem helpful for that process are:
Articulating clear lists of the best arguments, such that people can prioritize refuting them (or updating on them).
But, before that, there is a messier process of “people articulating half formed versions of those arguments, struggling to communicate through different ontologies, being slightly confused.” And there is some back-and-forth process typically needed to make progress.
It is that “before” step where it feels like things seem to be going wrong, to me. (I haven’t re-read Matthew’s post or your response comment from a year ago in enough detail to have a clear sense of what, if anything, went wrong. But to illustrate the ontology: I think that instance was roughly in the liminal space between the two steps)
Half-formed confused arguments in different ontologies are probably “wrong”, but that isn’t necessarily because they are completely stupid, it can be because they are half-formed. And maybe the final version of the argument is good, or maybe not, but it’s at least a less stupid version of that argument. And if Alice rejects a confused, stupid argument in a loud way, without understanding the generator that Bob was trying to pursue, Bob’s often rightly annoyed that Alice didn’t really hear them and didn’t really engage.
Dealing with confused half-formed arguments is expensive, and I’m not sure it’s worth people’s time, especially given that confused half-formed arguments are hard to distinguish from “just wrong” ones.
But, I think we can reduce wasted-motion on the margin.
A hopefully cheap-enough TAP, which might help if more people adopted it, might be something like:
<TAP> When responding to a wrong argument (which might be completely stupid, or might be a half-formed thing going in an eventually interesting direction)
<ACTION> Preface response with something like: “I think you’re saying X. Assuming so, I think this is wrong because [insert argument].” End the argument with “If this seemed to be missing the point, can you try saying your thing in different words, or clarify?”
(if it feels too expensive to articulate what X is, instead one could start with something more like “It looks at first glance like this is wrong because [insert argument]” and then still end with the “check if missing the point?” closing note)
I think more-of-that-on-the-margin from a bunch of people would save a lot of time spent in aggro-y escalation spirals.
I’d have to think more about what to do for that case, but, the sort of thing I’m imagining is a bit more scaffolding that builds towards “having a well indexed list of the best arguments.” Maybe briefly noting early on “This essay is arguing for [this particular item in List of Lethalities]” or “This argument is adding a new item to List of Lethalities” (and then maybe update that post, since it’s nice to have a comprehensive list).
This doesn’t feel like a complete solution, but, the sort of things I’d be looking for are cheap things you can add to posts that help bootstrap towards a clearer-list-of-the-best-arguments existing.
I would suggest formulating this like a literal attention economy.
1. You set a price for your attention (probably like $1): the price at which, even if the post is a waste of time, the money makes it worth it.
2. “Recommenders” can recommend content to you by paying the price.
3. If the content was worth your time, you pay the recommender the $1 back plus a couple cents.
The idea is that the recommenders would get good at predicting what posts you’d pay them for. And since you aren’t a causal decision theorist they know you won’t scam them. In particular, on average you should be losing money (but in exchange you get good content).
This doesn’t necessarily require new software. Just tell people to send PayPals with a link to the content.
With custom software, theoretically there could exist a secondary market for “shares” in the payout from step 3 to make things more efficient. That way the best recommenders could sell their shares and then use that money to recommend more content before you payout.
If the system is bad at recommending content, at least you get paid!
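A minimal sketch of that settlement flow, with arbitrary placeholder numbers for the price and bonus, just to make the incentive explicit:

```python
# Sketch of the proposed flow; the $1 price and 5-cent bonus are placeholders.
ATTENTION_PRICE = 1.00   # the price you set on your attention (step 1)
BONUS = 0.05             # the "couple cents" extra for a good recommendation

def settle(worth_your_time: bool):
    """Returns (your_net, recommender_net) for one recommendation."""
    if worth_your_time:
        return (-BONUS, +BONUS)          # you refund the $1 plus the bonus
    return (+ATTENTION_PRICE, -ATTENTION_PRICE)  # you keep the $1

# A recommender only profits if hit_rate * BONUS > (1 - hit_rate) * PRICE,
# i.e. if they almost always send you content you'd pay them back for.
print(settle(True), settle(False))   # (-0.05, 0.05) (1.0, -1.0)
```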
I think this is worth a new top-level post. I think the discussion on your “Evaluating the historical value misspecification argument” post was a high-water mark for resolving the disagreement on alignment difficulty between old-schoolers and new prosaic alignment thinkers. But that discussion didn’t make it past the point you raise here: if we can identify human values, shouldn’t that help (a lot) in making an AGI that pursues those values?
One key factor is whether the understanding of human values is available while the AGI is still dumb enough to remain in your control.
My main point was that I thought recent progress in LLMs had demonstrated progress at the problem of building such a function, and solving the value identification problem, and that this progress goes beyond the problem of getting an AI to understand or predict human values.
I want to push back on this a bit. I suspect that “demonstrated progress” is doing a lot of work here, and smuggling in an assumption that current trends with LLMs will continue and can be extrapolated straightforwardly.
It’s true that LLMs have some nice properties for encapsulating fuzzy and complex concepts like human values, but I wouldn’t actually want to use any current LLMs as a referent or in a rating system like the one you propose, for obvious reasons.
Maybe future LLMs will retain all the nice properties of current LLMs while also solving various issues with jailbreaking, hallucination, robustness, reasoning about edge cases, etc. but declaring victory already (even on a particular and narrow point about value identification) seems premature to me.
Separately, I think some of the nice properties you list don’t actually buy you that much in practice, even if LLM progress does continue straightforwardly.
A lot of the properties you list follow from the fact that LLMs are pure functions of their input (at least with a temperature of 0).
Functional purity is a very nice property, and traditional software that encapsulates complex logic in pure functions is often easier to reason about, debug, and formally verify vs. software that uses lots of global mutable state and / or interacts with the outside world through a complex I/O interface. But when the function in question is 100s of GB of opaque floats, I think it’s a bit of a stretch to call it transparent and legible just because it can be evaluated outside of the IO monad.
Aside from purity, I don’t think your point about an LLM being a “particular function” that can be “hooked up to the AI directly” is doing much work - input() (i.e. asking actual humans) seems just as direct and particular as llm(). If you want your AI system to actually do something in the messy real world, you have to break down the nice theoretical boundary and guarantees you get from functional purity somewhere.
More concretely, given your proposed rating system, simply replace any LLM calls with a call that just asks actual humans to rate a world state given some description, and it seems like you get something that is at least as legible and transparent (in an informal sense) as the LLM version. The main advantage with using an LLM here is that you could potentially get lots of such ratings cheaply and quickly. Replay-ability, determinism and the relative ease of interpretability vs. doing neuroscience on the human raters are also nice, but none of these properties are very reassuring or helpful if the ratings themselves aren’t all that good. (Also, if you’re doing something with such low sample efficiency that you can’t just use actual humans, you’re probably on the wrong track anyway.)
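To make the comparison concrete, here is a schematic sketch (the llm_score call is a hypothetical stand-in, not any real API): both raters are equally “particular” functions with the same shape, and neither becomes transparent merely by having that shape.

```python
def llm_score(description: str) -> float:
    """Hypothetical stand-in for a deterministic (temperature-0) LLM-based rater."""
    raise NotImplementedError


def rate_with_llm(world_state_description: str) -> float:
    return llm_score(world_state_description)       # pure, deterministic call


def rate_with_human(world_state_description: str) -> float:
    print(world_state_description)
    return float(input("How good is this world state (0-10)? "))

# Swapping one for the other changes cost, speed, determinism, and
# replay-ability, but not how "direct" or "particular" the pointer is.
```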
I can’t tell whether this update to the post is addressed towards me. However, it seems possible that it is addressed towards me, since I wrote a post last year criticizing some of the ideas behind this post. In either case, whether it’s addressed towards me or not, I’d like to reply to the update.
For the record, I want to definitively clarify that I never interpreted MIRI as arguing that it would be difficult to get a machine superintelligence to understand or predict human values. That was never my thesis, and I spent considerable effort clarifying the fact that this was not my thesis in my post, stating multiple times that I never thought MIRI predicted it would be hard to get an AI to understand human values.
My thesis instead was about a subtly different thing, which is easy to misinterpret if you aren’t reading carefully. I was talking about something which Eliezer called the “value identification problem”, and which had been referenced on Arbital, and in other essays by MIRI, including under a different name than the “value identification problem”. These other names included the “value specification” problem and the problem of “outer alignment” (at least in narrow contexts).
I didn’t expect as much confusion at the time when I wrote the post, because I thought clarifying what I meant and distinguishing it from other things that I did not mean multiple times would be sufficient to prevent rampant misinterpretation by so many people. However, evidently, such clarifications were insufficient, and I should have instead gone overboard in my precision and clarity. I think if I re-wrote the post now, I would try to provide like 5 different independent examples demonstrating how I was talking about a different thing than the problem of getting an AI to “understand” or “predict” human values.
At the very least, I can try now to give a bit more clarification about what I meant, just in case doing this one more time causes the concept to “click” in someone’s mind:
Eliezer doesn’t actually say this in the above post, but his general argument expressed here and elsewhere seems to be that the premise “human value is complex” implies the conclusion: “therefore, it’s hard to get an AI to care about human value”. At least, he seems to think that this premise makes this conclusion significantly more likely.[1]
This seems to be his argument, as otherwise it would be unclear why Eliezer would bring up “complexity of values” in the first place. If the complexity of values had nothing to do with the difficulty of getting an AI to care about human values, then it is baffling why he would bring it up. Clearly, there must be some connection, and I think I am interpreting the connection made here correctly.
However, suppose you have a function that inputs a state of the world and outputs a number corresponding to how “good” the state of the world is. And further suppose that this function is transparent, legible, and can actually be used in practice to reliably determine the value of a given world state. In other words, you can give the function a world state, and it will spit out a number, which reliably informs you about the value of the world state. I claim that having such a function would simplify the AI alignment problem by reducing it from the hard problem of getting an AI to care about something complex (human value) to the easier problem of getting the AI to care about that particular function (which is simple, as the function can be hooked up to the AI directly).
In other words, if you have a solution to the value identification problem (i.e., you have the function that correctly and transparently rates the value of world states, as I just described), this almost completely sidesteps the problem that “human value is complex and therefore it’s difficult to get an AI to care about human value”. That’s because, if we have a function that directly encodes human value, and can be simply referenced or directly inputted into a computer, then all the AI needs to do is care about maximizing that function rather than maximizing a more complex referent of “human values”. The pointer to “this function” is clearly simple, and in any case, simpler than the idea of all of human value.
(This was supposed to narrowly reply to MIRI, by the way. If I were writing a more general point about how LLMs were evidence that alignment might be easy, I would not have focused so heavily on the historical questions about what people said, and I would have instead made simpler points about how GPT-4 seems to straightforwardly try do what you want, when you tell it to do things.)
My main point was that I thought recent progress in LLMs had demonstrated progress at the problem of building such a function, and solving the value identification problem, and that this progress goes beyond the problem of getting an AI to understand or predict human values. For one thing, an AI that merely understands human values will not necessarily act as a transparent, legible function that will tell you the value of any outcome. However, by contrast, solving the value identification problem would give you such a function. This strongly distinguishes the two problems. These problems are not the same thing. I’d appreciate if people stopped interpreting me as saying one thing when I clearly meant another, separate thing.
This interpretation is supported by the following quote, on Arbital,
The post is about the complexity of what needs to be gotten inside the AI. If you had a perfect blackbox that exactly evaluated the thing-that-needs-to-be-inside-the-AI, this could possibly simplify some particular approaches to alignment, that would still in fact be too hard because nobody has a way of getting an AI to point at anything. But it would not change the complexity of what needs to be moved inside the AI, which is the narrow point that this post is about; and if you think that some larger thing is not correct, you should not confuse that with saying that the narrow point this post is about, is incorrect.
One cannot hook up a function to an AI directly; it has to be physically instantiated somehow. For example, the function could be a human pressing a button; and then, any experimentation on the AI’s part to determine what “really” controls the button, will find that administering drugs to the human, or building a robot to seize control of the reward button, is “really” (from the AI’s perspective) the true meaning of the reward button after all! Perhaps you do not have this exact scenario in mind. So would you care to spell out what clever methodology you think invalidates what you take to be the larger point of this post—though of course it has no bearing on the actual point that this post makes?
I think it’s important to be able to make a narrow point about outer alignment without needing to defend a broader thesis about the entire alignment problem. To the extent my argument is “outer alignment seems easier than you portrayed it to be in this post, and elsewhere”, then your reply here that inner alignment is still hard doesn’t seem like it particularly rebuts my narrow point.
This post definitely seems to relevantly touch on the question of outer alignment, given the premise that we are explicitly specifying the conditions that the outcome pump needs to satisfy in order for the outcome pump to produce a safe outcome. Explicitly specifying a function that delineates safe from unsafe outcomes is essentially the prototypical case of an outer alignment problem. I was making a point about this aspect of the post, rather than a more general point about how all of alignment is easy.
(It’s possible that you’ll reply to me by saying “I never intended people to interpret me as saying anything about outer alignment in this post” despite the clear portrayal of an outer alignment problem in the post. Even so, I don’t think what you intended really matters that much here. I’m responding to what was clearly and explicitly written, rather than what was in your head at the time, which is unknowable to me.)
It seems you’re assuming here that something like iterated amplification and distillation will simply fail, because the supervisor function that provides rewards to the model can be hacked or deceived. I think my response to this is that I just tend to be more optimistic than you are that we can end up doing safe supervision where the supervisor ~always remains in control, and they can evaluate the AI’s outputs accurately, more-or-less sidestepping the issues you mention here.
I think my reasons for believing this are pretty mundane: I’d point to the fact that evaluation tends to be easier than generation, and the fact that we can employ non-agentic tools to help evaluate, monitor, and control our models to provide them accurate rewards without getting hacked. I think your general pessimism about these things is fairly unwarranted, and my guess is that if you had made specific predictions about this question in the past, about what will happen prior to world-ending AI, these predictions would largely have fared worse than predictions from someone like Paul Christiano.
Your distinction between “outer alignment” and “inner alignment” is both ahistorical and unYudkowskian. It was invented years after this post was written, by someone who wasn’t me; and though I’ve sometimes used the terms on occasions where they seem to fit unambiguously, it’s not something I see as a clear ontological division, especially if you’re talking about questions like “If we own the following kind of blackbox, would alignment get any easier?” which on my view breaks that ontology. So I strongly reject your frame that this post was “clearly portraying an outer alignment problem” and can be criticized on those grounds by you; that is anachronistic.
You are now dragging in a very large number of further inferences about “what I meant”, and other implications that you think this post has, which are about Christiano-style proposals that were developed years after this post. I have disagreements with those, many disagreements. But it is definitely not what this post is about, one way or another, because this post predates Christiano being on the scene.
What this post is trying to illustrate is that if you try putting crisp physical predicates on reality, that won’t work to say what you want. This point is true! If you then want to take in a bunch of anachronistic ideas developed later, and claim (wrongly imo) that this renders irrelevant the simple truth of what this post actually literally says, that would be a separate conversation. But if you’re doing that, please distinguish the truth of what this post actually says versus how you think these other later clever ideas evade or bypass that truth.
Matthew is not disputing this point, as far as I can tell.
Instead, he is trying to critique some version of[1] the “larger argument” (mentioned in the May 2024 update to this post) in which this point plays a role.
You have exhorted him several times to distinguish between that larger argument and the narrow point made by this post:
But it seems to me that he’s already doing this. He’s not alleging that this post is incorrect in isolation.
The only reason this discussion is happening in the comments of this post at all is the May 2024 update at the start of it, which Matthew used as a jumping-off point for saying “my critique of the ‘larger argument’ does not make the mistake referred to in the May 2024 update[2], but people keep saying it does[3], so I’ll try restating that critique again in the hopes it will be clearer this time.”
I say “some version of” to allow for a distinction between (a) the “larger argument” of Eliezer_2007’s which this post was meant to support in 2007, and (b) whatever version of the same “larger argument” was a standard MIRI position as of roughly 2016-2017.
As far as I can tell, Matthew is only interested in evaluating the 2016-2017 MIRI position, not the 2007 EY position (insofar as the latter is different, if in fact it is). When he cites older EY material, he does so as a means to an end – either as indirect evidence of later MIRI positions, or because it was itself cited in the later MIRI material which is his main topic.
Note that the current version of Matthew’s 2023 post includes multiple caveats that he’s not making the mistake referred to in the May 2024 update.
Note also that Matthew’s post only mentions this post in two relatively minor ways, first to clarify that he doesn’t make the mistake referred to in the update (unlike some “Non-MIRI people” who do make the mistake), and second to support an argument about whether “Yudkowsky and other MIRI people” believe that it could be sufficient to get a single human’s values into the AI, or whether something like CEV would be required instead.
I bring up the mentions of this post in Matthew’s post in order to clarify what role “is ‘The Hidden Complexity of Wishes’ correct in isolation, considered apart from anything outside it?” plays in Matthew’s critique – namely, none at all, IIUC.
(I realize that Matthew’s post has been edited over time, so I can only speak to the current version.)
To be fully explicit: I’m not claiming anything about whether or not the May 2024 update was about Matthew’s 2023 post (alone or in combination with anything else). I’m just rephrasing what Matthew said in the first comment of this thread (which was also agnostic on the topic of whether the update referred to him).
I’ll confirm that I’m not saying this post’s exact thesis is false. This post seems to be largely a parable about a fictional device, rather than an explicit argument with premises and clear conclusions. I’m not saying the parable is wrong. Parables are rarely “wrong” in a strict sense, and I am not disputing this parable’s conclusion.
However, I am saying: this parable presumably played some role in the “larger” argument that MIRI has made in the past. What role did it play? Well, I think a good guess is that it portrayed the difficulty of precisely specifying what you want or intend, for example when explicitly designing a utility function. This problem was often alleged to be difficult because, when you want something complex, it’s difficult to perfectly delineate potential “good” scenarios and distinguish them from all potential “bad” scenarios. This is the problem I was analyzing in my original comment.
While the term “outer alignment” was not invented to describe this exact problem until much later, I was using that term purely as descriptive terminology for the problem this post clearly describes, rather than claiming that Eliezer in 2007 was deliberately describing something that he called “outer alignment” at the time. Because my usage of “outer alignment” was merely descriptive in this sense, I reject the idea that my comment was anachronistic.
And again: I am not claiming that this post is inaccurate in isolation. In both my above comment, and in my 2023 post, I merely cited this post as portraying an aspect of the problem that I was talking about, rather than saying something like “this particular post’s conclusion is wrong”. I think the fact that the post doesn’t really have a clear thesis in the first place means that it can’t be wrong in a strong sense at all. However, the post was definitely interpreted as explaining some part of why alignment is hard — for a long time by many people — and I was critiquing the particular application of the post to this argument, rather than the post itself in isolation.
Here’s an argument that alignment is difficult which uses complexity of value as a subpoint:
A1. If you try to manually specify what you want, you fail.
A2. Therefore, you want something algorithmically complex.
B1. When humanity makes an AGI, the AGI will have gotten values via some process; that process induces some probability distribution over what values the AGI ends up with.
B2. We want to affect the values-distribution, somehow, so that it ends up with our values.
B3. We don’t understand how to affect the values-distribution toward something specific.
B4. If we don’t affect the values-distribution toward something specific, then the values-distribution probably puts large penalties on absolute algorithmic complexity; any specific utility function with higher absolute algorithmic complexity will be less likely to be the one that the AGI ends up with.
C1. Because of A2 (our values are algorithmically complex) and B4 (a complex utility function is unlikely to show up in an AGI without us skillfully intervening), an AGI is unlikely to have our values without us skillfully intervening.
C2. Because of B3 (we don’t know how to skillfully intervene on an AGI’s values) and C1, an AGI is unlikely to have our values.
I think that you think that the argument under discussion is something like:
(same) A1. If you try to manually specify what you want, you fail.
(same) A2. Therefore, you want something algorithmically complex.
(same) B1. When humanity makes an AGI, the AGI will have gotten values via some process; that process induces some probability distribution over what values the AGI ends up with.
(same) B2. We want to affect the values-distribution, somehow, so that it ends up with our values.
B′3. The greater the complexity of our values, the harder it is to point at our values.
B′4. The harder it is to point at our values, the more work or difficulty is involved in B2.
C′1. By B′3 and B′4: the greater the complexity of our values, the more work or difficulty is involved in B2 (determining the AGI’s values).
C′2. Because of A2 (our values are algorithmically complex) and C′1, it would take a lot of work to make an AGI pursue our values.
These are different arguments, which make use of the complexity of values in different ways. You dispute B′3 on the grounds that it can be easy to point at complex values. B′3 isn’t used in the first argument though.
In the situation assumed by your first argument, AGI would be very unlikely to share our values even if our values were much simpler than they are.
Complexity makes things worse, yes, but the conclusion “AGI is unlikely to have our values” is already entailed by the other premises even if we drop the stuff about complexity.
Why: if we’re just sampling some function from a simplicity prior, we’re very unlikely to get any particular nontrivial function that we’ve decided to care about in advance of the sampling event. There are just too many possible functions, and probability mass has to get divided among them all.
In other words, if it takes N bits to specify human values, there are 2^N ways that a bitstring of the same length could be set, and we’re hoping to land on just one of those through luck alone. (And to land on a bitstring of this specific length in the first place, of course.) Unless N is very small, such a coincidence is extremely unlikely.
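As a quick back-of-the-envelope illustration (N below is an arbitrary placeholder, not an estimate of the true complexity of human values):

```python
import math

N = 1000  # placeholder bit-count; the argument only needs N to be "not very small"

# Probability that a uniformly random N-bit string happens to be the one target string.
log10_p = -N * math.log10(2)
print(f"P(exact match by luck) = 2^-{N} ≈ 10^{log10_p:.0f}")  # ≈ 10^-301 for N = 1000
```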
And N is not going to be that small; even in the sort of naive and overly simple “hand-crafted” value specifications which EY has critiqued in this post and elsewhere, a lot of details have to be specified. (E.g. some proposals refer to “humans” and so a full algorithmic description of them would require an account of what is and isn’t a human.)
One could devise a variant of this argument that doesn’t have this issue, by “relaxing the problem” so that we have some control, just not enough to pin down the sampled function exactly. And then the remaining freedom is filled randomly with a simplicity bias. This partial control might be enough to make a simple function likely, while not being able to make a more complex function likely. (Hmm, perhaps this is just your second argument, or a version of it.)
This kind of reasoning might be applicable in a world where its premises are true, but I don’t think its premises are true in our world.
In practice, we apparently have no trouble getting machines to compute very complex functions, including (as Matthew points out) specifications of human value whose robustness would have seemed like impossible magic back in 2007. The main difficulty, if there is one, is in “getting the function to play the role of the AGI values,” not in getting the AGI to compute the particular function we want in the first place.
Right, that is the problem (and IDK of anyone discussing this who says otherwise).
Another position would be that it’s probably easy to influence a few bits of the AI’s utility function, but not others. For example, it’s conceivable that, by doing capabilities research in different ways, you could increase the probability that the AGI is highly ambitious—e.g. tries to take over the whole lightcone, tries to acausally bargain, etc., rather than being more satisficy. (IDK how to do that, but plausibly it’s qualitatively easier than alignment.) Then you could claim that it’s half a bit more likely that you’ve made an FAI, given that an FAI would probably be ambitious. In this case, it does matter that the utility function is complex.
While the term “outer alignment” wasn’t coined until later to describe the exact issue that I’m talking about, I was using that term purely as a descriptive label for the problem this post clearly highlights, rather than implying that you were using or aware of the term in 2007.
Because I was simply using “outer alignment” in this descriptive sense, I reject the notion that my comment was anachronistic. I used that term as shorthand for the thing I was talking about, which is clearly and obviously portrayed by your post, that’s all.
To be very clear: the exact problem I am talking about is the inherent challenge of precisely defining what you want or intend, especially (though not exclusively) in the context of designing a utility function. This difficulty arises because, when the desired outcome is complex, it becomes nearly impossible to perfectly delineate between all potential ‘good’ scenarios and all possible ‘bad’ scenarios. This challenge has been a recurring theme in discussions of alignment, as it’s considered hard to capture every nuance of what you want in your specification without missing an edge case.
This problem is manifestly portrayed by your post, using the example of an outcome pump to illustrate. I was responding to this portrayal of the problem, and specifically saying that this specific narrow problem seems easier in light of LLMs, for particular reasons.
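As a toy rendering of that delineation problem, in the spirit of the outcome-pump parable (the field name below is invented for illustration):

```python
def wish_satisfied(outcome: dict) -> bool:
    # Intended: "my mother is rescued from the burning building, alive and unharmed."
    # Specified: "my mother ends up outside the building."
    return outcome["mother_inside_building"] is False

# The crisp predicate also accepts outcomes nobody wanted, e.g. the building exploding
# and the mother being thrown clear. Delineating the intended "good" outcomes from every
# perverse way of technically satisfying the predicate is the hard part being described.
```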
It is frankly frustrating to me that, from my perspective, you seem to have reliably missed the point of what I am trying to convey here.
I only brought up Christiano-style proposals because I thought you were changing the topic to a broader discussion, specifically to ask me what methodologies I had in mind when I made particular points. If you had not asked me “So would you care to spell out what clever methodology you think invalidates what you take to be the larger point of this post—though of course it has no bearing on the actual point that this post makes?” then I would not have mentioned those things. In any case, none of the things I said about Christiano-style proposals were intended to critique this post’s narrow point. I was responding to that particular part of your comment instead.
As far as the actual content of this post, I do not dispute its exact thesis. The post seems to be a parable, not a detailed argument with a clear conclusion. The parable seems interesting to me. It also doesn’t seem wrong, in any strict sense. However, I do think that some of the broader conclusions that many people have drawn from the parable seem false, in context. I was responding to the specific way that this post had been applied and interpreted in broader arguments about AI alignment.
My central thesis in regards to this post is simply: the post clearly portrays a specific problem that was later called the “outer alignment” problem by other people. This post portrays this problem as being difficult in a particular way. And I think this portrayal is misleading, even if the literal parable holds up in pure isolation.
Indeed. For it is written:
Algorithmic complexity is precisely analogous to difficulty-of-learning-to-predict, so saying “it’s not about learning to predict, it’s about algorithmic complexity” doesn’t make sense. One read of the original is: learning to respect common sense moral side constraints is tricky[1], but AI systems will learn how to do it in the end. I’d be happy to call this read correct, and it is consistent with the observation that today’s AI systems do respect common sense moral side constraints given straightforward requests, and that it took a few years to figure out how to do it. That read doesn’t really jibe with your commentary.
Your commentary seems to situate this post within a larger argument: teaching a system to “act” is different to teaching it to “predict” because in the former case a sufficiently capable learner’s behaviour can collapse to a pathological policy, whereas teaching a capable learner to predict does not risk such collapse. Thus “prediction” is distinguished from “algorithmic complexity”. Furthermore, commonsense moral side constraints are complex enough to risk such collapse when we train an “actor” but not a “predictor”. This seems confused.
First, all we need to turn a language model prediction into an action is a means of turning text into action, and we have many such means. So the distinction between text predictor and actor is suspect. We could consider an alternative knows/cares distinction: does a system act properly when properly incentivised (“knows”) vs does it act properly when presented with whatever context we are practically able to give it (“”“cares”””)? Language models usually act properly given simple prompts, so in this sense they “care”. So rejecting evidence from language models does not seem well justified.
Second, there’s no need to claim that commonsense moral side constraints in particular are so hard that trying to develop AI systems that respect them leads to policy collapse. It need only be the case that one of the things we try to teach them to do leads to policy collapse. Teaching values is not particularly notable among all the things we might want AI systems to do; it certainly does not seem to be among the hardest. Focussing on values makes the argument unnecessarily weak.
Third, algorithmic complexity is measured with respect to a prior. The post invokes (but does not justify) an “English speaking evil genie” prior. I don’t think anyone thinks this is a serious prior for reasoning about advanced AI system behaviour. But the post is (according to your commentary, if not the post itself) making a quantitative point—values are sufficiently complex to induce policy collapse—but it’s measuring this quantity using a nonsense prior. If the quantitative argument was indeed the original point, it is mystifying why a nonsense prior was chosen to make it, and also why no effort was made to justify the prior.
the text proposes full value alignment as a solution to the commonsense side constraints problem, but this turned out to be stronger than necessary.
My question is: why, exactly, is the following statement true?
Here’s a basic model of policy collapse: suppose there exist pathological policies of low prior probability (/high algorithmic complexity) such that they play the training game when it is strategically wise to do so, and when they get a good opportunity they defect in order to pursue some unknown aim.
Because they play the training game, a wide variety of training objectives will collapse to one of these policies if the system in training starts exploring policies of sufficiently high algorithmic complexity. So, according to this crude model, there’s a complexity bound: stay under it and you’re fine, go over it and you get pathological behaviour. Roughly, whatever desired behaviour requires the most algorithmically complex policy is the one that is most pertinent for assessing policy collapse risk (because that’s the one that contributes most of the algorithmic complexity, and so it gives you a first-order estimate of whether or not you’re crossing the collapse threshold). So, which desired behaviour requires the most complex policy: is it, for example, respecting commonsense moral constraints, or is it inventing molecular nanotechnology?
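Here is that crude model restated as a few lines of code, with every number invented purely for illustration:

```python
# Hypothetical complexity estimates (in bits) for the policy needed for each desired
# behaviour, plus a hypothetical bound above which this crude model predicts collapse.
required_behaviours = {
    "respect commonsense moral side constraints": 50,
    "invent molecular nanotechnology": 400,
}
collapse_threshold_bits = 300

# The most complex required behaviour dominates the first-order risk estimate.
dominant = max(required_behaviours, key=required_behaviours.get)
crosses_threshold = required_behaviours[dominant] > collapse_threshold_bits
print(f"dominant requirement: {dominant!r}; collapse risk: {crosses_threshold}")
```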
Tangentially, the policy collapse theory does not predict outcomes that look anything like malicious compliance. It predicts that, if you’re in a position of power over the AI system, your mother is saved exactly as you want her to be. If you are not in such a position then your mother is not saved at all and you get a nanobot war instead or something. That is, if you do run afoul of policy collapse, it doesn’t matter if you want your system to pursue simple or complex goals, you’re up shit creek either way.
Alice: I want to make a bovine stem cell that can be cultured at scale in vats to make meat-like tissue. I could use directed evolution. But in my alternate universe, genome sequencing costs $1 billion per genome, so I can’t straightforwardly select cells to amplify based on whether their genome looks culturable. Currently the only method I have is to do end-to-end testing: I take a cell line, I try to culture a great big batch, and then see if the result is good quality edible tissue, and see if the cell line can last for a year without mutating beyond repair. This is very expensive, but more importantly, it doesn’t work. I can select for cells that make somewhat more meat-like tissue; but when I do that, I also heavily select for other very bad traits, such as forming cancer-like growths. I estimate that it takes on the order of 500 alleles optimized relative to the wild type to get a cell that can be used for high-quality, culturable-at-scale edible tissue. Because that’s a large complex change, it won’t just happen by accident; something about our process for making the cells has to put those bits there.
Bob: In a recent paper, a polygenic score for culturable meat is given. Since we now have the relevant polygenic score, we actually have a short handle for the target: namely, a pointer to an implementation of this polygenic score as a computer program.
Alice: That seems of limited relevance. It’s definitely relevant in that, if I grant the premise that this is actually the right polygenic score (which I don’t), we now know what exactly we would put in the genome if we could. That’s one part of the problem solved, but it’s not the part I was talking about. I’m talking about the part where I don’t know how to steer the genome precisely enough to get anywhere complex.
Bob: You’ve been bringing up the complexity of the genomic target. I’m saying that actually the target isn’t that complex, because it’s just a function call to the PGS.
Alice: Ok, yes, we’ve greatly decreased the relative algorithmic complexity of the right genome, in some sense. It is indeed the case that if I ran a computer program randomly sampled from strings I could type into a python file, it would be far more likely to output the right genome if I have the PGS file on my computer compared to if I don’t. True. But that’s not very relevant because that’s not the process we’re discussing. We’re discussing the process that creates a cell with its genome, not the process that randomly samples computer programs weighted by [algorithmic complexity in the python language on my computer]. The problem is that I don’t know how to interface with the cell-creation process in a way that lets me push bits of selection into it. Instead, the cell-creation process just mostly does its own thing. Even if I do end-to-end phenotype selection, I’m not really steering the core process of cell-genome-selection.
Bob: I understand, but you were saying that the complexity of the target makes the whole task harder. Now that we have the PGS, the target is not very complex; we just point at the PGS.
Alice: The point about the complexity is to say that cells growing in my lab won’t just spontaneously start having the 500 alleles I want. I’d have to do something to them—I’d have to know how to pump selection power into them. It’s some specific technique I need to have but don’t have, for dealing with cells. It doesn’t matter that the random-program complexity has decreased, because we’re not talking about random programs, we’re talking about cell-genome-selection. Cell-genome-selection is the process that I don’t know how to consistently pump bits into, and it’s the process that doesn’t by chance get the 500 alleles. It’s the process against which I’m measuring complexity.
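(For concreteness, Bob's "pointer to an implementation of this polygenic score" amounts to something like the sketch below, with made-up weights. Alice's point is that having this function on a computer does nothing to make the cell-creation process consult it.)

```python
# Hypothetical effect sizes for the alleles relevant to culturability.
PGS_WEIGHTS = {"allele_001": 0.8, "allele_002": -0.3, "allele_003": 1.2}

def culturability_score(genome: dict[str, int]) -> float:
    """Polygenic score: a weighted sum over how many copies of each allele are present."""
    return sum(weight * genome.get(allele, 0) for allele, weight in PGS_WEIGHTS.items())
```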
This analogy is valid in the case where we have absolutely no idea how to use a system’s representations or “knowledge” to direct an AI’s behavior. That is the world Yudkowsky wrote the sequences in. It is not the world we currently live in. There are several, perhaps many, plausible plans to direct a competent AGI’s actions, “thoughts”, and “values” toward either its own or a subsystem’s “understanding” of human values. See Goals selected from learned knowledge: an alternative to RL alignment for some of those plans. Critiques need to go beyond the old “we have no idea” argument and actually address the ideas we have.
That’s incorrect, but more importantly it’s off topic. The topic is “what does the complexity of value have to do with the difficulty of alignment”. Barnett AFAIK in this comment is not saying (though he might agree, and maybe he should be taken as saying so implicitly or something) “we have lots of ideas for getting an AI to care about some given values”. Rather he’s saying “if you have a simple pointer to our values, then the complexity of values no longer implies anything about the difficulty of alignment because values effectively aren’t complex anymore”.
This.
I’m not sure you could be as confident as Yudkowsky was at the time, but yeah, there was a serious probability, in the epistemic state of 2008, that human values were so complicated, and that simple techniques would make AIs goodhart so completely on the intended task, that controlling smart AI was essentially hopeless.
We now know that a lot of the old LessWrong lore on how complicated human values and wishes are, at least in the code section, is either incorrect or irrelevant, and we also know that the standard LW story of how humans came to dominate other animals is incorrect to a degree that impacts AI alignment.
I have my own comments on the ideas below, but people really should try to update on the evidence we gained from LLMs. We learned a lot about ourselves and LLMs in the process, there’s a lot of evidence that generalizes from LLMs to future AGI/ASI, and IMO LW updated way, way too slowly on AI safety.
https://www.lesswrong.com/posts/83TbrDxvQwkLuiuxk/?commentId=BxNLNXhpGhxzm7heg
https://www.lesswrong.com/posts/YyosBAutg4bzScaLu/thoughts-on-ai-is-easy-to-control-by-pope-and-belrose#4yXqCNKmfaHwDSrAZ (This is more of a model-based RL approach to alignment)
https://www.lesswrong.com/posts/wkFQ8kDsZL5Ytf73n/my-disagreements-with-agi-ruin-a-list-of-lethalities#dyfwgry3gKRBqQzoW
https://www.lesswrong.com/posts/wkFQ8kDsZL5Ytf73n/my-disagreements-with-agi-ruin-a-list-of-lethalities#7bvmdfhzfdThZ6qck
a) I think at least part of what’s gone on is that Eliezer has been misunderstood and has been facing the same actually quite dumb arguments a lot, and he is now (IMO) too quick to round new arguments off to something he’s got cached arguments for. (I’m not sure whether this is exactly what went on in this case, but it seems plausible without carefully rereading everything.)
b) I do think when Eliezer wrote this post, there were literally a bunch of people making quite dumb arguments that were literally “the solution to AI ethics/alignment is [my preferred elegant system of ethics] / [just have it track smiling faces] / [other explicit hardcoded solutions that were genuinely impractical]”
I think I personally also did not get what you were trying to say for a while, so I don’t think the problem here is just Eliezer (although it might be me making a similar mistake to the one I hypothesize Eliezer to have made, for reasons that are correlated with him)
I do generally think a criticism I have of Eliezer is that he has spent too much time comparatively focused on the dumber 3⁄4 of arguments, instead of engaging directly with top critics, who are often actually making more subtle points (and he has been a bit too slow to update that this is what’s going on)
Wish there was a system where people could pay money to bid up what they believed were the “top arguments” that they wanted me to respond to. Possibly a system where I collect the money for writing a diligent response (albeit note that in this case I’d weigh the time-cost of responding as well as the bid for a response); but even aside from that, some way of canonizing what “people who care enough to spend money on that” think are the Super Best Arguments That I Should Definitely Respond To. As it stands, whatever I respond to, there’s somebody else to say that it wasn’t the real argument, and this mainly incentivizes me to sigh and go on responding to whatever I happen to care about more.
(I also wish this system had been in place 24 years ago so you could scroll back and check out the wacky shit that used to be on that system earlier, but too late now.)
I do think such a system would be really valuable, and is the sort of the thing the LW team should try to build. (I’m mostly not going to respond to this idea right now but I’ve filed it away as something to revisit more seriously with Lightcone. Seems straightforwardly good)
But it feels slightly orthogonal to what I was trying to say. Let me try again.
(this is now officially a tangent from the original point, but it feels important to me)
It would be good if the world could (deservedly) trust, that the best x-risk thinkers have a good group epistemic process for resolving disagreements.
At least two steps that seem helpful for that process are:
Articulating clear lists of the best arguments, such that people can prioritize refuting them (or updating on them).
But, before that, there is a messier process of “people articulating half formed versions of those arguments, struggling to communicate through different ontologies, being slightly confused.” And there is some back-and-forth process typically needed to make progress.
It is that “before” step where it feels like things seem to be going wrong, to me. (I haven’t re-read Matthew’s post or your response comment from a year ago in enough detail to have a clear sense of what, if anything, went wrong. But to illustrate the ontology: I think that instance was roughly in the liminal space between the two steps.)
Half-formed confused arguments in different ontologies are probably “wrong”, but that isn’t necessarily because they are completely stupid, it can be because they are half-formed. And maybe the final version of the argument is good, or maybe not, but it’s at least a less stupid version of that argument. And if Alice rejects a confused, stupid argument in a loud way, without understanding the generator that Bob was trying to pursue, Bob’s often rightly annoyed that Alice didn’t really hear them and didn’t really engage.
Dealing with confused half-formed arguments is expensive, and I’m not sure it’s worth people’s time, especially given that confused half-formed arguments are hard to distinguish from “just wrong” ones.
But, I think we can reduce wasted-motion on the margin.
A hopefully cheap-enough TAP that might help, if more people adopted it, is something like:
<TAP> When responding to a wrong argument (which might be completely stupid, or might be a half-formed thing going in an eventually interesting direction)
<ACTION> Preface response with something like: “I think you’re saying X. Assuming so, I think this is wrong because [insert argument].” End the argument with “If this seemed to be missing the point, can you try saying your thing in different words, or clarify?”
(if it feels too expensive to articulate what X is, instead one could start with something more like “It looks at first glance like this is wrong because [insert argument]” and then still end with the “check if missing the point?” closing note)
I think more-of-that-on-the-margin from a bunch of people would save a lot of time spent in aggro-y escalation spirals.
re: top level posts
This doesn’t quite help with when, instead of replying to someone, you’re writing a top-level post responding to an abstracted argument (i.e. The Sun is big, but superintelligences will not spare Earth a little sunlight).
I’d have to think more about what to do for that case, but, the sort of thing I’m imagining is a bit more scaffolding that builds towards “having a well indexed list of the best arguments.” Maybe briefly noting early on “This essay is arguing for [this particular item in List of Lethalities]” or “This argument is adding a new item to List of Lethalities” (and then maybe update that post, since it’s nice to have a comprehensive list).
This doesn’t feel like a complete solution, but the sort of thing I’d be looking for is cheap things you can add to posts that help bootstrap towards a clearer-list-of-the-best-arguments existing.
I would suggest formulating this like a literal attention economy.
You set a price for your attention (probably something like $1). This should be the price at which, even if the post is a waste of time, the money makes it worth it.
“Recommenders” can recommend content to you by paying the price.
If the content was worth your time, you pay the recommender the $1 back plus a couple cents.
The idea is that the recommenders would get good at predicting what posts you’d pay them for. And since you aren’t a causal decision theorist they know you won’t scam them. In particular, on average you should be losing money (but in exchange you get good content).
This doesn’t necessarily require new software. Just tell people to send PayPals with a link to the content.
With custom software, theoretically there could exist a secondary market for “shares” in the payout from step 3 to make things more efficient. That way the best recommenders could sell their shares and then use that money to recommend more content before you payout.
If the system is bad at recommending content, at least you get paid!
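A rough sketch of those mechanics (the specific numbers are illustrative, not a spec):

```python
ATTENTION_PRICE = 1.00  # cost for a recommender to put one piece of content in front of you
BONUS = 0.05            # small premium refunded on top when the content was worth your time

def recommend(recommender_balance: float) -> float:
    """Recommender pays the attention price up front to surface a piece of content."""
    return recommender_balance - ATTENTION_PRICE

def settle(recommender_balance: float, worth_the_time: bool) -> float:
    """Reader refunds the price plus a bonus only if the content was worth reading."""
    if worth_the_time:
        return recommender_balance + ATTENTION_PRICE + BONUS
    return recommender_balance
```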
I think this is worth a new top-level post. I think the discussion on your Evaluating the historical value misspecification argument was a high-water mark for resolving the disagreement on alignment difficulty between old-schoolers and new prosaic alignment thinkers. But that discussion didn’t make it past the point you raise here: if we can identify human values, shouldn’t that help (a lot) in making an AGI that pursues those values?
One key factor is whether the understanding of human values is available while the AGI is still dumb enough to remain in your control.
I tried to progress this line of discussion in my The (partial) fallacy of dumb superintelligence and Goals selected from learned knowledge: an alternative to RL alignment.
I want to push back on this a bit. I suspect that “demonstrated progress” is doing a lot of work here, and smuggling in an assumption that current trends with LLMs will continue and can be extrapolated straightforwardly.
It’s true that LLMs have some nice properties for encapsulating fuzzy and complex concepts like human values, but I wouldn’t actually want to use any current LLMs as a referent or in a rating system like the one you propose, for obvious reasons.
Maybe future LLMs will retain all the nice properties of current LLMs while also solving various issues with jailbreaking, hallucination, robustness, reasoning about edge cases, etc. but declaring victory already (even on a particular and narrow point about value identification) seems premature to me.
Separately, I think some of the nice properties you list don’t actually buy you that much in practice, even if LLM progress does continue straightforwardly.
A lot of the properties you list follow from the fact that LLMs are pure functions of their input (at least with a temperature of 0).
Functional purity is a very nice property, and traditional software that encapsulates complex logic in pure functions is often easier to reason about, debug, and formally verify vs. software that uses lots of global mutable state and / or interacts with the outside world through a complex I/O interface. But when the function in question is 100s of GB of opaque floats, I think it’s a bit of a stretch to call it transparent and legible just because it can be evaluated outside of the IO monad.
Aside from purity, I don’t think your point about an LLM being a “particular function” that can be “hooked up to the AI directly” is doing much work: input() (i.e. asking actual humans) seems just as direct and particular as llm(). If you want your AI system to actually do something in the messy real world, you have to break down the nice theoretical boundary and guarantees you get from functional purity somewhere.
More concretely, given your proposed rating system, simply replace any LLM calls with a call that just asks actual humans to rate a world state given some description, and it seems like you get something that is at least as legible and transparent (in an informal sense) as the LLM version. The main advantage with using an LLM here is that you could potentially get lots of such ratings cheaply and quickly. Replay-ability, determinism and the relative ease of interpretability vs. doing neuroscience on the human raters are also nice, but none of these properties are very reassuring or helpful if the ratings themselves aren’t all that good. (Also, if you’re doing something with such low sample efficiency that you can’t just use actual humans, you’re probably on the wrong track anyway.)
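To spell out the swap being described, here is a minimal sketch; rate_with_llm is a hypothetical stand-in, not a reference to any particular model or API:

```python
def rate_with_llm(world_state: str) -> float:
    """Hypothetical LLM-backed rater; no particular model or API is assumed."""
    raise NotImplementedError

def rate_with_humans(world_state: str) -> float:
    """Ask an actual human to score the described world state."""
    answer = input(f"On a scale of 0 to 10, how good is this outcome?\n{world_state}\n> ")
    return float(answer) / 10.0

# Either rater can be plugged into the proposed rating system; the interesting differences
# are cost, speed, and rating quality, not purity or "particularity".
```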