What this post illustrates is that putting crisp physical predicates on reality won’t work as a way to specify what you actually want. This point is true!
Matthew is not disputing this point, as far as I can tell.
Instead, he is trying to critique some version of[1] the “larger argument” (mentioned in the May 2024 update to this post) in which this point plays a role.
You have exhorted him several times to distinguish between that larger argument and the narrow point made by this post:
[...] and if you think that some larger thing is not correct, you should not confuse that with saying that the narrow point this post is about, is incorrect [...]
But if you’re doing that, please distinguish the truth of what this post actually says versus how you think these other later clever ideas evade or bypass that truth.
But it seems to me that he’s already doing this. He’s not alleging that this post is incorrect in isolation.
The only reason this discussion happened in the comments of this post at all is the May 2024 update at the start of it, which Matthew used as a jumping-off point for saying “my critique of the ‘larger argument’ does not make the mistake referred to in the May 2024 update[2], but people keep saying it does[3], so I’ll try restating that critique again in the hopes it will be clearer this time.”
I say “some version of” to allow for a distinction between (a) the “larger argument” of Eliezer_2007′s which this post was meant to support in 2007, and (b) whatever version of the same “larger argument” was a standard MIRI position as of roughly 2016-2017.
As far as I can tell, Matthew is only interested in evaluating the 2016-2017 MIRI position, not the 2007 EY position (insofar as the latter is different, if in fact it is). When he cites older EY material, he does so as a means to an end – either as indirect evidence of later MIRI positions, or because it was itself cited in the later MIRI material which is his main topic.
Note that the current version of Matthew’s 2023 post includes multiple caveats that he’s not making the mistake referred to in the May 2024 update.
Note also that Matthew’s post only mentions this post in two relatively minor ways: first, to clarify that he doesn’t make the mistake referred to in the update (unlike some “Non-MIRI people” who do make the mistake), and second, to support an argument about whether “Yudkowsky and other MIRI people” believe that it could be sufficient to get a single human’s values into the AI, or whether something like CEV would be required instead.
I bring up the mentions of this post in Matthew’s post in order to clarify what role “is ‘The Hidden Complexity of Wishes’ correct in isolation, considered apart from anything outside it?” plays in Matthew’s critique – namely, none at all, IIUC.
(I realize that Matthew’s post has been edited over time, so I can only speak to the current version.)
To be fully explicit: I’m not claiming anything about whether or not the May 2024 update was about Matthew’s 2023 post (alone or in combination with anything else). I’m just rephrasing what Matthew said in the first comment of this thread (which was also agnostic on the topic of whether the update referred to him).
Matthew is not disputing this point, as far as I can tell.
Instead, he is trying to critique some version of[1] the “larger argument” (mentioned in the May 2024 update to this post) in which this point plays a role.
I’ll confirm that I’m not saying this post’s exact thesis is false. This post seems to be largely a parable about a fictional device, rather than an explicit argument with premises and clear conclusions. I’m not saying the parable is wrong. Parables are rarely “wrong” in a strict sense, and I am not disputing this parable’s conclusion.
However, I am saying: this parable presumably played some role in the “larger” argument that MIRI has made in the past. What role did it play? Well, I think a good guess is that it portrayed the difficulty of precisely specifying what you want or intend, for example when explicitly designing a utility function. This problem was often alleged to be difficult because, when you want something complex, it’s difficult to perfectly delineate potential “good” scenarios and distinguish them from all potential “bad” scenarios. This is the problem I was analyzing in my original comment.
While the term “outer alignment” was not invented to describe this exact problem until much later, I was using that term purely as descriptive terminology for the problem this post clearly describes, rather than claiming that Eliezer in 2007 was deliberately describing something that he called “outer alignment” at the time. Because my usage of “outer alignment” was merely descriptive in this sense, I reject the idea that my comment was anachronistic.
And again: I am not claiming that this post is inaccurate in isolation. In both my above comment, and in my 2023 post, I merely cited this post as portraying an aspect of the problem that I was talking about, rather than saying something like “this particular post’s conclusion is wrong”. I think the fact that the post doesn’t really have a clear thesis in the first place means that it can’t be wrong in a strong sense at all. However, the post was definitely interpreted as explaining some part of why alignment is hard — for a long time by many people — and I was critiquing the particular application of the post to this argument, rather than the post itself in isolation.