Regarding Yudkowsky’s accusations against RationalWiki. Yudkowsky writes: [...]
Calling this malicious is a huge exaggeration. Here is a quote from the LessWrong Wiki entry on Timeless Decision Theory:
When Omega predicts your behavior, it carries out the same abstract computation as you do when you decide whether to one-box or two-box. To make this point clear, we can imagine that Omega makes this prediction by creating a simulation of you and observing its behavior in Newcomb’s problem. [...] TDT says to act as if deciding the output of this computation...
RationalWiki explains this by saying that you should act as if it is you who is being simulated and who possibly faces punishment. This is very close to what the LessWrong Wiki says, phrased in language that people at a larger inferential distance can understand.
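As an aside, here is a minimal sketch of the payoff logic the quoted wiki entry describes, assuming the standard Newcomb amounts ($1,000,000 and $1,000) and a predictor that simply runs the agent’s own decision procedure. The names and code are illustrative only, not taken from the wiki entry or the thread:

```python
# Illustrative sketch only (assumed payoffs; names are not from the wiki entry).
# If Omega's prediction is the output of the same computation as the agent's
# choice, prediction and choice cannot come apart, and one-boxing wins.

def payoff(choice: str, prediction: str) -> int:
    """Agent's payoff in dollars for Newcomb's problem."""
    opaque_box = 1_000_000 if prediction == "one-box" else 0  # filled only if one-boxing was predicted
    transparent_box = 1_000                                    # always contains $1,000
    return opaque_box if choice == "one-box" else opaque_box + transparent_box

# Evaluating a choice as if it also fixes the prediction (the TDT-style view):
for choice in ("one-box", "two-box"):
    print(choice, payoff(choice, prediction=choice))
# one-box 1000000
# two-box 1000
```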
Yudkowsky further writes:
The first malicious lie is here:
an argument used to try and suggest people should subscribe to particular singularitarian ideas, or even donate money to them, by weighing up the prospect of punishment versus reward
Neither Roko, nor anyone else I know about, ever tried to use this as an argument to persuade anyone that they should donate money.
This is not a malicious lie. Here is a quote from Roko’s original post (emphasis mine):
...there is the ominous possibility that if a positive singularity does occur, the resultant singleton may have precommitted to punish all potential donors who knew about existential risks but who didn’t give 100% of their disposable incomes to x-risk motivation. This would act as an incentive to get people to donate more to reducing existential risk, and thereby increase the chances of a positive singularity. This seems to be what CEV (coherent extrapolated volition of humanity) might do if it were an acausal decision-maker. So a post-singularity world may be a world of fun and plenty for the people who are currently ignoring the problem, whilst being a living hell for a significant fraction of current existential risk reducers (say, the least generous half). You could take this possibility into account and give even more to x-risk in an effort to avoid being punished.
This is like a robber walking up to you and explaining that you could take into account that he could shoot you if you don’t give him your money.
Also notice that Roko talks about trading with uFAIs as well.
Roko said that you could reason that way, but he wasn’t actually advocating that.
All the same, the authors of the RationalWiki article might have thought that he was; it’s not clear to me that the error is malicious. It’s still an error.
RationalWiki explains this by saying that you should act as if it is you who is being simulated and who possibly faces punishment. This is very close to what the LessWrong Wiki says, phrased in language that people at a larger inferential distance can understand.
I’m pretty sure that I understand what the quoted text says (apart from the random sentence fragment), and what you’re subsequently claiming that it says. I just don’t see how the two relate, beyond that both involve simulations.
This is like a robber walking up to you and explaining that you could take into account that he could shoot you if you don’t give him your money.
From your own source, immediately following the bolded sentence:
But of course, if you’re thinking like that, then the CEV-singleton is even more likely to want to punish you [...] It is a concrete example of how falling for the just world fallacy might backfire on a person with respect to existential risk...
I don’t completely understand what he’s saying (possibly I might if I were to read his previous post); but he’s pretty obviously not saying what you say he is. (I’m also not aware of his ever having been employed by SIAI or MIRI.)
(I’d be interested in the perspectives of the 7+ users who upvoted this. I see that it was edited; did it say something different when you upvoted it? Are you just siding with XiXiDu or against EY regardless of details? Or is my brain malfunctioning so badly that what looks like transparent bullshit is actually plausible, convincing or even true?)
Downvoted for bad selective quoting in that last quote. I read it and thought, wow, Yudkowsky actually wrote that. Then I thought, hmmm, I wonder if the text right after that says something like “BUT, this would be wrong because …”? Then I read user:Document’s comment. Thank you for looking that up.
Roko wrote that, not Yudkowsky. But either way, yes, it’s incomplete.
The last quote isn’t from Yudkowsky.
Ah, my mistake, thanks again.
Note XiXiDu preserves every potential negative aspect of the MIRI and LW community and is a biased source lacking context and positive examples.
I have been a member for more than 5 years now, so I am probably as much a part of LW as most people. I have repeatedly said that LessWrong is the most intelligent and rational community I know of.
To quote one of my posts:
I estimate that the vast majority of all statements that can be found in the sequences are true, or definitively less wrong. Which generally makes them worth reading.
The difference is that I also highlight the crazy and outrageous stuff that can be found on LessWrong. And I also don’t mind offending the many fanboys who have a problem with this.
Seriously, you bring up a post titled “The Singularity Institute: How They Brainwash You” as supposed evidence of your support for LessWrong, MIRI, whatever?
Yes, when you talk to LessWrongers you occasionally mention the old line about how you consider it the “most intelligent and rational community I know of”. But that evaluation isn’t what you constantly repeat to people outside LessWrong. If you ask people “What does Alexander Kruel think of LessWrong?” nobody will say “He endorses it as the most intelligent and rational community he knows of!”
To people outside LessWrong you keep talking about how LessWrong/MIRI/whatever are out to brainwash you. That makes you pretty much a definitive example of ‘two-faced’.
I even defended LessWrong against RationalWiki previously.
You have edited your posts in that thread beyond all recognition. Back then (in your original version of those posts) I bashed you for unthinking support of LessWrong and unthinking condemnation of RationalWiki.
As I said back then:
The problem isn’t that you asserted something about MWI—I’m not discussing the MWI itself here.
It’s rather that you defended something before you knew what it was that you were defending, and attacked people on their knowledge of the facts before you knew what the facts actually were.
Then once you got more informed about it, you immediately changed the form of the defense while maintaining the same judgment. (Previously it was “Bad critics who falsely claim Eliezer has judged MWI to be correct”; now it’s “Bad critics who correctly claim Eliezer has judged MWI to be correct, but they badly don’t share that conclusion”.)
This all is evidence (not proof, mind you) of strong bias.
In short, your “defense” of LessWrong against RationalWiki at that time was as worthless, as unjust, and as motivated by whatever biases drive you as any later criticism of LessWrong by you has been. Whether defending LessWrong or criticizing it, you’re always in the wrong.
Oh, for pity’s sake. You want to repeatedly ad hominem attack XiXiDu for being a “biased source.” What of Yudkowsky? He’s a biased source—but perhaps we should engage his arguments, possibly by collecting them in one place.
“Lacking context and positive examples”? This doesn’t engage the issue at all. If you want to automatically say this to all of XiXiDu’s comments, you’re not helping.
I feel, and XiXiDu seems to agree, that his posts require a disclaimer or official counterarguments. I feel it’s appropriate to point out that someone has made collecting and spreading every negative aspect of a community they can find into a major part of their life.