It might be useful to feature a page containing what we, you know, actually think about the basilisk idea. Although the RationalWiki page seems to be pretty solidly on top of Google search, we might catch a couple of people looking for the source.
If any XKCD readers are here: Welcome! I assume you’ve already googled what “Roko’s Basilisk” is. For a better idea of what’s going on with this idea, see Eliezer’s comment on the xkcd thread (linked in Emile’s comment), or his earlier response here.
Because of Eliezer’s reaction, probably a hundred more people have heard of the Basilisk, and it tars LW’s reputation.
And this wasn't particularly unforeseeable; see the Streisand effect.
Part of rationality is about regarding one’s actions as instrumental.
He mucked that one up. But to be fair to him, it’s because he takes these ideas very seriously. I don’t care about the basilisk because I don’t take elaborate TDT-based reasoning too seriously, partially out of ironic detachment, but many here would say I should.
Righto, you shouldn't let ironic detachment keep you from taking things seriously.
For a better idea of what's going on, you should read all of his comments on the topic in chronological order.
Note that XiXiDu preserves every potentially negative aspect of the MIRI and LW communities and is a biased source, lacking context and positive examples.
I have been a member for more than five years now, so I am probably as much a part of LW as most people. I have repeatedly said that LessWrong is the most intelligent and rational community I know of.
To quote one of my posts: I estimate that the vast majority of all statements that can be found in the sequences are true, or definitively less wrong, which generally makes them worth reading.

I even defended LessWrong against RationalWiki previously.
The difference is that I also highlight the crazy and outrageous stuff that can be found on LessWrong. And I don't mind offending the many fanboys who have a problem with this.
I’m guessing Eliezer has one of those, probably locked away behind a triply-locked vault in the basement of MIRI.
See, it’s comments like these that are one of the reasons people think LW is a cult.
Does MIRI actually have a basement?
It’s behind the hidden door. Full of boxes which say “AI inside—DO NOT TALK TO IT”.
The ghosts there are not really dangerous. Usually.
When I visited MIRI’s headquarters, they were trying to set up a video link to the Future of Humanity Institute. Somebody had put up a monitor in a prominent place and there was a sticky note saying something like “Connects to FHI—do not touch”.
Except that the H was kind of sloppy and bent upward so it looked like an A.
I was really careful not to touch that monitor.
That explanation by Eliezer cleared things up for me. He really should have explained himself earlier. I actually had some vague understanding of what Eliezer was doing with his deletion and refusal to discuss the topic, but as usual, Eliezer's explanation makes things that I thought I sort-of-knew seem obvious in retrospect.
And as Eliezer realizes, the attempt to hush things up was a mistake. Roko’s post should have been taken as a teaching moment.
Exactly. Having the official position buried in comments behind long chains of references doesn't make it sound convincing compared to a well-formatted (even if misleading) article.
That response in /r/futurology is really good actually, I hadn’t seen it before. Maybe it should be reposted (with the sarcasm slightly toned down) as a main article here?
Also, kudos to Eliezer for admitting he messed up with the original deletion.