That doesn’t sound plausible to me, but if you’re right, the right thing to do would be to let as many people as possible know about the issue, so that it’s more likely to be averted.
The way it works is: if people are keeping the basilisk a secret for the sake of protecting others (even if doing so increases their own punishment), then those people value protecting others over their own safety. Therefore, a more effective way to punish them is to torture the people they’re trying to protect.
In Newcomb’s problem, a good agent will 1-box in the emulation and 2-box in reality if it can tell the simulation apart from reality. Even the tiniest flaw in the emulation removes the incentive to follow through on the basilisk’s threat. You would need a very dumb decision theory for the agent to just torture people for no gain.
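To make that incentive concrete, here is a minimal sketch of the Newcomb case (the payoff numbers and function names are my own illustrative choices, not anything from this thread): an agent that can detect the predictor’s emulation looks like a 1-boxer inside the sim while 2-boxing in reality, so the prediction-based threat buys the predictor nothing.

```python
# Illustrative Newcomb payoffs (standard textbook values, chosen for this sketch).
OPAQUE_BOX = 1_000_000   # filled only if the *emulated* agent 1-boxes
TRANSPARENT_BOX = 1_000  # always available to a 2-boxer

def agent_choice(in_simulation: bool, can_detect_sim: bool) -> str:
    """The flaw-exploiting strategy described above."""
    if can_detect_sim and not in_simulation:
        return "2-box"  # in reality: grab both boxes
    return "1-box"      # inside (or possibly inside) the sim: look cooperative

def real_world_payoff(can_detect_sim: bool) -> int:
    # The predictor fills the opaque box based on what the emulated agent does.
    predicted = agent_choice(in_simulation=True, can_detect_sim=can_detect_sim)
    box_a = OPAQUE_BOX if predicted == "1-box" else 0
    actual = agent_choice(in_simulation=False, can_detect_sim=can_detect_sim)
    return box_a + (TRANSPARENT_BOX if actual == "2-box" else 0)

print(real_world_payoff(can_detect_sim=False))  # 1000000: an honest 1-boxer
print(real_world_payoff(can_detect_sim=True))   # 1001000: exploits the flaw
```

The analogy to the basilisk: once the emulation is distinguishable, actually punishing anyone in reality no longer changes behavior in the sim, so a sensible decision theory never pays the cost of following through.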
Yes, and in that case the basilisk isn’t a problem at all. My point is that under any decision-theoretic assumptions, Eliezer’s strategy of secrecy doesn’t help.
I hope the downvotes of the parent are for taboo violation and not for content. When it comes to Roko’s Basilisk specifically (considering potential spooky acausal variants separately), Army’s solution is correct. With the caveat firmly in place, I don’t believe even Eliezer would disagree with that. If he did, then I would have to seriously reconsider my support for SIAI: it would indicate that he is someone who is likely to actually implement (or support the implementation of) the Basilisk’s glare.
I indeed suspect that someone is just downvoting all posts mentioning the basilisk regardless of content. (As for “[T]hat doesn’t sound plausible to me”, this is slightly less true now than when I wrote that post—see http://lesswrong.com/lw/2ft/open_thread_july_2010_part_2/64f2.)
Yes.
Consider using the term “Roko’s Basilisk” for clarity.
Are you sure you don’t want to at the very least rot-13 that? Some people here have explicitly said they’d rather not find out what the basilisk is.
Well, yeah. The whole thing is just stupid, however you look at it.
That is certainly not consistent with his behavior.