What the heck was up with that, anyway? I’m still confused about Yudkowsky’s reaction to it; from what I’ve pieced together from other posts about it, if anything, attracting the attention of an alien AI so that it uploads you into an infinite hell-simulation, or uses nanobots to turn the Earth into Hell, would be a Good Thing, since at least you wouldn’t have to worry about dying and ceasing to exist.
Even if posting it openly would just get deleted, could someone PM me or something?
EDIT: Someone PMed me; I get it now. It seems like Eliezer’s biggest fear could be averted simply by making a firm precommitment not to respond to such blackmail, and thereby giving it no reason to blackmail you in the first place.
Simply? Making firm commitments at all, especially commitments believable by random others, is a hard problem. I just finished reading Schelling’s Strategies of Commitment, so the issue is at the top of my mind right now.
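To make the precommitment logic concrete, here is a toy sketch in Python with made-up payoffs; the numbers and labels are purely illustrative and don’t come from anything Eliezer or Schelling wrote. The point it illustrates is just backward induction: if the would-be blackmailer expects you to refuse no matter what, blackmailing costs it more than doing nothing, so it never makes the threat.

```python
# Toy extensive-form blackmail game (hypothetical payoffs, purely illustrative):
# the blackmailer moves first (blackmail or not), then the victim responds
# (give in or refuse). A credible precommitment removes the victim's
# "give in" branch, which changes the blackmailer's best response.

# Payoffs as (blackmailer, victim); the numbers are made up for this sketch.
PAYOFFS = {
    ("no_blackmail", None):   (0, 0),
    ("blackmail", "give_in"): (5, -5),
    ("blackmail", "refuse"):  (-1, -10),  # carrying out the threat costs both sides
}

def victim_response(precommitted: bool) -> str:
    """Without precommitment the victim plays the ex-post best response;
    with a credible precommitment, refusal is the only option left."""
    if precommitted:
        return "refuse"
    # Ex post, giving in (-5) beats refusing (-10), so the victim caves.
    give_in_payoff = PAYOFFS[("blackmail", "give_in")][1]
    refuse_payoff = PAYOFFS[("blackmail", "refuse")][1]
    return "give_in" if give_in_payoff > refuse_payoff else "refuse"

def blackmailer_choice(precommitted: bool) -> str:
    """The blackmailer anticipates the victim's response and compares payoffs."""
    response = victim_response(precommitted)
    payoff_if_blackmail = PAYOFFS[("blackmail", response)][0]
    payoff_if_not = PAYOFFS[("no_blackmail", None)][0]
    return "blackmail" if payoff_if_blackmail > payoff_if_not else "no_blackmail"

if __name__ == "__main__":
    for precommitted in (False, True):
        print(f"precommitted={precommitted}: blackmailer plays {blackmailer_choice(precommitted)}")
    # precommitted=False: blackmailer plays blackmail
    # precommitted=True:  blackmailer plays no_blackmail
```

Of course the sketch dodges the hard part the reply points at: here credibility is just a boolean flag that the other player takes at face value, whereas Schelling’s whole subject is how you make a precommitment believable to the other side in the first place.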