If all you say is “I can’t prove anything, but if I’m right, it’ll be really bad”, I yawn and move on.
This is the normal response, even here at LW—I think there’s a popular misperception that LW doctrine is to give the Pascal’s Mugger money. The point of the exercise is to examine the thought processes behind that intuitive, obviously correct “no,” when it appears, on the surface, to be the lower expected utility option. After all, we don’t want to build an AI that can be victimized by Pascalian muggers.
One popular option is the one you picked: Simply ignore probabilities below a certain threshold, whatever the payoff. Another is to discount by the algorithmic complexity, or by the “measure” of the hostages. Yet another is to observe that, if 3^^^^3 people exist, a random person’s (your) chances of being able to affect all the rest in a life-and-death way have to be scaled by 1/3^^^^3. Yet another is that, in a world where things like this happen, a dollar has near-infinite utility. Komponisto suggested that the Kolmogorov complexity of 3^^^^3 deaths, or units of disutility, is much higher than that of the number 3^^^^3; so any such problem is inherently broken.
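To make the contrast concrete, here’s a minimal sketch (in Python, with made-up numbers; `BIG` is just a float stand-in for 3^^^^3, which no machine number can actually represent, and the probability and cost values are purely illustrative) of how the probability-floor and leverage-penalty options block the mugging while naive expected utility doesn’t:

```python
# Illustrative numbers only. BIG stands in for 3^^^^3; the structure of
# the argument is what matters, not the magnitudes.
BIG = 1e300

def naive_eu(p, lives, cost=5):
    """Naive expected utility of paying: p * (lives saved) - cost."""
    return p * lives - cost

def thresholded_eu(p, lives, cost=5, floor=1e-20):
    """Option 1: treat probabilities below some floor as exactly zero."""
    p = 0.0 if p < floor else p
    return p * lives - cost

def leverage_penalized_eu(p, lives, cost=5):
    """Option 3 (a 'leverage penalty'): the prior probability that you in
    particular can affect N other people is at most about 1/N, so the
    payoff term stays bounded no matter how large N grows."""
    return min(p, 1.0 / lives) * lives - cost

p = 1e-30  # credence that the mugger is telling the truth
print(naive_eu(p, BIG))               # ~1e270: astronomically positive, so pay up?!
print(thresholded_eu(p, BIG))         # -5: probability floored to zero
print(leverage_penalized_eu(p, BIG))  # ~-4: payoff term capped near 1
```

The point of the sketch is just that under naive expected utility the payoff term grows without bound as the mugger names bigger numbers, whereas either patch keeps the decision at “don’t pay” regardless of the threatened magnitude.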
Of course, if you’re not planning to build an optimizing agent, your “yawn and move on” response is fine. That’s what the problem is about, not signing up for cryonics or donating to SI or whatever (the proponents of the last two argue for relatively large probabilities of extremely large utilities).