It seems like this may be another facet of the problem with our models of expected utility when dealing with very large numbers. For instance, do you accept the Repugnant Conclusion?
I’m at a loss for how to model expected utility in a way that doesn’t generate the Repugnant Conclusion, but my suspicion is that if someone finds such a model, this problem may go away as well.
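To make the connection concrete, here is a minimal sketch (in Python, with population sizes and per-person utilities invented purely for illustration) of how a straight total-utility model generates the Repugnant Conclusion:

```python
# A minimal sketch of how a straight total-utility model generates the
# Repugnant Conclusion. All numbers are invented for illustration.

def total_utility(population_size, utility_per_person):
    """Total view: sum welfare across everyone who exists."""
    return population_size * utility_per_person

# World A: a modest population of very happy people.
world_a = total_utility(1_000_000, 100)   # 100,000,000

# World Z: a vast population with lives barely worth living.
world_z = total_utility(10**12, 0.01)     # 10,000,000,000

print(world_z > world_a)  # True: the total view prefers World Z
```

Any model that simply sums welfare will prefer World Z, however dreary, once the population is large enough; that is the same "large numbers swamp everything" structure the mugging exploits.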
Or not. It may be that the very heuristics and biases that keep us from having correct intuitions about very large and very small numbers are directly tied up in producing a limiting framework that acts as a conservative check.
One thought: the expected utility of letting our god-like figure run this Turing simulation might well be positive! S/he is essentially creating these 3^^^3 people and then killing them, and it seems reasonable to assume that the expected disutility of killing them is entirely dependent on (and thus exactly balanced by) the utility of their creation.
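A toy version of that cancellation argument, with the offsetting assumption flagged explicitly (and a merely-astronomical stand-in for 3^^^3, which no machine can actually represent):

```python
# A toy version of the cancellation argument above. The key (flagged)
# assumption is that the disutility of each death exactly offsets the
# utility of the corresponding creation. N is a stand-in for 3^^^3;
# the structure of the argument does not depend on N's actual size.

N = 10**100
u_creation = 1.0          # utility of bringing one person into existence
u_death = -u_creation     # assumption: death exactly cancels creation
u_lived_experience = 0.5  # welfare accrued while the simulation runs

eu_run = N * (u_creation + u_death + u_lived_experience)
print(eu_run > 0)  # True: creation and death cancel; lived experience remains
```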
So, our mugger doesn’t really hand us a dilemma unless the claim is that this simulation is already running, that those people have lives worth living, and that if you don’t pay the $5, the program will be altered (the sun will stop in the sky, so to speak) and they will all be killed. This last is more of a nitpick.
It does seem to me that the Bayesian probability we assign to this person’s claim must be extraordinarily low, with an uncertainty much larger than its absolute value, because a being both capable of this and willing to offer such a wager (whether in truth or as a test) is deeply beyond our moral or intellectual comprehension. Indeed, if the claim is true, that fact has utility implications that completely dwarf the immediate decision. If they are willing to do this much over $5, what will they do for a billion? Or for some end that money cannot normally purchase? Or merely on a whim? It seems that the information we gain by refusing to pay may be of value commensurate with the disutility of them truthfully carrying out their threat.
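To see how badly the decision depends on those error bars, here is a toy expected-utility comparison; every number in it is invented, and N again stands in for 3^^^3:

```python
# A toy expected-utility comparison for the mugging itself. "p" is our
# credence in the mugger's claim. The point of the paragraph above is
# that p is not merely tiny but has error bars wider than itself, so
# this whole calculation sits inside the noise.

N = 10**100            # stand-in for 3^^^3 lives
u_per_life = 1.0
p = 1e-110             # credence that the threat is real
cost_of_paying = 5.0   # the $5, treated as 5 utility units

eu_pay = -cost_of_paying
eu_refuse = -p * N * u_per_life  # expected loss if we refuse and the claim is true

print(eu_pay, eu_refuse)
# At p = 1e-110, refusing wins (-5 vs -1e-10); at p = 1e-90, paying wins
# (-5 vs -1e10). The decision flips entirely within the uncertainty on p.
```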