There is a reason to expect that the improbability will scale in general.
To see why, first note that the most watertight formulation of the problem uses lives as its currency (this avoids issues like utility failing to scale linearly with money in the limit of large quantities). So, suppose the mugger offers to save N lives or create N people who will have happy lives (or threatens to kill N people on failure to hand over the wallet, if the target is a shortsighted utilitarian who doesn’t have a policy of no deals with terrorists), for some suitably large N that on the face of it seems to outweigh the small probability. We are thus postulating the existence of N people who will be affected by this transaction, of whom I, the target of the mugging, am one.
Suppose N is, say, a trillion. Intuitively, how plausible is it that I just happen to be the one guy who gets to make a decision that will affect a trillion lives? More formally, we can say that, given the absence of any prior reason I should be in such an unusual position, the prior probability of this is 1 in N, which does scale with N to match the increase in claimed utility.
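To spell out the cancellation (a sketch; here u stands for the utility of one saved or created life, a symbol introduced purely for illustration): if the offer involves N affected people, and my prior probability of being the one pivotal decision-maker among them is at most 1/N, then
\[ \mathbb{E}[U] \;\le\; \Pr(\text{offer is genuine}) \times N u \;\le\; \frac{1}{N} \times N u \;=\; u. \]
The expected utility of the deal is bounded by the utility of a single life, however large the mugger makes N.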
Granted, the original formulation did offer the extension of a single life instead of the creation of separate lives. I consider it reasonable to regard this as a technical detail; is there any fundamental reason why a thousand centuries of extra life can’t be regarded as equivalent to a thousand century-long lives chained together in sequence?
BTW, what does RW refer to?
I’m not sure if we can write this off as a technical detail, because we are formulating our prior based on it. What if we assume that we are talking about money, and the mugger offers to give us an amount of money that is equivalent in terms of utility to creating N happy lives (assuming he knows our utility function)? If your reasoning is correct, then the prior probability for that would have to be the same as our prior for the mugger creating N happy lives, but since totally different mechanisms would be involved in doing so, this may not be true. That, to me, seems like a problem, because we want to be able to defuse Pascal’s Mugging in full generality.
RW = RationalWiki
Well, there is no necessary reason why all claimed mechanisms must be equally probable. The mugger could say “I’ll heal the sick with my psychic powers” or “when I get to the bank on Monday, I’ll donate $$$ to medical research”; even if the potential utilities were the same and both probabilities were small, we would not consider the probabilities equal.
Also, the utility of money doesn’t scale indefinitely; if nothing else, it levels off once the amount starts being comparable to all the money in the world, so adding more just creates additional inflation.
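As a toy illustration of that leveling-off (my own model, not part of the original problem): if M denotes all the money in the world, the real purchasing power of a windfall m might behave something like
\[ u(m) \;\propto\; \frac{m}{m + M}, \]
which grows roughly linearly while m is much smaller than M, but flattens toward a ceiling once m becomes comparable to M, matching the inflation point above.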
Nonetheless, since the purpose of money is to positively affect lives, we can indeed use similar reasoning to say that the improbability of receiving a large amount of money scales linearly with the amount. Note that this reasoning would correctly dismiss get-rich-quick schemes like pyramid schemes and lotteries, even if we were ignorant of the mechanics involved.
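As a quick sanity check on the lottery case (a back-of-the-envelope sketch; k is an illustrative small constant, not a measured quantity): if the prior probability of actually receiving an amount J is at most k/J, then a ticket costing c has expected net value
\[ \mathbb{E}[\text{net payoff}] \;\le\; \frac{k}{J} \times J - c \;=\; k - c, \]
which is negative whenever the ticket costs more than k, no matter how large the advertised jackpot J is.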
Well, there is no necessary reason why all claimed mechanisms must be equally probable.
That’s why I don’t think we can defuse Pascal’s Mugging: we can potentially imagine a mechanism for which our probability that the mugger is honest doesn’t shrink in proportion to the amount of utility the mugger promises. That would imply that there is no fully general solution to Bostrom’s formulation of Pascal’s Mugging. And that worries me greatly.
However:
Nonetheless, since the purpose of money is to positively affect lives, we can indeed use similar reasoning to say that the improbability of receiving a large amount of money scales linearly with the amount. Note that this reasoning would correctly dismiss get-rich-quick schemes like pyramid schemes and lotteries, even if we were ignorant of the mechanics involved.
This gives me a little bit of hope, since we might be able to use it as a heuristic when dealing with situations like these. That’s not as good as a proof, but it’s not bad.
Also:
The mugger could say “I’ll heal the sick with my psychic powers” or “when I get to the bank on Monday, I’ll donate $$$ to medical research”
Only on LessWrong does that sentence make sense and not sound funny :)