For smaller amounts of money (/utility), this works. But think of the scenario where the mugger promises you one trillion dollars and you say no, based on the expected value. He then offers you two trillion dollars (let’s say your marginal utility of money is constant at this level, because you’re an effective altruist and expect to save twice as many lives with twice the money). Do you really think that the mugger being willing to give you two trillion is less than half as likely as him being willing to give you one trillion? It seems to me that anyone willing and able to give a stranger one trillion for a bet is probably also able to give twice as much money.
I do. You’re making a practical argument, so let’s put this in billions, since nobody has two trillion dollars. Today, according to Forbes, there is one person with over $200 billion in wealth, and 6 people (actually one is a family, but I’ll count them as unitary) with over $100 billion in wealth.
So at a base rate, being offered a plausible $200 billion by a Pascalian mugger is about 17% as likely as being offered $100 billion (one such person out of six).
This doesn’t preclude the possibility that in some real-world situation you may find some higher offers more plausible than some lower offers.
But as I said in another comment, there are only two possibilities: your evaluation is that the mugger’s offer is likely enough that it has positive expected utility to you, or that it is too unlikely and therefore doesn’t. In the former case, you are a fool not to accept. In the latter case, you are a fool to take the offer.
To be clear, I am talking about expected utility, not the expected payoff. If $100 is not worth twice as much to you as $50 in terms of utility, then it’s worse, not neutral, to go from a 50% chance at a $50 payoff to a 25% chance of a $100 payoff. This also helps explain why people are hesitant to accept the mugger’s offers. Not only might the payoff become less likely, perhaps even exponentially less likely, but the marginal utility per dollar may decrease at the same time.
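To make that concrete, here’s a minimal sketch assuming, purely for illustration, a square-root utility function (any concave utility function makes the same point):

```python
import math

def expected_utility(prob: float, payoff: float, utility=math.sqrt) -> float:
    """Expected utility of a simple gamble: P(payoff) * u(payoff)."""
    return prob * utility(payoff)

# Halving the probability while doubling the payoff keeps the expected
# *payoff* at $25 either way, but with diminishing marginal utility the
# expected *utility* drops.
print(expected_utility(0.50, 50))   # 0.5  * sqrt(50)  ~= 3.54
print(expected_utility(0.25, 100))  # 0.25 * sqrt(100)  = 2.50
```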
This is a practical argument though, and I don’t think it’s possible to give a conclusive account of what our likelihood or utility function ought to be in this contrived and hypothetical scenario.
I agree with what you’re saying; the reason I used trillions was precisely because it’s an amount nobody has. Any being that can produce a trillion dollars on the spot is likely (more than 50%, I’d guess) also powerful enough to produce two trillion dollars, while the same cannot be said for billions.
As for expected utility vs expected payoff, I agree that under conditions of diminishing marginal utility the offer is almost never worth taking. I am perhaps a bit too used to the more absurd versions of Pascal’s Mugging, where the mugger promises to grant you utility directly, or disutility in the form of a quadrillion years of torture.
Probably the intuition against accepting the money offer does indeed lie in diminishing marginal utility, but I find it interesting that I’m not tempted to take the offer even if it’s stated in terms of things with constant marginal utility to me, like lives saved or years of torture prevented.
I find it interesting that I’m not tempted to take the offer even if it’s stated in terms of things with constant marginal utility to me, like lives saved or years of torture prevented.
My instant response is that this strongly suggests that lives saved and years of torture prevented do not in fact have constant marginal utility to you, or more specifically, to the part of you that is in control of your intuitive reactions. I share your lack of temptation to take the offer.
My explanations are either or both of the following:
1. My instinctive sense of “altruistic temptation” is badly designed and makes poor choices in these scenarios, or else I am not as altruistic as I like to think.
2. My intuition for whether Pascalian Muggings are net positive expected value is correctly discerning that they are not, no matter the nature of the promised reward. Even in the case of an offer of increasing amounts of utility (defined as “anything for which twice as much is always twice as good”), I can still think that the offer to produce it is less and less likely to pay off the more that is offered (see the sketch below).
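Here’s a toy sketch of that second explanation. It assumes, purely for illustration, that my credence in the mugger’s promise halves for every additional unit of utility offered; any decay faster than the growth of the offer gives the same qualitative result:

```python
def expected_utility(promised_utility: float, halving_scale: float = 1.0) -> float:
    """Expected utility of the mugger's promise, under the toy assumption
    that credence halves every `halving_scale` units of promised utility."""
    credence = 0.5 ** (promised_utility / halving_scale)
    return credence * promised_utility

# As the promise grows, the expected utility stops growing and eventually
# shrinks, so ever-larger offers never become more tempting.
for promised in (1, 2, 4, 8, 16):
    print(promised, round(expected_utility(promised), 4))
# 1 0.5 / 2 0.5 / 4 0.25 / 8 0.0312 / 16 0.0002
```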