I continue to not understand why Pascal’s Mugging seems like a compelling argument. The more money the mugger offers, the less likely I think he is to deliver the goods. If I met a real-world Pascal’s Mugger on the street, there is no amount of money he could offer me that would make me think it was a positive expected value deal.
A Pascal’s Mugger accosts you on the street.
Mugger: “Please give me $1. If I promise in exchange to bring you $2 tomorrow, how likely do you think it is that I’ll follow through?”
You: “30%. The expected value is -$0.40, so no.”
Mugger: “What if I promise to bring you $3 tomorrow? How likely do you think it is that I’ll follow through then?”
You: “20%. The expected value is still -$0.40, so no.”
Mugger: “What if I promise to bring $4?”
You: “Let’s cut to the chase. I think the probability of you bringing me D dollars is 0.6/D, and so the expected value is always going to be -$0.40. I’m never giving you my dollar.”
Mugger: “Phooey.” [walks away to accost somebody else]
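(For concreteness, here is a minimal sketch of the model the dialogue above is using; the 0.6/D probability and the constant -$0.40 expected value come from the exchange itself, and the code is only an illustration.)

```python
# Expected value of handing over $1 when the mugger offers D dollars back
# and you assign probability p(D) = 0.6 / D to actually being paid.
def expected_value(offer, prob_of_payment):
    return prob_of_payment * offer - 1.0  # the $1 is gone either way

for offer in [2, 3, 4, 1_000_000]:
    p = 0.6 / offer
    print(offer, p, expected_value(offer, p))
# Each offer works out to -$0.40 (up to floating-point noise), so under this
# particular probability assignment no offer is ever worth taking.
```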
That would be a convenient resolution to the Mugging, but it seems unlikely to actually be true. By the time you get up to numbers around $1 million, the probability of being paid is very low, but most of that probability comes from situations like ‘Elon Musk is playing a prank on me,’ and in many of those situations you could just as easily be paid $2 million.
It seems likely that ‘probability of payment given offer of $2 million’ is substantially more than half of ‘probability of payment given offer of $1 million’.
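(To make that objection concrete, here is a toy comparison; the 0.7 exponent below is an arbitrary assumption of mine, chosen only to illustrate “probability falls off slower than 1/D.”)

```python
# If the probability of payment falls off exactly as 1/D, the expected value
# stays flat; if it falls off any slower, the EV grows with the offer and a
# large enough promise eventually looks like a good deal.
def ev(offer, prob):
    return prob * offer - 1.0

for offer in [1_000_000, 2_000_000]:
    p_fast = 0.6 / offer          # halves when the offer doubles
    p_slow = 0.6 / offer ** 0.7   # falls by less than half when the offer doubles
    print(offer, round(ev(offer, p_fast), 2), round(ev(offer, p_slow), 2))
# The second column stays at -0.40; the third grows from roughly 37 to roughly 46.
```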
Pascal’s Mugging arguments are used to address two questions. One is “why can’t the mugger extract money from people by offering them arbitrarily large sums of money tomorrow in exchange for a small amount of money today?” This is the situation I have sketched.
The other is “why, when offered two propositions of equal expected value, do we prefer the one with a lower payoff and higher probability?” I think the situation you have articulated is more relevant to this question. What do you think?
Thanks! That sums up my intuition almost exactly (though I’d probably lower the probability drastically with every new attempt). There should be something out there that formalizes that part of rationality.
For smaller amounts of money (or utility), this works. But consider the scenario where the mugger promises you one trillion dollars and you say no, based on the expected value. He then offers you two trillion dollars (let’s say your marginal utility of money is constant at this level, because you’re an effective altruist and expect to save twice as many lives with twice the money). Do you really think that the mugger being willing to give you two trillion is less than half as likely as him being willing to give you one trillion? It seems to me that anyone willing and able to give a stranger one trillion for a bet is probably also able to give twice as much money.
I do. You’re making a practical argument, so let’s put this in billions, since nobody has two trillion dollars. Today, according to Forbes, there is one person with over $200 billion in wealth, and 6 people (actually one is a family, but I’ll count them as unitary) with over $100 billion in wealth.
So at a base rate, being offered a plausible $200 billion by a Pascalian mugger is about 17% as likely as being offered $100 billion.
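(Spelling out the arithmetic behind that base rate; the Forbes counts are the ones quoted above, and treating “could plausibly pay $X” as “has more than $X in wealth” is a simplification of mine.)

```python
# Base-rate arithmetic from the Forbes counts above.
people_over_100b = 6
people_over_200b = 1
likelihood_ratio = people_over_200b / people_over_100b   # 1/6, about 17%

# Doubling the payoff (x2) does not compensate for the drop in probability (x1/6),
# so the $200 billion offer's expected value is only about a third of the
# $100 billion offer's.
relative_expected_value = 2 * likelihood_ratio           # about 0.33
print(round(likelihood_ratio, 2), round(relative_expected_value, 2))
```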
This doesn’t preclude the possibility that in some real world situation you may find some higher offers more plausible than some lower offers.
But as I said in another comment, there are only two possibilities: either you judge the mugger’s offer to be likely enough that it has positive expected utility for you, or you judge it too unlikely and it therefore doesn’t. In the former case, you are a fool not to accept. In the latter case, you are a fool to take the offer.
To be clear, I am talking about expected utility, not expected payoff. If $100 is not worth twice as much to you as $50 in terms of utility, then it’s worse, not neutral, to go from a 50% chance of a $50 payoff to a 25% chance of a $100 payoff. This also helps explain why people are hesitant to accept the mugger’s offers. Not only might the payoff become less likely, perhaps even exponentially less likely, as the offer grows; the marginal utility per dollar may decrease at the same time.
This is a practical argument though, and I don’t think it’s possible to give a conclusive account of what our likelihood or utility function ought to be in this contrived and hypothetical scenario.
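(A minimal sketch of the $50 vs. $100 comparison above, using a square-root utility curve purely as a stand-in for “diminishing marginal utility”; the specific curve is my assumption, not anything specified in the comment.)

```python
import math

# Square-root utility: a simple curve under which $100 is worth less than
# twice as much as $50.
def utility(dollars):
    return math.sqrt(dollars)

eu_small = 0.50 * utility(50)    # 50% chance of $50  -> about 3.54 utils
eu_large = 0.25 * utility(100)   # 25% chance of $100 -> exactly 2.50 utils
print(round(eu_small, 2), round(eu_large, 2))
# Equal expected dollars ($25 either way), but the lower-probability,
# higher-payoff gamble has lower expected utility, so it is strictly worse.
```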
I agree with what you’re saying; the reason I used trillions was precisely that it’s an amount nobody has. Any being that can produce a trillion dollars on the spot is likely (more than 50%, I’d guess) powerful enough to produce two trillion dollars, whereas the same cannot be said for billions.
As for expected utility vs expected payoff, I agree that under conditions of diminishing marginal utility the offer is almost never worth taking. I am perhaps a bit too used to the more absurd versions of Pascal’s Mugging, where the mugger promises to grant you utility directly, or disutility in the form of a quadrillion years of torture.
Probably the intuition against accepting the money offer does indeed lie in diminishing marginal utility, but I find it interesting that I’m not tempted to take the offer even if it’s stated in terms of things with constant marginal utility to me, like lives saved or years of torture prevented.
“I find it interesting that I’m not tempted to take the offer even if it’s stated in terms of things with constant marginal utility to me, like lives saved or years of torture prevented.”
My instant response is that this strongly suggests that lives saved and years of torture prevented do not in fact have constant marginal utility to you, or, more specifically, to the part of you that is in control of your intuitive reactions. I share your lack of temptation to take the offer.
My explanations are either or both of the following:
1. My instinctive sense of “altruistic temptation” is badly designed and makes poor choices in these scenarios, or else I am not as altruistic as I like to think.
2. My intuition for whether Pascalian Muggings are net positive expected value is correctly discerning that they are not, no matter the nature of the promised reward. Even in the case of an offer of increasing amounts of utility (defined as “anything for which twice as much is always twice as good”), I can still think that the offer to produce it is less and less likely to pay off the more that is offered.
That is indeed somewhat similar to the “Hansonian adjustment” approach to solving the Mugging when larger numbers come into play. Hanson originally suggested that, conditional on the claim that 3^^^^3 distinct people will come into existence, we should need a lot of evidence to convince us we’re the one with a unique opportunity to determine almost all of their fates. It seems like such claims should be penalized by a factor of 1/3^^^^3. We can perhaps extend this so it applies to causal nodes as well as people. That idea seems more promising to me than bounded utility, which implies that even a selfish agent would be unable to share many goals with its future self (and, technically, even a simple expected value calculation takes time).
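(Here is a toy version of that leverage-penalty idea; the constants are arbitrary, 3^^^^3 itself is far too large to compute with, and nothing here is meant as Hanson’s actual formalism, only an illustration of how the penalty keeps expected values bounded.)

```python
# If the prior probability of being the pivotal actor scales as 1/N with the
# number of people N whose fates are claimed to be at stake, then the expected
# value of the mugger's claim stays bounded no matter how large N gets.
def expected_value(n_people, utility_per_person=1.0, prior_constant=1e-9):
    prior = prior_constant / n_people        # penalty grows with the claimed stakes
    stakes = n_people * utility_per_person   # claimed payoff grows just as fast
    return prior * stakes                    # the two cancel out

for n in [10**6, 10**12, 10**100]:
    print(n, expected_value(n))
# Every claim, however grandiose, contributes the same tiny bounded amount.
```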
Your numbers above are, at least, more credible than saying there’s a 1⁄512 chance someone will offer you a chance to pick between a billion US dollars and one hundred million.