I also think that the variant of the problem featuring an actual mugger is about scam recognition.
Suppose you get an unsolicited email claiming that a Nigerian prince wants to send you a Very Large Reward worth $Y. All you have to do is send him a cash advance of $5 first …
I analyze this as a straightforward two-player game tree via the usual minimax procedure. Player one goes first, and can either pay $5 or not. If player one chooses to pay, then player two goes second, and can either pay Very Large Reward $Y to player one, or he can run away with the cash in hand. Under the usual minimax assumptions, player 2 is obviously not going to pay out! Crucially, this analysis does not depend on the value for Y.
The analysis for Pascal’s mugger is equivalent. A decision procedure that needs to introduce ad hoc corrective factors based on the value of Y seems flawed to me. This type of situation should not require an unusual degree of mathematical sophistication to analyze.
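The game tree above can be sketched in a few lines of Python via backward induction. The payoff numbers here are my own illustrative choices (player 1 is out $5 if he pays and player 2 runs; player 2 keeps whatever cash he ends up with):

```python
# Backward induction on the two-player tree described above.
# Payoff tuples are (player 1, player 2); the exact numbers are
# illustrative choices, with y the promised reward and 5 the advance.

def player2_best(y):
    """Player 2 moves last: pay out the reward, or run with the $5."""
    pay_out = (y - 5, 5 - y)
    run = (-5, 5)
    return pay_out if pay_out[1] > run[1] else run

def player1_best(y):
    """Player 1 anticipates player 2's choice before deciding to pay."""
    outcome_if_pay = player2_best(y)
    walk_away = (0, 0)
    return outcome_if_pay if outcome_if_pay[0] > walk_away[0] else walk_away

# Player 2 runs for any positive y, so player 1 walks away -- the
# conclusion never depends on how large y is made.
for y in (100, 10**100, 3**27):
    assert player2_best(y) == (-5, 5)
    assert player1_best(y) == (0, 0)
```

Making Y larger only changes the payoff on a branch that a self-interested player 2 never takes, which is why it drops out of the analysis.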
When I list out the most relevant facts about this scenario, they include the following:
(1) we received an unsolicited offer
(2) from an unknown party from whom we won’t be able to seek redress if anything goes wrong
(3) who can take our money and run without giving us anything verifiable in return.
That’s all we need to know. The value of Y doesn’t matter. If the mugger performs a cool and impressive magic trick, we may want to tip him for his skillful street performance. We still shouldn’t expect him to pay out Y.
I generally learn a lot from the posts here, but in this case I think the reasoning in the post confuses rather than enlightens. When I look back on my own life experiences, there are certainly times when I got scammed. I understand that some in the Less Wrong community may also have fallen victim to scams or fraud in the past. I expect that many of us will likely be subject to disingenuous offers by unFriendly parties in the future. I respectfully suggest that knowing about common scams is a helpful part of a rationalist’s training. It may offer a large benefit relative to other investments.
If my analysis is flawed and/or I’ve missed the point of the exercise, I would appreciate learning why. Thanks!
When you say that player 2 “is obviously not going to pay out” that’s an approximation. You don’t know that he’s not going to pay out. You know that he’s very, very, very unlikely to pay out. (For instance, there’s a very slim chance that he subscribes to a kind of honesty which leads him to do things he says he’ll do, and therefore doesn’t follow minimax.) But in Pascal’s Mugging, “very, very, very unlikely” works differently from “no chance at all”.
That does not matter. If you think it is a scam, then the size of the promised reward does not matter. 100? Googol? Googolplex? 3^^^3? Infinite? It just doesn’t enter the calculation in the first place, since it is made up anyway.
Determining “is this a scam?” would probably have to rely on things other than the size of the reward. That avoids the whole “but but there is no 1 in 3^^^3 probability because I say so” bs.
There’s a probability of a scam, you’re not certain that it is a scam. The small probability that you are wrong about it being a scam is multiplied by the large amount.
What if the probability of it being a scam is a function of the amount offered?
There seems to be this idea on LW that the probability of it being not a scam can only decrease with the Kolmogorov complexity of the offer. If you accept this idea, then the probability being a function of the amount doesn’t help you.
If you accept that the probability can decrease faster than that, then of course that’s a solution.
I can’t come up with any reasons why that should be so.
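The two regimes can be made concrete with a toy calculation. The shrink rates below are illustrative assumptions of mine, not anyone’s actual prior: if the payout probability falls more slowly than 1/Y, the expected value of paying grows without bound as Y grows; if it falls much faster than 1/Y, the expected value stays pinned near −5.

```python
# Toy expected values for handing over $5 under two assumed shrink
# rates for the payout probability. Illustrative numbers only.
from fractions import Fraction
from math import isqrt

def ev(p_payout, y):
    """Expected value of paying $5 for a promised reward y."""
    return p_payout * y - 5

for y in (10**2, 10**4, 10**6):
    p_slow = Fraction(1, 100 * isqrt(y))  # shrinks slower than 1/y
    p_fast = Fraction(1, 2 ** isqrt(y))   # shrinks much faster than 1/y
    print(y, float(ev(p_slow, y)), float(ev(p_fast, y)))
    # ev under p_slow climbs: -4.9, then -4.0, then +5.0, and keeps rising;
    # ev under p_fast stays essentially at -5.
```

This is only arithmetic, not an argument for either rate; the disagreement above is precisely about which rate a reasonable prior permits.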
I suppose that people who talk about Kolmogorov complexity in this setting are thinking of AIXI or some similar decision procedure. Too bad that AIXI doesn’t work with unbounded utility, as expectations may diverge or become undefined.
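The divergence is easy to exhibit in a toy model. Here the 2^−n weight stands in for a complexity-penalized prior, and the 4^n utility is an arbitrary choice of mine that outgrows it:

```python
# Partial sums of a toy "expected utility" where hypothesis n gets
# weight 2^-n (a stand-in for a complexity-penalized prior) but
# promises a reward growing like 4^n. Each term contributes 2^n, so
# the partial sums increase without bound instead of converging.

def partial_expected_utility(terms):
    prior = lambda n: 2.0 ** -n
    utility = lambda n: 4.0 ** n
    return sum(prior(n) * utility(n) for n in range(terms))

print([partial_expected_utility(k) for k in (5, 10, 20)])
# each partial sum over k hypotheses equals 2^k - 1: the series diverges
```

Any utility that grows faster than the prior shrinks produces the same problem, which is why unbounded utility is the sticking point rather than any particular growth rate.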