When you say that player 2 “is obviously not going to pay out” that’s an approximation. You don’t know that he’s not going to pay out. You know that he’s very, very, very unlikely to pay out. (For instance, there’s a very slim chance that he subscribes to a kind of honesty which leads him to do things he says he’ll do, and therefore doesn’t follow minimax.) But in Pascal’s Mugging, “very, very, very unlikely” works differently from “no chance at all”.
That does not matter. If you think it is a scam, then the size of the promised reward does not matter. 100? A googol? A googolplex? 3^^^3? Infinite? It just doesn’t enter the calculation in the first place, since it is made up anyway.
Determining “is this a scam?” would probably have to rely on things other than the size of the reward. That avoids the whole “but there is no 1 in 3^^^3 probability because I say so” nonsense.
There’s a probability of a scam; you’re not certain that it is a scam. The small probability that you are wrong about it being a scam gets multiplied by the large amount.
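To make that clash concrete, here is a toy expected-value calculation; the specific credence is invented purely for illustration:

```latex
% Toy numbers (assumed, not from the thread): even an absurdly small
% credence in "this is not a scam" is swamped by the promised payoff.
\[
\underbrace{10^{-50}}_{P(\text{pays out})}
\times
\underbrace{3\uparrow\uparrow\uparrow 3}_{\text{promised utils}}
\;\gg\;
\underbrace{1}_{\text{utils lost by paying}}
\]
% So a naive expected-utility maximizer hands over the money, however
% confident it is that the offer is a scam.
```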
What if the probability of it being a scam is a function of the amount offered?
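One way to cash that out (my sketch, not something stated in the comment): let the payout probability fall at least as fast as the promised amount grows, so the expected gain stays bounded:

```latex
% Sketch of a payoff-dependent penalty: if the probability of actually
% receiving N utils is bounded by c/N, the expected gain is bounded by c.
\[
P(\text{pays out} \mid \text{offer of } N \text{ utils}) \le \frac{c}{N}
\quad\Longrightarrow\quad
\mathbb{E}[\text{gain}] \le \frac{c}{N} \cdot N = c
\]
```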
There seems to be this idea on LW that the probability of it not being a scam can only decrease with the Kolmogorov complexity of the offer. If you accept this idea, then making the probability a function of the amount doesn’t help you.
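To spell out why it doesn’t help under that assumption (my gloss on the standard argument): a complexity-based prior can only penalize the mugger’s claim by roughly a factor of 2^{-K}, and 3^^^3 has a very short description, so K is small and the penalty cannot keep up with the payoff:

```latex
% Assuming a Solomonoff-style prior: the hypothesis "the mugger really
% delivers 3^^^3 utils" has a short description, say K bits with K on the
% order of a few hundred, so its prior weight is at least about 2^{-K}.
\[
P(\text{pays out}) \;\gtrsim\; 2^{-K},
\qquad
2^{-K} \cdot 3\uparrow\uparrow\uparrow 3 \;\gg\; 1
\quad\text{because}\quad
3\uparrow\uparrow\uparrow 3 \gg 2^{K}.
\]
```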
If you accept that the probability can decrease faster than that, then of course that’s a solution.
I can’t come up with any reasons why that should be so.
I suppose that people who talk about Kolmogorov complexity in this setting are thinking of AIXI or some similar decision procedure.
Too bad that AIXI doesn’t work with unbounded utility, as expectations may diverge or become undefined.
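A minimal illustration of that divergence (a St. Petersburg-style construction, assumed for the sake of example rather than taken from AIXI’s exact definition): with unbounded utilities, even rapidly shrinking probabilities need not make the expectation converge:

```latex
% Hypothesis h_n gets prior weight 2^{-n} but promises utility 2^{2^n};
% each term of the expectation is 2^{2^n - n}, which grows without bound,
% so the expected utility diverges.
\[
\mathbb{E}[U] \;=\; \sum_{n=1}^{\infty} 2^{-n} \cdot 2^{2^{n}}
\;=\; \sum_{n=1}^{\infty} 2^{2^{n}-n} \;=\; \infty
\]
```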