Short version:
Even considering only the a priori probability of them honestly threatening you, before you take into account that they actually threatened you, it’s still enough to get Pascal-mugged. The probability that they’re lying increases with the size of the threat, but not fast enough.
Long version, with math:
Consider three possibilities:
A: Nobody makes the threat
B: Someone dishonestly makes the threat
C: Someone honestly makes the threat
Note that A, B, and C form a partition; that is to say, exactly one of them is true. Technically it’s possible that more than one person makes the threat, with only one of them honest, so that B and C both happen; to rule that out, let’s say the time and place of the threat are specified as well.
C is highly unlikely. If you tried to write a program simulating a universe in which C holds, it would be long, but it would still fit within the confines of this universe. Let’s be hyper-conservative and call its K-complexity a googol bits.
There is also a piece of evidence E, consisting of someone making the threat. E is consistent with B and C, but not A. This means:
P(A&E) = 0, P(B&E) = P(B), P(C&E) = P(C)
Calculating from here:
P(C|E) = P(C&E)/P(E)
= P(C)/P(E)
≥ P(C) (since P(E) ≤ 1)
= 2^-googol
So the probability of the threatener being honest is at least 2^-googol. Since the disutility of not paying him if he is honest is more than 2^googol times the cost of paying him, it’s worthwhile to pay.
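The comparison above can be sanity-checked in log space, where 2^-googol never underflows. This is only a sketch of the arithmetic; the variable names and the ratio 2^(googol+1) are illustrative choices, not part of the original argument.

```python
# Sanity-check the expected-utility comparison in log2 space, using
# Python's arbitrary-precision integers so 2^-googol never underflows.

GOOGOL = 10**100

# Lower bound from the derivation: P(C|E) >= P(C) = 2^-googol.
log2_p_honest = -GOOGOL

# Hypothesis of the argument: the disutility of an honest, unpaid threat
# is more than 2^googol times the cost of paying. Take 2^(googol + 1).
log2_disutility_ratio = GOOGOL + 1

# Pay iff P(C|E) * disutility > cost, i.e. the log2 terms sum above zero.
print(log2_p_honest + log2_disutility_ratio > 0)  # True
```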
What you present is the basic fallacy of Pascal’s Mugging: treating the probabilities of B and of C as independent of the fact that a threat of a given magnitude is made.
Your formalism, in other words, doesn’t model the argument. The basic point is that Pascal’s Mugging can be dissolved by the same logic that defeats Pascal’s wager. Pascal concluded that believing in god A was instrumentally rational only by ignoring that there might, with equal consequences, be a god B instead who hated people who worshiped god A.
Pascal’s Mugging likewise ignores that the threatened calamity might be more likely if you accede to the mugger than if you don’t. The point of inflection is where the mugger’s making the claim becomes evidence against it rather than for it.
No commenters have engaged the argument!
The prior probability of X is 2^-(K-complexity of X). There are more possible universes where they carry out smaller threats, so the K-complexity is lower. What I showed is that, even if there were only a single possible universe where the threat was carried out, it’s still simple enough that the K-complexity is small enough that it’s worth paying the threatener.
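The 2^-K prior can be computed exactly with rationals; the complexity numbers below are made up purely for illustration, not taken from the argument.

```python
from fractions import Fraction

def prior(k_bits):
    """Solomonoff-style prior 2^-K, kept exact with rationals."""
    return Fraction(1, 2**k_bits)

# Hypothetical complexities: a universe where a small threat is carried
# out is simpler to specify than one where a huge threat is.
k_small = 1_000
k_huge = 1_000_000

# The huge threat is exponentially less probable a priori...
print(prior(k_huge) < prior(k_small))  # True
# ...but never zero, so stakes growing faster than 2^K can still dominate.
print(prior(k_huge) > 0)  # True
```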
You gave a vague argument. Rather than giving a vague counterargument along the same lines, I just ran the math directly. You can argue that P(C|E) decreases all you want, but since I found that the actual value is still too high, it clearly doesn’t decrease fast enough.
If you want the vague counterargument, it’s simple: The probability that it’s a lie approaches unity. It just doesn’t approach it fast enough. It’s a heck of a lot less likely that someone who threatens 3^^^3 lives is telling the truth than someone who’s threatening one. It’s just not 3^^^3 times less likely.
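One way to see the gap, under a toy model of my own (approximating K(n) by the length of the shortest textual expression for n): 3^^^3 has an enormous value but a tiny description, so a 2^-K prior can’t shrink anywhere near as fast as the stakes grow.

```python
# Toy model: approximate K(n) by 8 bits per character of the shortest
# textual expression for n. "3^^^3" is astronomically large, yet its
# description is only five characters.
expr = "3^^^3"
k_bits = 8 * len(expr)          # ~40 bits of description

# Under the 2^-K model, honesty is at least ~2^-40 likely here --
# nothing remotely like 1 in 3^^^3.
p_honest_lower_bound = 2.0 ** -k_bits
print(p_honest_lower_bound > 1e-13)  # True (2^-40 is about 9.1e-13)
```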
What you’re ignoring is the comparison probability. See philH’s comment.
I’m not sure what you mean.
If you mean what I think you mean, I’m ignoring it because I’m going with the worst case. Rather than tracking how the probability of someone making the threat falls more slowly than the probability of them carrying it out (which means a lower conditional probability of them carrying it out, given the threat), I’m showing that even if we assume that probability is one, it’s not enough to discount the threat.
P(Person is capable of carrying out the threat) is high enough for you to pay it off on its own. The only way for P(Person is capable of carrying out the threat | Person makes the threat) to be small enough to ignore is if P(Person makes the threat) > 1.
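The identity behind that last step is just Bayes with P(makes threat) ≤ 1. A minimal numeric check, with toy probabilities of my own:

```python
# Conditioning divides by P(evidence) <= 1, so it can only raise the
# joint probability P(capable & makes) -- never push it below it.
p_capable_and_makes = 0.02   # toy joint probability
p_makes = 0.5                # any value in (0, 1]

p_capable_given_makes = p_capable_and_makes / p_makes
print(p_capable_given_makes >= p_capable_and_makes)  # True
```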