I think you are overestimating the probabilities there: it is only Pascal’s Mugging if you fail to assign a low enough probability to the mugger’s claim. The problem, in my opinion, is not how to deal with tiny probabilities of vast utilities, but how to avoid assigning overly high probabilities to events so unlikely that their probabilities defy our brain’s capacity to represent them (like “magic powers from outside the Matrix”).
The problem here is that you’re not “attributing” a probability; you’re calculating a probability through Solomonoff Induction. In this case the probability is far too low to actually calculate, but simple observation tells us this much: the Solomonoff probability of a hypothesis is given by 2^(-K), where K is its Kolmogorov complexity, and that is mere exponentiation. There’s pretty much no way mere exponentiation can catch up to four up-arrows in Knuth’s up-arrow notation: the mugger’s claim can be written down in a few dozen characters, so its Kolmogorov complexity K is small, and 2^(-K) therefore can’t come anywhere near as small as 1/3^^^^3.
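To make that growth-rate point concrete, here is a minimal sketch (mine, not from the original exchange) of Knuth’s up-arrow notation; the function name and sample values are illustrations only, since anything past 3^^3 is far too large to compute:

```python
def up_arrow(a, n, b):
    """Knuth's up-arrow: a (n arrows) b; n = 1 is ordinary exponentiation."""
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return up_arrow(a, n - 1, up_arrow(a, n, b - 1))

print(up_arrow(3, 1, 3))               # 3^3 = 27
print(up_arrow(3, 2, 3))               # 3^^3 = 3^27 = 7625597484987
print(up_arrow(3, 2, 3).bit_length())  # 43: a mere 43-bit K already "covers" 3^^3
# 3^^^3 is a power tower of 7625597484987 threes, and 3^^^^3 iterates THAT;
# for 2**(-K) to reach 1/3^^^^3, K would need to be about log2(3^^^^3) bits,
# while the claim "3^^^^3" itself fits in a handful of characters.
```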
All would be well and good if we could simply assign probabilities to be whatever we want; then we could just set the probability of Pascal’s-Mugging-type situations as low as we wanted. To an extent, since we’re humans and thus unable to compute the actual probabilities, we can still do this. But paradoxically enough, as a mind’s computational ability increases, so does its susceptibility to these types of situations. An AI actually able to compute or approximate Solomonoff Induction would find that the probability is vastly outweighed by the utility gain, which is part of what makes the problem a problem.
I also feel that, as with Pascal’s wager, this situation can be mirrored (and the expected utilities thereby cancelled out) if you simply think: “What if he intends to kill those people only if I abide by his demand?” As with Pascal’s wager, the possibilities aren’t only what the wager stipulates: when dealing with infinities in decision-making (I’m not sure one can say “the probability of this event doesn’t overcome the vast utility gained” with such numbers), you probably have another infinity, which you also can’t evaluate, hiding behind the question.
But do the two possibilities really sum to zero? These are two different situations we’re talking about here: “he kills them if I don’t abide” versus “he kills them if I do”. If a computationally powerful enough AI calculated the probabilities of these two possibilities, would they really, miraculously, cancel out exactly? The probabilities would likely mostly cancel, true, but even the smallest remainder would still be enough to trigger the monstrous utilities carried by a number like 3^^^^3. If an AI actually carries out the calculation, without any a priori desire that the probabilities should cancel, can you guarantee that they will? If not, the problem persists.
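To put toy numbers on that worry (all invented; the real quantities are incalculable): even if the two probabilities agreed to a hundred decimal places, the leftover would dominate. A stand-in payoff of 10^200, absurdly smaller than 3^^^^3, already makes the point:

```python
from fractions import Fraction

# Hypothetical near-cancelling probabilities, agreeing to 100 decimal places:
p_kill_if_refuse = Fraction(1, 10**50) + Fraction(1, 10**100)
p_kill_if_pay    = Fraction(1, 10**50)
residual = p_kill_if_refuse - p_kill_if_pay   # 10^-100: "almost" cancelled

payoff = 10**200   # stand-in for 3^^^^3, which no computer could represent
print(residual * payoff)   # 10^100 expected lives at stake: not cancelled at all
```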
Also, your remark on infinities in decision-making is well-taken, but I don’t think it applies here. As large as 3^^^^3 is, it’s nowhere close to infinity. As such, the sort of problems that infinite utilities pose, while interesting in their own right, aren’t really relevant here.
You’re only “calculating a probability through Solomonoff Induction” if complexity is the only thing affecting the probability. If there are other factors that can reduce the probability, they can push it lower still. For instance, a lying mugger can increase his odds of extorting money from a naive rationalist by increasing the size of the purported payoff, so a large payoff is better evidence of a lying mugger than a small payoff.
Additional factors may very well reduce the probability. The question is whether they reduce it by enough. Given how enormously large 3^^^^3 is, I’m practically certain they won’t. And even if you somehow manage to come up with a way to reduce the probability by enough, there’s nothing stopping the mugger from simply adding another up-arrow to his claim: “Give me five dollars, or I’ll torture and kill 3^^^^^3 people!” Then your probability reduction will be rendered pretty much irrelevant. And if you miraculously find a way to reduce the probability again to account for the enormous increase in utility, the mugger will simply add yet another up-arrow. So ad hoc probability reductions don’t work well here, because the mugger can always overcome them by making his number bigger; what’s needed is a probability penalty that scales with the size of the mugger’s claim: a penalty that can always pull the expected utility of his offer back down to ~0. Factors independent of the size of his claim, such as the probability that he’s lying (since he could be lying no matter how big or how small his number actually is), are unlikely to accomplish this.
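Here is a toy version of that arms race (every number below is an invented stand-in; the real quantities don’t fit in a machine). A size-independent discount, however brutal, is a single multiplication, and one more up-arrow buries it:

```python
from fractions import Fraction

prior = Fraction(1, 10**50)            # whatever the complexity prior gave
ad_hoc_penalty = Fraction(1, 10**500)  # a brutal but size-independent discount

payoff = 10**400                       # stand-in for the current claim
print(prior * ad_hoc_penalty * payoff < 1)         # True: the discount "works"...

payoff = payoff ** 4                   # ...until the mugger adds an up-arrow
print(prior * ad_hoc_penalty * payoff > 10**1000)  # True: buried again
```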
“such as the probability that he’s lying (since he could be lying no matter how big or how small his number actually is)”
He could be lying regardless of the size of the number, but the probability that he is lying would still be affected by the size of the number. A larger number is more likely to convince a naive rationalist than a smaller number, precisely because believing the larger number means believing there is more utility. This makes larger numbers more beneficial to fake muggers than smaller numbers. So the larger the number, the lower the chance that the mugger is telling the truth. This means that changing the size of the number can decrease the probability of truth in a way that keeps pace with the increase in utility that being true would provide.
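As a toy model of that “keeps pace” claim (the 1/U scaling law is my own assumption, purely for illustration): if the probability of honesty falls inversely with the claimed utility, the expected utility of paying up stays pinned at a constant no matter how many arrows are added:

```python
from fractions import Fraction

def p_truth(claimed_utility, c=Fraction(1, 1000)):
    # Hypothetical scaling law: bigger claims are exactly what liars profit
    # from most, so honesty odds fall in proportion to the claimed payoff.
    return c / claimed_utility

for claimed_utility in [10**3, 10**9, 10**100]:
    expected = p_truth(claimed_utility) * claimed_utility
    print(claimed_utility, expected)   # expected utility is always c = 1/1000
```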
(Actually, there’s an even more interesting factor that nobody ever brings up: even genuine muggers must have a distribution of numbers they are willing to use, and that distribution must peak at some finite value, since a uniform distribution over all numbers is impossible. If the fake mugger keeps adding arrows, he will go past this peak, and a rationalist’s estimate that he is telling the truth should go down because of that as well.)
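A sketch of that parenthetical, with an assumed geometric prior over how many up-arrows an honest mugger would actually use (the distribution and its parameter are invented for illustration):

```python
def p_arrows_if_honest(k, q=0.5):
    # P(an honest mugger's threat uses exactly k up-arrows), geometric prior
    return (1 - q) * q ** k

for k in range(8):
    print(k, p_arrows_if_honest(k))
# The mode sits at a small, finite k; every arrow the mugger adds past it
# makes his threat look less like anything an honest mugger would say.
```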
Is this simply one statement? Is Kolmogorov complexity additive over multiple statements that must all be true at once?
Or is it possible that we can calculate the probability as a chain of complexities, something like the following:
Let s1, s2, … be the statements, all of which need to be true: magic powers, the Matrix, etc., with complexities x1, x2, … Are they simply treated as one statement with a single complexity x, so that the probability is 2^(-x)? Or do the complexities add, giving 2^(-(x1 + x2 + …)) = 2^(-x1) · 2^(-x2) · …? Or do they compound into a tower, 2^(-2^(2^…))?
And if they do compound into a tower, does simply calculating the probability that way solve the problem?
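For what it’s worth, here is a quick numeric check of the middle option, under the standard understanding that the complexity of a conjunction is at most roughly the sum of its parts’ complexities (up to an additive constant), so probabilities multiply rather than tower; the bit counts are arbitrary stand-ins:

```python
k1, k2, k3 = 40, 55, 23   # invented complexities for "matrix", "magic powers", ...

p_via_sum = 2.0 ** -(k1 + k2 + k3)
p_via_product = (2.0 ** -k1) * (2.0 ** -k2) * (2.0 ** -k3)
print(p_via_sum == p_via_product)   # True: 2^-(a+b+c) == 2^-a * 2^-b * 2^-c

# A tower 2^(-2^(2^...)) would require exponentiating complexities, which
# nothing in the definition of the prior licenses; and summing a few hundred
# bits still cannot produce a number as small as 1/3^^^^3.
```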
Point taken on the summation of the possibilities; they might not sum to zero.
Also, does invoking “magic powers” amount to invoking an infinity? It basically says nothing except “I can do whatever I want”.