It’s not just “very large payoff and very small probability”. Take a bunch of events of the form “N people get tortured”. If you use Solomonoff induction, your prior probability for these events will roughly decrease in log(N). However, if you aggregate human suffering linearly, the utility you assign to these events increases in N (otherwise, find events with utility linear in N—they exist if your utility function is unbounded). Therefore, your expected utility diverges as N goes to infinity. So for any certain cost, there’s a number N large enough for you to pay the cost.
But this is not the case for the lottery. The payoff and its probability are known, and the expected gain is much less than the cost.
The problem with Pascal’s Mugging is that we think the expected payoff of certain kinds of actions is huge (which probably reveals a problem with how we compute expected utility). The problem with lotteries is that we know the expected payoff is negative, but play anyway.
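The divergence argument above can be sketched numerically. This is a toy model of my own, not something from the thread: assume the prior on “N people get tortured” falls off like 1/N (i.e. a description length of about log₂ N bits) while disutility grows linearly in N.

```python
# Toy model (an illustration, not actual Solomonoff induction):
# prior(N) ~ 1/N, matching a description length of ~log2(N) bits,
# while disutility(N) = N. Each term of the expected-disutility sum
# then contributes a constant, so the partial sums grow without bound.
def partial_expected_disutility(n_max):
    return sum(N * (1.0 / N) for N in range(1, n_max + 1))

for n_max in (10, 1_000, 100_000):
    print(n_max, partial_expected_disutility(n_max))  # grows linearly in n_max
```

However fast you make the prior fall off, as long as it is computable the thread’s later argument shows the utility term eventually outruns it, so the divergence survives.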
That isn’t correct; the prior will decrease more slowly than any computable function that monotonically decreases to zero.
Doesn’t it not decrease at all? After all, “3^^^^3 people get tortured” is more likely than “m people get tortured”, where m is some complicated integer between 0 and 3^^^^3 admitting no short description.
It doesn’t monotonically decrease but, for any probability p, there exists a number of people m such that the probability of “n people get tortured” is less than p for any n > m.
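The non-monotonicity can be illustrated with a toy description-length prior (my own stand-in, not real Solomonoff induction): a huge number with a short description gets more prior weight than a smaller number that needs all of its digits spelled out.

```python
# Toy stand-in for a description-length prior: code length = length of the
# shortest representation we happen to know, prior = 2^(-length), unnormalized.
def prior(representation):
    return 2.0 ** -len(representation)

big_but_simple = 2**64                      # representable as "2**64": 5 symbols
smaller_but_random = 12345678901234567890   # needs all 20 of its digits

print(smaller_but_random < big_but_simple)             # True: it is the smaller number
print(prior("2**64") > prior("12345678901234567890"))  # True: yet it gets less prior
```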
Huh, I don’t understand Solomonoff induction then. Explain?
Very roughly, the idea is that the prior probability that the universe is an (n+1)-state Turing machine is half the prior probability that the universe is an n-state Turing machine, whereas the most anyone can offer you in an n-state machine is BB(n), and the most they can offer you in an (n+1)-state machine is BB(n+1).
So, again very roughly, the probability that I can offer you BB(n) is roughly k2^(-n), where k is a very small constant, and so the probability that I can offer you m utility is roughly k2^(-inverseBB(m)).
InverseBB(m) is a monotonically increasing function that increases more slowly than any monotonically increasing computable function, so k2^(-inverseBB(m)) is a monotonically decreasing function that decreases more slowly than any computable monotonically decreasing function.
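The shape of that bound can be illustrated with the few Busy Beaver values that are actually known. Here I take BB(n) to be Σ(n), the maximum number of 1s a halting n-state, 2-symbol machine can leave on the tape, with known values (1, 4, 6, 13, 4098) for n ≤ 5; the constant k is a stand-in, since the real one depends on the machine encoding.

```python
# Known values of Sigma(n) for n-state, 2-symbol Turing machines (n <= 5);
# nothing beyond this is known, so the sketch cannot be extended.
KNOWN_BB = {1: 1, 2: 4, 3: 6, 4: 13, 5: 4098}

def inverse_bb(m):
    """Least n with BB(n) >= m, within the known table."""
    for n in sorted(KNOWN_BB):
        if KNOWN_BB[n] >= m:
            return n
    raise ValueError("m is beyond the known Busy Beaver values")

k = 2.0 ** -10  # stand-in constant; the real k depends on the encoding
for m in (3, 10, 100, 4000):
    # The probability of being offered m utility falls like k * 2^(-inverseBB(m)),
    # which stays flat for enormously long stretches of m.
    print(m, inverse_bb(m), k * 2.0 ** -inverse_bb(m))
```

Note how inverseBB(m) sits at 5 for every m from 14 up to 4098: that long plateau is exactly the sense in which the probability decreases more slowly than any computable function.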
Ooo, I get it! I was thinking of writing out the utility (log_base), but this is more general. Thanks!
To add to that explanation: you can prove that the number of people who can be simulated on a halting n-state Turing machine has no computable upper bound by considering a Turing machine that alternates between some computation and simulating a human every fixed number of steps. If we could compute an upper bound on the number of humans simulated, we could determine whether the TM halts by waiting for that many people to be simulated, similarly to how we could use BB(n) to determine whether any n-state TM halts if we knew its value.
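That alternation argument can be sketched as follows; this is a hedged sketch, with `step` and the one-human-per-k-steps schedule as stand-ins of my own.

```python
# Sketch: interleave an arbitrary computation with "simulate one human"
# every k steps. If a computable upper_bound(n) on humans simulated by
# halting n-state machines existed, we could decide halting: run until the
# machine halts or exceeds upper_bound(n) humans -- a contradiction.
def run_interleaved(step, k=3, max_steps=1_000_000):
    """`step()` returns True once the underlying computation halts."""
    humans = 0
    for t in range(1, max_steps + 1):
        if step():
            return ("halted", humans)
        if t % k == 0:
            humans += 1  # stand-in for simulating one human
    return ("still running", humans)

# A computation that halts after 10 steps gets 3 humans simulated along the way.
counter = iter(range(10))
print(run_interleaved(lambda: next(counter, None) is None))  # -> ('halted', 3)
```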
The expected gain is less than the cost for the mugging as well—otherwise, it is not a mugging but an invitation to make a wise investment. As for the probability of the lottery payout being known, doesn’t that depend on which lottery, and which punter, we are talking about?