Not the lottery. Its expected payoff is known to be negative. It doesn’t rest on expected utility divergence for unlikely important events, just on regular stupidity.
Eliezer and Nick make the reverse argument about the Singularity: it’s not unlikely enough to count as a mugging.
The lottery promises people a very small chance of a very large payoff—in return for some money up front.
I think you need to explain in more detail how that is significantly different from the pitch of a Pascal’s Mugger—which usually doesn’t make too much sense either.
Yes, for example here.
Remember that it is not the probability of the S-word we are talking about, but the chance of a particular donation making much of a difference.
It’s not just “very large payoff and very small probability”. Take a bunch of events of the form “N people get tortured”. If you use Solomonoff induction, your prior probability for these events will roughly decrease in log(N). However, if you aggregate human suffering linearly, the utility you assign to these events increases in N (otherwise, find events with utility linear in N—they exist if your utility function is unbounded). Therefore, your expected utility diverges as N goes to infinity. So for any fixed cost, there’s an N large enough that you should pay it.
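To make the divergence concrete (a rough sketch of my own, not part of the original comment: write K(N) for the complexity of N, so the Solomonoff prior weight of “N people get tortured” is about 2^(-K(N)), and restrict attention to the easily described values N = 2^k):

$$\sum_{N} 2^{-K(N)} \cdot N \;\ge\; \sum_{k} 2^{-K(2^k)} \cdot 2^k \;\ge\; \sum_{k} \frac{2^k}{k^2} \;=\; \infty,$$

using $K(2^k) \le 2\log_2 k$ for large $k$ (describing $2^k$ takes only a description of $k$, up to an additive constant). Any utility linear in N therefore makes the expectation diverge.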
But this is not the case for the lottery. The payoff and its probability are known, and the expected gain is much less than the cost.
The problem with Pascal’s Mugging is that we think the expected payoff of certain kinds of actions is huge (which probably reveals a problem with how we compute expected utility). The problem with lotteries is that we know the expected payoff is negative, but play anyway.
That isn’t correct: the prior won’t decrease like log(N); it will decrease more slowly than any computable function that monotonically decreases to zero.
Doesn’t it not decrease at all? After all, “3^^^^3 people get tortured” is more likely than “m people get tortured”, where m is some complicated integer between 0 and 3^^^^3 admitting no short description.
It doesn’t monotonically decrease, but for any probability p there exists a number of people m such that, for any n > m, the probability that n people get tortured is less than p.
Huh, I don’t understand Solomonoff induction then. Explain?
Very roughly, the idea is that the prior probability that the universe is an (n+1)-state Turing machine is half the prior probability that the universe is an n-state Turing machine, whereas the most anyone can offer you in an n-state machine is BB(n), but the most they can offer you in an (n+1)-state machine is BB(n+1).
So, again very roughly, the probability that I can offer you BB(n) is roughly k2^(-n), where k is a very small constant. So the probability that I can offer you m utility is roughly k2^(-inverseBB(m)).
InverseBB is a monotonically increasing function that increases more slowly than any monotonically increasing computable function, so k2^(-inverseBB(m)) is a monotonically decreasing function that decreases more slowly than any computable monotonically decreasing function.
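In symbols (my notation, not the commenter’s, with k the same small constant and inverseBB(m) defined as the least n with BB(n) ≥ m):

$$P(\text{payoff} \ge m) \;\approx \sum_{n:\,BB(n)\ge m} k\,2^{-n} \;=\; 2k\,2^{-\mathrm{inverseBB}(m)},$$

which, by the argument above, decays more slowly than any computable monotone function decaying to zero.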
Ooo, I get it! I was thinking of writing out the utility (log_base), but this is more general. Thanks!
To add to that explanation, you can prove that the number of people who can be simulated on a halting n-state Turing machine has no computable upper bound, by considering a Turing machine that alternates between some computation and simulating humans every fixed number of steps. If we could compute an upper bound on the number of humans simulated, we could determine whether the TM would halt by waiting for that many people to be simulated, similarly to how we could use BB(n) to determine whether any n-state TM halts if we knew its value.
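Spelled out a little more formally (my reconstruction of the same argument): let H(n) be the largest number of humans any halting n-state TM simulates, and suppose some computable f satisfied f(n) ≥ H(n) for all n. Given an arbitrary n-state machine M, build M′ with n + c states that simulates one human per step of M. If M halts after T steps, M′ halts after simulating about T humans, so T ≤ H(n + c) ≤ f(n + c). Then “run M for f(n + c) steps and check whether it has halted” would decide the halting problem, which is impossible; so no computable upper bound exists.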
The expected gain is less than the cost for the mugging as well; otherwise it is not a mugging but an invitation to make a wise investment. As for the probability of the lottery payout being known: doesn’t that depend on which lottery, and which punter, we are talking about?
It’s easy to calculate the expected returns from buying a lottery ticket, and they’re almost always negative. The psychology behind them is similar to a P-mugging, but only because people aren’t very good at math—eight-digit returns are compared against a one-digit outlay, and scope insensitivity issues do their dirty work.
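For instance (a minimal sketch with made-up but representative numbers, not any particular lottery):

```python
# Expected return of one lottery ticket; illustrative numbers only.
ticket_price = 2.00            # dollars
jackpot = 50_000_000           # the "eight-digit return"
p_win = 1 / 300_000_000        # roughly big-jackpot odds

expected_return = p_win * jackpot - ticket_price
print(f"expected return per ticket: ${expected_return:.2f}")  # about -$1.83
```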
P-muggings like the one Eliezer described work differently: they postulate a return in utility (or, in some versions, avoided disutility) so vast that the small outlay in utility is meant to produce a positive expected return, as calculated by our usual decision theories, even after factoring in the very high probability that the P-mugger is lying, mistaken, or crazy. Whether or not it’s possible for such a setup to be credible is debatable; as given it probably wouldn’t work well in the wild, but I’d expect that to be due primarily to the way human risk aversion heuristics work.
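Schematically (my notation, not Eliezer’s): with outlay ε, claimed payoff U, and probability p that the mugger is telling the truth, the pitch is engineered so that

$$p \cdot U > \varepsilon \quad \text{even for astronomically small } p,$$

so a naive expected-utility calculation pays up; the lottery, by contrast, has p·U < ε as common knowledge.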
In dollars—but not expected utilons, obviously. People generally play the lottery because they want to win.
True enough, but that distinction represents a can of worms that I don’t really want to open here. Point is, you don’t need that sort of utilitarian sleight of hand to get Pascal’s mugging to work—the vulnerability it exploits lies elsewhere, probably in the way Solomonoff-based decision theory bounds its expectations.
By this logic any charity is a Pascal’s mugging.
I figure Pascal’s mugging additionally requires a chance of a very large utility delta being involved.
We can separate having any impact, e.g. on the scale of a saved life or more, in the actual world from solving a large part of the total problem. A $1 VillageReach contribution is quite unlikely to save a life, but $100,000 would be quite likely to save 100. Either way, there is little chance of making a noticeable percentage decrease in global poverty or disease rates (although there is some, e.g. by boosting the new institutions and culture of efficient philanthropy, etc). I think political contributions and funding for scientific (including medical) research would be a better comparison, where even large donations are unlikely to deliver actual results (although we think that on the whole the practice of funding medical research is quite likely to pay off, even if any particular researcher is unlikely to cure cancer, etc).
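A quick model of that first point (my sketch: treating lives saved as Poisson-distributed, with the roughly $1,000-per-life figure implied by the comment’s own numbers, purely for illustration):

```python
from math import exp

COST_PER_LIFE = 1_000  # assumed, implied by "$100,000 ... save 100"

def p_saves_at_least_one(donation):
    """P(at least one life saved), modeling lives saved as Poisson."""
    expected_lives = donation / COST_PER_LIFE
    return 1 - exp(-expected_lives)

print(p_saves_at_least_one(1))        # ~0.001: a $1 gift almost never saves a life
print(p_saves_at_least_one(100_000))  # ~1.0, with ~100 lives saved in expectation
```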