Religions, chain letters (maybe), SIAI, FHI - oh, and the lottery.
Ironically, Yudkowsky and Bostrom have both written articles about Pascal’s Mugging.
There are plenty of memetic viruses out there—from urban legends to World of Warcraft.
Not the lottery. Its expected payoff is known to be negative. It doesn’t rest on expected utility divergence for unlikely important events, just on regular stupidity.
Eliezer and Nick make the reverse argument about the Singularity: it’s not unlikely enough to count as a mugging.
The lottery promises people a very small chance of a very large payoff—in return for some money up front.
I think you need to explain in more detail how that is significantly different from the pitch of a Pascal’s Mugger—which usually doesn’t make too much sense either.
Yes, for example here.
Remember that it is not the probability of the S-word we are talking about, but the chance of a particular donation making much of a difference.
It’s not just “very large payoff and very small probability”. Take a bunch of events of the form “N people get tortured”. If you use Solomonoff induction, your prior probability for these events will roughly decrease with log(N). However, if you aggregate human suffering linearly, the utility you assign to these events increases linearly in N (otherwise, find events whose utility is linear in N—they exist if your utility function is unbounded). Therefore, your expected utility diverges as N goes to infinity, so for any certain cost there is an N large enough that you should pay it.
But this is not the case for the lottery. The payoff and its probability are known, and the expected gain is much less than the cost.
The problem with Pascal’s Mugging is that we think the expected payoff of certain kinds of actions is huge (which probably reveals a problem with how we compute expected utility). The problem with lotteries is that we know the expected payoff is negative, but play anyway.
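(A toy numeric sketch of that divergence, for concreteness. The bit-length prior below is my own crude, normalizable stand-in for a Solomonoff-style prior, and the cutoffs are arbitrary; nothing here is from the thread itself.)

```python
# Toy sketch: a normalizable complexity-style prior under which
# expected utility still diverges when utility is linear in N.
# prior(N) = 2^(-2 * bitlength(N)): the 2^(b-1) integers of bit
# length b get total prior 2^(-b-1), so the whole prior sums to 1/2.

def prior(n: int) -> float:
    """Crude complexity prior: penalize N by twice its bit length."""
    return 2.0 ** (-2 * n.bit_length())

def partial_expected_utility(cutoff: int) -> float:
    """Sum of utility(N) * prior(N), with utility(N) = N."""
    return sum(n * prior(n) for n in range(1, cutoff + 1))

for cutoff in (10, 1_000, 100_000):
    print(cutoff, round(partial_expected_utility(cutoff), 3))
# Each bit-length class contributes at least 1/4 to the sum, so the
# partial sums grow without bound: the expectation has no finite value.
```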
That isn’t correct; the prior will decrease more slowly than any computable function that monotonically decreases to zero.
Doesn’t it not decrease at all? After all, “3^^^^3 people get tortured” is more likely than “m people get tortured”, where m is some complicated integer between 0 and 3^^^^3 admitting no short description.
It doesn’t decrease monotonically, but for any probability p there exists a number m such that the probability of “n people get tortured” is less than p for all n > m; the probabilities must sum to at most 1, so the tail values get arbitrarily small even without monotonicity.
Huh, I don’t understand Solomonoff induction then. Explain?
Very roughly, the idea is that the prior probability that the universe is an (n+1)-state Turing machine is half the prior probability that the universe is an n-state Turing machine, whereas the most anyone can offer you in an n-state machine is BB(n) but the most they can offer you in an (n+1)-state machine is BB(n+1).

So, again very roughly, the probability that I can offer you BB(n) is roughly k2^(-n), where k is a very small constant, and so the probability that I can offer you m utility is roughly k2^(-inverseBB(m)).

InverseBB(m) is a monotonically increasing function that increases more slowly than any monotonically increasing computable function, so k2^(-inverseBB(m)) is a monotonically decreasing function that decreases more slowly than any computable monotonically decreasing function.
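(A compact restatement of that rough argument, under the same simplifications; the inverse-BB notation just makes explicit what the comment describes:)

```latex
% Rough restatement of the argument above (k a small constant):
\[
  P(\text{the universe is an $n$-state TM}) \;\approx\; k \, 2^{-n},
  \qquad
  \mathrm{BB}^{-1}(m) \;:=\; \min\{\, n : \mathrm{BB}(n) \ge m \,\},
\]
\[
  P(\text{someone can offer you $m$ utility})
  \;\approx\; k \, 2^{-\mathrm{BB}^{-1}(m)}.
\]
% Since BB(n) eventually exceeds every computable function, BB^{-1}(m)
% grows more slowly than every unbounded monotone computable function,
% so this bound decays more slowly than any computable function that
% decreases to zero.
```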
Ooo, I get it! I was thinking of writing out the utility (log_base), but this is more general. Thanks!
To add to that explanation: you can prove that the number of people who can be simulated on a halting n-state Turing machine has no computable upper bound by considering a Turing machine that alternates between some computation and simulating a human every fixed number of steps. If we could compute an upper bound on the number of humans simulated, we could decide whether the TM halts by waiting for that many people to be simulated, similarly to how we could use BB(n) to determine whether any n-state TM will halt if we knew its value.
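(Spelling that reduction out slightly, as I read it; the overhead function g is a hypothetical bookkeeping device, not part of the original argument:)

```latex
% Proof sketch (paraphrase; $g$ is a hypothetical computable
% state-count overhead for the construction).
Suppose a computable $f$ bounded the number of people any halting
$n$-state TM can simulate. Given a TM $M$ with $n$ states, build $M'$
with $g(n)$ states that interleaves one step of $M$ with the simulation
of one person, so that if $M$ halts after $T$ steps, $M'$ halts after
simulating about $T$ people. To decide whether $M$ halts, run it for
$f(g(n)) + 1$ steps: if it has not halted by then, $M'$ would simulate
more than $f(g(n))$ people, so $M'$ never halts, and hence neither does
$M$. This decides the halting problem, a contradiction.
```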
The expected gain is less than the cost for the mugging as well—otherwise it is not a mugging, but an invitation to make a wise investment. As for the probability of the lottery payout being known: doesn’t that depend on which lottery, and which punter, we are talking about?
It’s easy to calculate the expected returns from buying a lottery ticket, and they’re almost always negative. The psychology behind them is similar to a P-mugging, but only because people aren’t very good at math—eight-digit returns are compared against a one-digit outlay and scope insensitivity issues do their dirty work.
P-muggings like the one Eliezer described work differently: they postulate a return in utility (or, in some versions, avoided disutility) so vast that the small outlay in utility is meant to produce a positive expected return, as calculated by our usual decision theories, even after factoring in the very high probability that the P-mugger is lying, mistaken, or crazy. Whether or not it’s possible for such a setup to be credible is debatable; as given it probably wouldn’t work well in the wild, but I’d expect that to be due primarily to the way human risk aversion heuristics work.
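(For concreteness on the lottery side, a back-of-the-envelope expected-return calculation; the figures are made up but roughly Powerball-scale, not any specific real lottery:)

```python
# Hypothetical numbers for illustration only.
ticket_price = 2.00           # the one-digit outlay, in dollars
jackpot = 50_000_000.00       # the eight-digit payoff
p_jackpot = 1 / 300_000_000   # odds of hitting it

expected_return = p_jackpot * jackpot - ticket_price
print(f"${expected_return:.2f} per ticket")  # about -$1.83: negative EV
```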
In dollars—but not expected utilons, obviously. People generally play the lottery because they want to win.
True enough, but that distinction represents a can of worms that I don’t really want to open here. Point is, you don’t need that sort of utilitarian sleight of hand to get Pascal’s mugging to work—the vulnerability it exploits lies elsewhere, probably in the way Solomonoff-based decision theory bounds its expectations.
By this logic any charity is a Pascal’s mugging.
I figure Pascal’s mugging additionally requires a chance of a very large utility delta being involved.
We can separate having any impact in the actual world, e.g. on the scale of a saved life or more, from solving a large part of the total problem. A $1 VillageReach contribution is quite unlikely to save a life, but $100,000 would be quite likely to save 100 lives. Either way, there is little chance of making a noticeable percentage decrease in global poverty or disease rates (although there is some, e.g. by boosting the new institutions and culture of efficient philanthropy). I think political contributions and funding for scientific (including medical) research would be a better comparison: even large donations are unlikely to deliver actual results, although we think that on the whole the practice of funding medical research is quite likely to pay off, even if any particular researcher is unlikely to cure cancer.
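(The arithmetic behind that comparison, as a sketch; the cost-per-life figure is an assumption chosen to match the “100 lives per $100,000” in the comment, not an actual VillageReach estimate:)

```python
# Illustrative arithmetic only; cost_per_life is a hypothetical figure.
cost_per_life = 1_000.0  # dollars per life saved (assumed)

for donation in (1.0, 100_000.0):
    expected_lives = donation / cost_per_life
    print(f"${donation:,.0f} -> {expected_lives:g} expected lives saved")
# $1 buys ~0.001 expected lives (very unlikely to save anyone outright);
# $100,000 buys ~100 expected lives (quite likely to save many).
```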
Could you elaborate on how those fit into the Pascal’s Mugging pattern? Religion and chain letters were covered already, but some of the others you gave aren’t so clear. (And even on the ones where I have a good idea what you mean, it would help to see the mapping explicitly.)
Remember, the challenge isn’t to find general mind viruses or high-fitness memes, but rather, memes that spread because of a PM-like threat/promise.
Edit: D’oh! The topic creator did ask for mind viruses, and the request was in the very comment I responded to! Still, I think the purpose of the request was mainly to elicit PM-type mind viruses, otherwise we’ll just be uninterestingly listing popular stuff.
The recent GiveWell interview drew the parallel in the case of the SIAI:
http://commonsenseatheism.com/wp-content/uploads/2011/05/siai-2011-02-III.pdf

“I accept a lot of the controversial premises of your mission, but I’m a pretty long way from sold that you have the right team or the right approach. Now some have argued to me that I don’t need to be sold—that even at an infinitesimal probability of success, your project is worthwhile. I see that as a Pascal’s Mugging and don’t accept it; I wouldn’t endorse your project unless it passed the basic hurdles of credibility and workable approach as well as potentially astronomically beneficial goal.”
I suspect that isn’t quite right. The FHI endorses the “maxipok” principle. It is more about promising hell-avoidance than heavenly benefits. I am not sure the SIAI is sold on this—and I have heard them waxing lyrical on the “heavenly benefits” side—but I expect they will agree that the position makes sense.
It’s worth noting that the SIAI representative agreed that one shouldn’t support the SIAI unless it passed those hurdles; he merely argued that it did.
To my knowledge, no SIAI employee has ever made the Pascal’s-mugging-type argument; it is a pure strawman.
So, the idea is not that the organisation accompanies its requests for donations with a confession that it is just waving high utility in front of donors in the hope of parting them from their money. That would hardly be an effective fundraising strategy, and nobody ever suggested it in the first place. The idea is more that it uses promises of very high utility to compensate for a lack of concrete success probabilities—much like Pascal’s mugger does.
If you are short of examples of them waving high utility around, perhaps see:
How Much it Matters to Know What Matters: A Back of the Envelope Calculation
Good find—and I think both promises of hell avoidance and heavenly benefits count as Pascal’s Mugging.