Observed Pascal’s Mugging
When I first read about Pascal’s Mugging, it stuck in my brain. I knew I had seen something like it occur in real life, though I couldn’t quite place the specifics at the time. Then it hit me: chain letters are a classic example of this. They promise that if you copy the letter X times and distribute it, you will have good luck, your hair will grow back, etc., but if you don’t distribute it, you will have bad luck, pianos will fall on you, etc.
I thought a little further about this, because I knew chain letters weren’t what had originally bothered me about this sort of thing (no one sends actual mail anymore); chain letters are just the obvious example that occurs after five minutes of thought. The thing that had actually stuck in my brain was a phenomenon particular to the internet. On YouTube and 4chan, there are often messages posted that are at least superficially similar to the following:
In [year], [someone] was killed after [spectacularly gruesome details]. Post a copy of this message in the next 10 minutes, or the ghost of [someone] will find you tonight and [spectacularly gruesome details].
These threads have hundreds of replies almost every time I see them on 4chan, and on YouTube they spread from one video to the next while actual comments become buried. This suggests that at least a significant fraction of the people who see these make the mental calculation that if they do not post it, they run a small chance of suffering incredible disutility, but if they do post it, they suffer only a few seconds of lost time. (They’re also much more effective at spreading than chain letters, because the transmission time and cost of spreading are much lower.)
This is one of the best examples of a real-life Pascalian mugging that I’ve found, and one of the arguments for the benefits of raising the sanity waterline: the man-hours lost to this sort of thing may be non-trivial. It also demonstrates one crude way of hacking the human brain in order to accomplish a task. Essentially, these postings and all of their variants are simple memetic mind viruses. On 4chan especially, they can be seen to outcompete actual posting when placed side by side. No one wants to be killed in spectacularly gruesome detail, and so they spread with ease and can dominate threads when the conditions are right.
Anyone else have examples of Pascal’s Mugging or memetic viruses?
It seems like the term ‘Pascal’s Mugging’ is having its meaning degraded.
I believe the original article that introduced the idea was careful to ensure that a simple expected utility calculation would show that accepting the offer was rational. To do this it deliberately exploited explosive functions like Knuth’s up-arrow notation to make sure the utilities grew faster than the probabilities shrank. That is what made it scary: by our current understanding, an ideal rational agent would hand over the money, which calls into question what we mean by ‘ideal rational agent’.
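A toy sketch (my own illustration, not from the original article) of why that construction works: if the named utilities grow faster than the prior probability penalty shrinks, the partial sums of probability-times-utility grow without bound. Here `2**-n` stands in for a complexity penalty and `4**n` for a fast-growing payoff; both are made-up stand-ins, not the actual Solomonoff prior or up-arrow values.

```python
# Toy illustration: prior(n) = 2**-n shrinks, but utility(n) = 4**n
# grows faster, so the expected-utility partial sums diverge.

def partial_expected_utility(max_n):
    """Sum prior(n) * utility(n) for n = 1..max_n."""
    return sum(2 ** -n * 4 ** n for n in range(1, max_n + 1))

for max_n in (5, 10, 20):
    # Each term equals 2**n, so the partial sums keep doubling and beyond.
    print(max_n, partial_expected_utility(max_n))
```

With these stand-in functions each term is `2**n`, so no finite cost can outweigh the sum once enough terms are considered; that is the shape of the original mugging argument.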
The example given is NOT a Pascal’s mugging. One life, even if it is my own, has nowhere near enough utility to overcome the astonishingly tiny probability of the message being correct. Even if I cared about nothing else, there are more effective uses of those few seconds in terms of increasing my life expectancy. The people who comment are being irrational (assuming they are taking it seriously and not just playing along for fun).
All the other examples given in the thread are the same (with the possible exception of religion). Please can we make an effort to keep the original meaning? It is more interesting as a concept for refining our understanding of how rational agents would work.
Fair point.
However, the message could easily be modified to exclude the supernatural (the poster has hacked youtube and is monitoring the responses for IP addresses of those who viewed the video and those who posted the comment), thus raising the priors, and the consequences could be greater (if you don’t repost this, you, everyone you know, and everyone they know, will be killed, and all cryonics institutes will be invaded, and the bodies of those interred within removed, warmed, and allowed to decompose), thus raising the payoff.
In the ‘most inconvenient world’ it constitutes a mugging. Your objection to the example given does not mean that it is not isomorphic to Pascal’s mugging given suitable conditions.
Oh, it could be made a mugging, I don’t disagree.
I would still say your suggestion doesn’t work: that consequence is huge, but the tiny probability still brings it down. At an estimate you’ve threatened maybe a few thousand people. To pull off a threat like that would require a huge amount of resources (at a guess, nothing short of a government could manage it). I would guess that the number of YouTube commenters capable of that is much less than one-thousandth of the number of YouTube commenters capable of killing a single person, so that threat is less scary than:
For small threats the probability it will be followed through on goes down faster than the utility at stake goes up. It is a non-trivial proposition that this trend ever reverses, which is why the original post is careful to make the important arguments to do with Busy Beaver numbers and Solomonoff Induction.
To make a real mugging you would have to go really big, threaten 3^^^^3 people who live outside the matrix and then we might be talking.
If your claim was ‘I can imagine something like this which would be a Pascal’s Mugging’ then you should have said that. What you actually said was that it was one, and you don’t get to use ‘least convenient possible world’ when you’re making statements about the actual world. You made a false claim, encouraged others to make similar claims and contributed to the loss of a useful term from LW vocabulary.
I get to use ‘least convenient possible world’ when I said that there are:
and gave an example of one claim that could be made.
I think that conversation would be best continued in messages.
I think the thought process of people who spread those YouTube things is less “Oh, if I do this it will reduce my chance of [spectacularly gruesome details], so I’d better do it,” than “LOL, I should put this more places to be funny.”
I don’t really think so. If your hypothesis were true, then there’d probably be different types of spam around; there wouldn’t be a reason why “[gruesome details]”-spam was so abundant in comparison to, say, snail ASCII art or whatever people might find funnier.
The fact that this very specific type of spam is floating around seems to be evidence that some people actually assign a probability high enough (or are scared enough; even while knowing that it’s bullshit, emotions can be strong) to share it, trying to avoid those gruesome details.
I don’t know much about the relative frequencies of different kinds of spam, so I don’t know how strong your point is. The most common meme repeated on the youtube videos I watch is of the form “[number of people who disliked this video] people are [insult related to video topic].” I spread these because I find them amusing. I’ve never encountered a piece of [gruesome details] spam, but I’m probably watching the wrong videos.
I don’t quite think those comments are meant to be amusing; rather, they profit from the division (into likers vs. non-likers) that has already happened. Post a similar comment on Rebecca Black’s “Friday” music video (which has far more dislikes than likes), such as, maybe, “10 people are looking forward to the weekend, 30000 enjoy Mondays more”, and your comment will, so my hypothesis predicts, be voted down, even though it might be as amusing as other, similar comments. I encountered quite a lot of [gruesome details] spam a few years ago, and back then, when I was still younger, those did make me nervous, although I knew it was BS. I think those comments are now regularly marked as spam, downvoted, and deleted, but the simple fact that they are downvoted shows that people don’t find them amusing, doesn’t it? [Comparison of likers and dislikers] comments are indeed more common, but I don’t think it’s because they are, from a neutral point of view, amusing, either. Maybe we should do a survey?
I agree that they profit from the division between likers and dislikers. But I think the amusement comes from finding a creative way to insult a minority. If any of that sort of comment are on the Friday video, I predict they will be making fun of the minority of likers.
The fact that they are spread shows that some people are either amused or nervous. The fact that they are downvoted shows that other people are both unamused and not nervous. I expect that the population of people who encounter them is split, with some proportion amused, some proportion nervous, and some proportion neither. Our question depends on the relative sizes of the first two factions.
I guess we can agree on that. Part of their spreading can definitely be credited to Pascal’s mugging. Other people might find them amusing (especially the original creators of those messages).
I also found different forms of Pascal’s mugging in different internet media: There are similar chain messages in instant messengers which promise you additional smileys / finding the love of your life within the next 24 hours / other [promised benefits] if you spread them. I guess these [promised benefits]-posts’ popularity can barely be attributed to people finding them amusing.
Of course, this is Pascal’s mugging for irrational people who believe in Omega / god / superstition.
But in the [gruesome details]-messages it’s not so very irrational, it depends on your utility function. Most people value themselves more than others, and rational egoists, having a utility function taking only their specific state of mind into account, could assign such a high negative utility to themselves being tortured and killed that it’s rational for them to spread the message.
Hmmm, I hadn’t thought of that. It seems as though it wouldn’t spread as effectively if that were the case, but it’s certainly possible that the motivation is merely ‘for the lulz’ rather than a successful Pascal’s mugging.
This is kind of obvious, since it relies on Pascal’s Wager (the “progenitor” of the Mugging), but it’s interesting to formalize it. (I’m borrowing from Hofstadter’s treatment in Metamagical Themas, IIRC.)
Religions basically follow the pattern of:
If you adopt [this meme], you will receive tremendous benefits in [unobservable time/place]. (The unobservability makes it functionally the same as “low probability” of good outcome.)
In order to truly adopt [this meme], you must convince others to adopt [this meme].
Pretty nice mind virus, if you can pull it off!
The obvious next step is to design a memetic virus for the spread of rationality.
If you adopt [rationality], you will receive tremendous benefits in [observable time/place].
In order to truly adopt [rationality], you must convince others to adopt [rationality].
It has the advantage of being true… no small feat.
No. Obviously you don’t necessarily have to convince others to adopt it to be truly it.
I agree with Yudkowsky here… I think that every rationalist, if s/he wants to maximize winning and fun, should convince at least some other people to be rational.
This is trivial for many ‘systems’ of thinking when they address the global state of the environment, since it’s easy to predict results when the whole world thinks alike. Luckily, though, rationality can be used in a world which hosts different (and competing!) such systems.
Out of curiosity, what makes a thought a memetic virus rather than merely a thought? Is it only the negative utility associated with it, or it’s ability to spread and replicate itself or something else?
I think the difference is that the memetic virus contains the instructions for its own spreading...
I’ve seen a few, mostly when I scan my spam folder in case I missed something from Amazon etc, but the thing that annoys me the most is how blatantly false the threats and rewards are. At the very least they could put in the effort to make it seem plausible, maybe threaten to send a virus and offer free porn, or something like that…
I mean, it’s a good thing that these criminals aren’t very bright, but it’s depressing how little effort they put into their career. I’d have thought the inherent risks involved in criminality would have filtered out the laziest people.
Religions, chain letters (maybe), SIAI, FHI - oh, and the lottery.
Ironically, Yudkowsky and Bostrom have both written articles about Pascal’s Mugging.
There are plenty of memetic viruses out there—from urban legends to World of Warcraft.
Not the lottery. Its expected payoff is known to be negative. It doesn’t rest on expected utility divergence for unlikely important events, just on regular stupidity.
Eliezer and Nick make the reverse argument about the Singularity: it’s not unlikely enough to count as a mugging.
The lottery promises people a very small chance of a very large payoff—in return for some money up front.
I think you need to explain in more detail how that is significantly different from the pitch of a Pascal’s Mugger—which usually doesn’t make too much sense either.
Yes, for example here.
Remember that it is not the probability of the S-word we are talking about, but the chance of a particular donation making much of a difference.
It’s not just “very large payoff and very small probability”. Take a bunch of events of the form “N people get tortured”. If you use Solomonoff induction, your prior probability for these events will roughly decrease in log(N). However, if you aggregate human suffering linearly, the utility you assign to these events increases in N (otherwise, find events with utility linear in N—they exist if your utility function is unbounded). Therefore, your expected utility diverges as N goes to infinity. So for any certain cost, there’s a number N large enough for you to pay the cost.
But this is not the case for the lottery. The payoff and its probability are known, and the expected gain is much less than the cost.
The problem with Pascal’s Mugging is that we think the expected payoff of certain kinds of actions is huge (which probably reveals a problem with how we compute expected utility). The problem with lotteries is that we know the expected payoff is negative, but play anyway.
That isn’t correct; the prior will decrease more slowly than any computable function that monotonically decreases to zero.
Doesn’t it not decrease at all? After all, 3^^^^3 people get tortured is more likely than m people get tortured where m is some complicated integer between 0 and 3^^^^3 admitting no short description.
It doesn’t monotonically decrease but, for any probability p, there exists a number of people m such that the probability of n people get tortured, for any n > m, is less than p.
Huh, I don’t understand Solomonoff induction then. Explain?
Very roughly, the idea is that the prior probability that the universe is an (n+1)-state Turing machine is half the prior probability that the universe is an n-state Turing machine, whereas the most anyone can offer you in an n-state machine is BB(n) but the most they can offer you in an (n+1)-state machine is BB(n+1).
So, again very roughly, the probability that I can offer you BB(n) utility is roughly k2^(-n), where k is a very small constant. So the probability that I can offer you m utility is roughly k2^(-inverseBB(m)).
InverseBB(n) is a monotonically increasing function that increases more slowly than any monotonically increasing computable function, so k2^(-inverseBB(n)) is a monotonically decreasing function that decreases more slowly than any computable monotonically decreasing function.
Ooo, I get it! I was thinking of writing out the utility (log_base), but this is more general. Thanks!
To add to that explanation, you can prove that the number of people who can be simulated on a halting n-state Turing machine has no computable upper bound, by considering a Turing machine that alternates between some computation and simulating humans every fixed number of steps. If we could compute an upper bound on the number of humans simulated, we could compute whether the TM would halt by waiting for that many people to be simulated, similarly to how we could use BB(n) to determine whether any n-state TM will halt if we knew its value.
The expected gain is less than the cost for the mugging as well—otherwise, it is not a mugging, but an invitation to make a wise investment. As for the probability of the lottery payout being known—doesn’t that depend on which lottery, and which punter, we are talking about?
It’s easy to calculate the expected returns from buying a lottery ticket, and they’re almost always negative. The psychology behind them is similar to a P-mugging, but only because people aren’t very good at math—eight-digit returns are compared against a one-digit outlay and scope insensitivity issues do their dirty work.
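To make the contrast concrete, here is the back-of-the-envelope lottery calculation with made-up but typical-looking numbers (the ticket price, jackpot, and odds are illustrative assumptions, not figures for any real lottery):

```python
# Illustrative lottery expected value: a $1 ticket, a $50,000,000
# jackpot, and 1-in-300,000,000 odds of winning (all made-up numbers).
ticket_price = 1.0
jackpot = 50_000_000.0
p_win = 1 / 300_000_000

# Expected return = probability-weighted payoff minus what you paid.
expected_return = p_win * jackpot - ticket_price
print(f"expected return per ticket: ${expected_return:.2f}")
```

The eight-digit jackpot shrinks to pennies once the nine-digit odds are applied, leaving a clearly negative expected return; no divergence argument is needed, just arithmetic that scope insensitivity makes people skip.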
P-muggings like the one Eliezer described work differently: they postulate a return in utility (or, in some versions, avoided disutility) so vast that the small outlay in utility is meant to produce a positive expected return, as calculated by our usual decision theories, even after factoring in the very high probability that the P-mugger is lying, mistaken, or crazy. Whether or not it’s possible for such a setup to be credible is debatable; as given it probably wouldn’t work well in the wild, but I’d expect that to be due primarily to the way human risk aversion heuristics work.
In dollars—but not expected utilons, obviously. People generally play the lottery because they want to win.
True enough, but that distinction represents a can of worms that I don’t really want to open here. Point is, you don’t need that sort of utilitarian sleight of hand to get Pascal’s mugging to work—the vulnerability it exploits lies elsewhere, probably in the way Solomonoff-based decision theory bounds its expectations.
By this logic any charity is a Pascal’s mugging.
I figure Pascal’s mugging additionally requires a chance of a very large utility delta being involved.
We can separate having any impact, e.g. on the scale of a saved life or more, in the actual world from solving a large part of the total problem. A $1 VillageReach contribution is quite unlikely to save a life, but $100,000 would be quite likely to save 100. Either way, there is little chance of making a noticeable percentage decrease in global poverty or disease rates (although there is some, e.g. by boosting the new institutions and culture of efficient philanthropy, etc). I think political contributions and funding for scientific (including medical) research would be a better comparison, where even large donations are unlikely to deliver actual results (although we think that on the whole the practice of funding medical research is quite likely to pay off, even if any particular researcher is unlikely to cure cancer, etc).
Could you elaborate on how those fit into the Pascal’s Mugging pattern? Religion and chain letters were covered already, but some of the others you gave aren’t so clear. (And even on the ones where I have a good idea what you mean, it would help to see the mapping explicitly.)
Remember, the challenge isn’t to find general mind viruses or high-fitness memes, but rather, memes that spread because of a PM-like threat/promise.
Edit: D’oh! The topic creator did ask for mind viruses, and the request was in the very comment I responded to! Still, I think the purpose of the request was mainly to elicit PM-type mind viruses, otherwise we’ll just be uninterestingly listing popular stuff.
The recent “givewell” interview drew the parallel in the case of the SIAI:
http://commonsenseatheism.com/wp-content/uploads/2011/05/siai-2011-02-III.pdf
I suspect that isn’t quite right. The FHI endorses the “maxipok” principle. It is more about promising hell-avoidance than heavenly benefits. I am not sure the SIAI is sold on this—and I have heard them waxing lyrical on the “heavenly benefits” side—but I expect they will agree that the position makes sense.
It’s worth noting that the SIAI representative agreed that he shouldn’t support SIAI unless it passed those hurdles; he merely argued that it did.
To my knowledge no SIAI employee has ever made the Pascal’s mugging type argument, it is a pure strawman.
So, the idea is not that the organisation accompanies its requests for donations with a confession that it is just waving high utility in front of them in the hope of parting them from their money. That is hardly likely to be an effective fundraising strategy. Nobody ever suggested that in the first place. The idea is more that it uses promises of very high utility to compensate for a lack of concrete success probabilities—much like Pascal’s mugger does.
If you are short of examples of them waving high utility around, perhaps see:
How Much it Matters to Know What Matters: A Back of the Envelope Calculation
Good find—and I think both promises of hell avoidance and heavenly benefits count as Pascal’s Mugging.
I found a comment I had made several weeks ago in a previous mugging thread comparing P Muggings to Spam. Since Chain letters/comments are a type of spam, (albeit, usually less malicious than more dangerous spam) I’ll post it here with a few changes.
This seemed very similar to a standard hack used on people that we already rely on computers to defend us against. To be specific, it follows an incredibly similar framework to one of those Lottery/Nigerian 419 Scam emails.
Opening Narrative: Attempt to establish some level of trust and believability. Things with details tend to be more believable than things without details, although the conjunction fallacy can be tricky here. Present the target with two choices: (Hope they don’t realize it’s a false dichotomy)
Choice A: Send in a small amount of utility. (If Choice A is selected, repeat False dichotomy) Choice B: Allow a (fake) opportunity to acquire a large amount of (fake) utility to slip away.
Background: Make a MASSIVE number of attempts. Even if the first 10,000 attempts fail, the cost of trying to hack is minimal compared to the rewards.
For chain letters of the type you mentioned specifically, they seem to replace Choice B with Choice C: Allow an opportunity to avoid a large amount of (fake) disutility to slip away. And instead of having the original spammer continue sending out the spam, they recruit the now subverted target to send out spam.
Here’s the link to the original mugging thread I was reading. It was a St. Petersburg Mugging thread and not a Pascal’s Mugging thread, but I see a lot of similarities between the different types of P Muggings.
I like your analysis. I think one of the underappreciated aspects of this mugging is the time limitation. ‘Post in the next X minutes’ is key to ‘Choice C’:
Specifically, it makes the person have to think about this with a limited time span and decide on what they perceive to be the action of lowest possible danger, with the [spectacularly gruesome details] of extremely low probability seeming to be more danger than wasting some time. It also has the effect of spreading the meme more quickly.
If we assume Y% of the people who read the comment would spread it in each generation, the total number of times it is viewed is dictated by the gap between each ‘generation’ of people who spread it. By forcing the mugged to spread the meme within Z minutes, the meme becomes:
‘Bumped’ in a message board, granting greater visibility
Spread with a smaller gap between generations, and so more effective at propagating
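The generation-gap point can be sketched with a toy growth model (all numbers here are invented for illustration: viewers per post, repost rate, and the two generation gaps are assumptions, not measurements):

```python
# Toy meme-spread model: each post is seen by some viewers, a fraction
# of whom repost it, and one repost "generation" takes gen_gap minutes.

def copies_after(minutes, viewers_per_post, repost_rate, gen_gap):
    """Approximate total copies after `minutes`, starting from one post."""
    posts = 1.0
    for _ in range(minutes // gen_gap):  # completed generations in the window
        posts *= viewers_per_post * repost_rate
    return posts

# Same repost rate over a 2-day window (2880 minutes): a "post in the
# next 10 minutes" deadline vs. a leisurely ~1-day turnaround.
print(copies_after(2880, viewers_per_post=50, repost_rate=0.04, gen_gap=10))
print(copies_after(2880, viewers_per_post=50, repost_rate=0.04, gen_gap=1440))
```

Because copies compound per generation, shrinking the generation gap multiplies the number of generations that fit in a fixed window, so the time limit does far more for propagation than any plausible increase in the repost rate could.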
[Edited for proper formatting]
I agree with your point about time limitation acting as an important influence. My understanding is that time limitation influences human behavior in all sorts of ways, which is probably why a substantial percentage of TV ads include some kind of call for urgency (“Limited Time Offer”, “Act Now”, “Don’t Wait, Call Today”).
Also, is there a specific name for this sort of advertising behavior? I was interested in looking up more sources on this but I couldn’t find any reasonably high quality ones.
All the panic over nuclear energy hazards seems to be a Reverse Pascal’s Mugging.
Karma Party in this thread!
I am the Shanghai Rationalist organizer; I barely post. We have 17 members in our group, a mix of LessWrong/Overcoming Bias, Freedomain Radio, and mises.org people. We’ve had topics on cognitive bias exercises, the singularity, and most recently internal family systems (self-therapy) with facilitating. I’m hoping to gain some karma so I can post to the main site. I’ve been asking Patrick Robotham to do it for me, and he’s obliged each time. I’m hoping to get some karma without having to comment a lot. So karma me if you want to help the Shanghai Rationalists group.
another thread for more karma points.
iawtp
me too
apparently
I
need
20
karma
points
to
win
this
game