Why would not giving him $5 make it more likely that people would die, as opposed to less likely? The two would seem to cancel out. It’s the same old “what if we are living in a simulation?” argument- it is, at least, possible that me hitting the sequence of letters “QWERTYUIOP” leads to a near-infinity of death and suffering in the “real world”, due to AGI overlords with wacky programming. Yet I do not refrain from hitting those letters, because there’s no entanglement which drives the probabilities in that direction as opposed to some other random direction; my actions do not alter the expected future state of the universe. You could just as easily wind up saving lives as killing people.
Because he said so, and people tend to be true to their word more often than dictated by chance.
The mugger claims to not be a ‘person’ in the conventional sense, but rather an entity with outside-Matrix powers. If this statement is true, then generalized observations about the reference class of ‘people’ cannot necessarily be considered applicable.
Conversely, if it is false, then this is not a randomly-selected person, but rather someone who has started off the conversation with an outrageous profit-motivated lie, and as such cannot be trusted.
They claim to not be a human, but they’re still a person, in the sense of a sapient being. For that broader class you’d expect a weaker correlation between word and deed, but it would still be above zero.
It is threatening people just to test you. We can assume that its behavior is completely different from ours, so Tom’s argument still works.
I am not convinced that, even among humans speaking to other humans, truth-telling can be assumed when there is such a blatantly obvious incentive to lie.
I mean, say there actually is someone who can destroy vast but currently-unobservable populations with less effort than it would take them to earn $5 with conventional economic activity, and the ethical calculus works out such that you’d be better served to pay them $5 than let it happen. At that point, aren’t they better served to exaggerate their destructive capacity by an order of magnitude or two, and ask you for $6? Or $10?
Once the number the mugger quotes exceeds your ability to independently confirm, or even properly imagine, the number itself becomes irrelevant. It’s either a display of incomprehensibly overwhelming force, to which you must submit utterly or be destroyed, or a bluff you should ignore.
There is no blatantly obvious reason to want to torture the people only if you do give him money.
So, you’re saying that the problem is that, if they really were going to kill 3^^^3 people, they’d lie? Why? 3^^^3 isn’t just enough to get $5. It’s enough that the expected seriousness of the threat is unimaginably large.
Look at it this way: If they’re going to lie, there’s no reason to exaggerate their destructive capacity by an order of magnitude when they can just make up a number. If they choose to make up a number, 3^^^3 is plenty high. As such, if it really is 3^^^3, they might as well just tell the truth. If there’s any chance that they’re not lying given that they really can kill 3^^^3 people, their threat is valid. It’s one thing to be 99.9% sure they’re lying, but here, a 1 − 1/sqrt(3^^^3) certainty that they’re lying still gives more than enough doubt for an unimaginably large threat.
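The size of the gap being argued about here can be made concrete with a toy calculation. The sketch below is illustrative only: 3^^^3 itself is far too large to compute, so 3↑↑3 stands in, and the prior probability assigned to the mugger’s honesty is an arbitrary assumption.

```python
# Toy illustration: even a deeply skeptical prior leaves an enormous
# naive expected loss, because up-arrow towers grow so fast.

def up_arrow(a: int, n: int, b: int) -> int:
    """Knuth's up-arrow a ↑^n b, for arguments small enough to compute."""
    if n == 1:
        return a ** b
    result = a
    for _ in range(b - 1):
        result = up_arrow(a, n - 1, result)
    return result

# 3↑↑3 = 3^(3^3) = 3^27; the mugger's 3^^^3 (= 3↑↑↑3) is incomputably larger.
lives = up_arrow(3, 2, 3)        # 7,625,597,484,987

p_honest = 1e-9                  # assumed (arbitrary) chance the threat is real
cost_of_paying = 5               # dollars
expected_lives_lost = p_honest * lives
print(expected_lives_lost > cost_of_paying)  # True: the threat dominates
```

Even with a one-in-a-billion credence, the stand-in number already swamps the $5; with 3^^^3 no imaginable skepticism short of certainty changes the comparison.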
You’re not psychic. You don’t know which it is. In this case, the risk of the former is enough to overwhelm the larger probability of the latter.
Not the way I do the math.
Let’s say you’re a sociopath; that is, the only factors in your utility function are your own personal security and happiness. Two unrelated people approach you simultaneously, one carrying a homemade single-shot small-caliber pistol (a ‘zip gun’) and the other apparently unarmed. Both of them, separately, demand $10 in exchange for not killing you immediately. You’ve got a $20 bill in your wallet; the unarmed mugger, upon learning this, obligingly offers to make change. While he’s thus distracted, you propose to the mugger with the zip gun that he shoot the unarmed mugger, and that the two of you then split the proceeds. The mugger with the zip gun refuses, explaining that the unarmed mugger claims to be close personal friends with a professional sniper, who is most likely observing this situation from a few hundred yards away through a telescopic sight and would retaliate against anyone who hurt her friend the mugger. The mugger with the zip gun has never actually met the sniper or directly observed her handiwork, but is sufficiently deterred by rumor alone.
If you don’t pay the zip-gun mugger, you’ll definitely get shot at, but only once, and with good chances of a miss or nonfatal injury. If you don’t pay the unarmed mugger, and the sniper is real, you will almost certainly die before you can determine her position or get behind sufficiently hard cover. If you pay them both, you will have to walk home through a bad part of town at night instead of taking the quicker-and-safer bus, which apart from the inconvenience might result in you being mugged a third time.
How would you respond to that?
I don’t need to be psychic. I just do the math. Taking any sort of infinitesimally unlikely threat so seriously that it dominates my decision-making means anyone can yank my chain just by making a few unfounded assertions involving big enough numbers, and then, once word gets around, the world will no longer contain acceptable outcomes.
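The exploitability worry can be sketched as a simulation: a decision rule that pays whenever claimed harm times some tiny prior exceeds the demand can be drained without limit, because bluffs are free to manufacture. Every number below is an illustrative assumption.

```python
# Sketch of the exploitability argument: "pay iff naive EV says so" loses
# money linearly in the number of bluffers, who can name any figure for free.

def pays_up(claimed_harm: float, demand: float, prior: float) -> bool:
    """Naive expected-utility rule: pay iff the claimed loss, discounted by
    the prior, still exceeds the demanded payment."""
    return claimed_harm * prior > demand

prior = 1e-12          # assumed credence granted to any such claim
demand = 5.0           # dollars per mugger
bluff = 1e30           # a made-up harm figure costs a bluffer nothing

losses = 0.0
for _ in range(1000):  # once word gets around, a thousand bluffers show up
    if pays_up(bluff, demand, prior):
        losses += demand

print(losses)          # grows without bound as more bluffers arrive
```

The point is structural rather than numerical: for any fixed positive prior, a bluffer can pick a claimed harm large enough to trigger payment, so the policy has no finite worst-case cost.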
Can we use the less controversial term ‘economist’?
I think this answer contains something important--
Not so much an answer to the problem, but a clue to the reason WHY we intuitively, as humans, know to respond in a way which seems un-mathematical.
It seems like a game-theory problem to me. Here, we’re calling the opponent’s bluff. If we make the decision that SEEMINGLY MAXIMIZES OUR UTILITY, game theory says we’re set up for a world of hurt: an indefinite number of situations in which we can be taken advantage of. Game theory already describes plenty of situations where there are good reasons to take actions that seemingly do not maximize your own utility.
In your example, only you die. In Pascal’s mugging, it’s unimaginably worse.
Do you accept that, in the circumstance you gave, you are more likely to be shot by a sniper if you only pay one mugger? Not significantly more likely, but still more likely? If so, that’s analogous to accepting that Pascal’s mugger will be more likely to make good on his threat if you don’t pay.
In my example, the person making the decision was specified to be a sociopath, for whom there is no conceivable worse outcome than the total loss of personal identity and agency associated with death.
The two muggers are indifferent to each other’s success. You could pay off the unarmed mugger to eliminate the risk of being sniped (by that particular mugger’s friend, at least, if she exists; there may well be other snipers elsewhere in town with unrelated agendas, about whom you have even less information) and accept the risk of being shot with the zip gun, in order to afford the quicker, safer bus ride home. In that case you would only be paying one mugger, and still have the lowest possible sniper-related risk.
The three possible expenses were meant as metaphors for existential-risk mitigation (imaginary sniper), infrastructure development (bus), and military/security development (zip gun), the latter two forming the classic guns-or-butter economic dilemma. Historically speaking, societies that put too much emphasis on, and too many resources toward, preventing low-probability high-impact disasters, such as divine wrath, ended up succumbing to comparatively banal things like famine, or pillaging by shorter-sighted neighbors. What use is a mathematical model of utility that would steer us into those same mistakes?
Is your problem that we’d have to keep the five dollars in case of another mugger? I’d hardly consider the idea of steering our lives around Pascal’s mugging to be disagreeing with it. For what it’s worth, if you look for hypothetical Pascal’s muggings, expected utility doesn’t converge and decision theory breaks down.
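The non-convergence claim can be illustrated with a simplicity-style prior. Suppose (as an assumption for the sketch) that a threat k up-arrows deep gets probability about 2^-k; the number of victims 3↑↑k grows incomparably faster than 2^k shrinks, so the expected-utility terms blow up instead of vanishing.

```python
# Sketch of why expected utility fails to converge: working in log space,
# compare the k-th term's probability penalty (-k bits) against the size
# of a k-high power tower of 3s. The 2^-k prior is an illustrative choice.
import math

LOG2_3 = math.log2(3)

def log2_tower(k: int) -> float:
    """log2 of 3↑↑k, computed iteratively; overflows a float past k = 4."""
    if k == 1:
        return LOG2_3
    # 3↑↑k = 3^(3↑↑(k-1)), so log2(3↑↑k) = 3↑↑(k-1) * log2(3)
    return (2 ** log2_tower(k - 1)) * LOG2_3

# log2 of the k-th expected-utility term: log2( 2^-k * 3↑↑k )
terms = [log2_tower(k) - k for k in range(1, 5)]
print(terms)  # strictly increasing, so the series cannot converge
```

By k = 4 the term is already around 2^(10^13); the tails of the sum grow rather than shrink, which is exactly the breakdown being described.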
Yes, but the chance of magic powers from outside the Matrix is low enough that what he says makes an insignificant difference.
...or is an insignificant difference even possible?
The chance of magic powers from outside the Matrix is nothing compared to 3^^^^3. It makes no difference to whether or not it’s worthwhile to pay him.
That observation applies to humans, who also tend not to kill large numbers of people for no payoff (that is, if you’ve already refused the money and walked away).
Yes, but they’re more likely to kill large numbers of people conditional on you not doing what they say than conditional on you doing what they say.
That’s a symmetric effect, though.
excellent point, sir.