This solution doesn’t work. Why? Because I pledge that if anyone fails to accept a Pascal’s Mugging–style trade-off with full knowledge of the problem, then I will slowly torture to death 3^^^^3 sentient minds. I’ve just canceled out your pledge.
You could say all your allies take the same pledge as you, and you have more allies than me, but that’s getting too far into the practicalities of our lives and too far away from a general solution. A general solution can’t assume that the person considering whether to accept a Mugging will have heard either of our pledges, so the person would be unable to take those pledges into account for their decision.
I don’t know the actual solution to Pascal’s Mugging myself. I’ve pasted my outline-form notes on it so far into a reply to this comment, in case they’re useful.
You’re not thinking big enough. If anyone ever accepts a Pascal’s mugging again, my fuzzy celery God will execute a Pascal’s mugging worse than any other in existence, no matter what the original mugging is.
P.S. You’ll never find out what muggings fuzzy celery God executes because they’re always unpredictable. This makes them impossible to disprove.
(My brother always hated having fights like this with me.)
If we have two gods, one claiming that if I do X, they’ll mug me, and one claiming that if I don’t do X, they’ll mug me, well, I’m probably going to believe the god that isn’t fuzzy and celery...
Well that’s making the wrong choice, buddy. Other Gods are useless against fuzzy celery God because fuzzy celery God can transform itself at will into the Most Believable God. Don’t think of fuzzy celery God as a piece of fuzzy celery. Fuzzy celery God is nothing like that. If an old wise man is the most compelling God-form for you, fuzzy celery God looks like an old wise man. If benevolent Gods are more credible to you, fuzzy celery God becomes benevolent. No matter what Pascal’s mugging the person wants to accept, fuzzy celery God will always take on the appearance and traits of the most believable God that the person can conceive of.
This solution doesn’t work. Why? Because I pledge that if anyone fails to accept a Pascal’s Mugging–style trade-off with full knowledge of the problem, then I will slowly torture to death 3^^^^3 sentient minds. I’ve just canceled out your pledge.
Your argument doesn’t address the problem with Static_IP’s post, and indeed it has exactly the same problem: it is not an argument, explanation, or clarification, but one more mugging (see nyan_sandwich’s comment). The problem is not that someone has issued a Pascal’s mugging and now we have to pay up unless the mugger is somehow neutralized. If it turns out that we in fact should pay up, the correct decision is easily performed.
The problem is that this situation is not understood. The theoretical model of expected utility, plus some considerations about the prior, suggests that the correct decision is to pay the mugger, yet other considerations suggest otherwise, and there are potential flaws in the original argument, which motivates a search for a better understanding of the situation. Modifying the situation in a way that makes the problem go away doesn’t solve the original problem; it only shifts attention away from it.
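To make that tension concrete, here is a minimal sketch of the naive expected-utility calculation (in Python, with toy numbers I have invented purely for illustration; nothing here is a claim about the actual probabilities involved):

    # Naive expected-utility comparison for a Pascal's Mugging.
    # Every number here is an illustrative assumption, not a real estimate.

    COST_OF_PAYING = 5.0        # utility lost by handing over the wallet
    PROB_MUGGER_HONEST = 1e-50  # absurdly small credence that the mugger can deliver
    PROMISED_UTILITY = 3e60     # finite stand-in for "3^^^^3 happy lives"

    ev_pay = PROB_MUGGER_HONEST * PROMISED_UTILITY - COST_OF_PAYING
    ev_refuse = 0.0

    # The promised utility can always be stated large enough to swamp any credence
    # we shrink it by, so the naive calculation says to pay no matter how
    # implausible the mugger is.
    print(ev_pay > ev_refuse)  # True with these toy numbers

The outline notes below collect candidate ways out of this conclusion.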
My unfinished outline-form notes on solving Pascal’s Mugging:
Pascal’s Mugger (http://www.nickbostrom.com/papers/pascal.pdf) possible solutions
Off – Pascal’s estimate might be farther off than the offered benefit, and how does he know how far to compensate?
Counter – there is a (smaller) probability that the man will give you the same amount of Utility only if you refuse. (There is also a probability that he will give way more Utility if you refuse, but that is probably countered by the probability that he will give way more if you accept.) A toy sketch of this trade-off appears after this list.
This seems to be Eliezer’s view, mentioned in Overcoming Bias – The Pascal’s Wager Fallacy Fallacy.
As shown by the “(smaller)”, I don’t think this argument completely explains the problem.
Known – the gambit is known, which makes it more likely that he is tricking you – but sadly, I think it has no effect.
Impossible – [My Dad]’s suspect argument: there is absolutely zero probability of the mugger giving you what he promises. There is no way to both extend someone’s lifespan and make them happy during it.
He could just take you out of the Matrix into a place where any obstacles to lengthy happiness are removed. There’s still a probability of that, right?
God – maybe the level of probability involved is similar to that of God’s existence, with an infinite heaven affecting the decision
Long-term – maybe what we should do in a one-shot event is different from what we should do if we repeated that event many times.
Assumption – one of the stated assumptions, such as utilitarianism or risk-neutrality, is incorrect and should not actually be held.
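A toy sketch of the “Counter” item above (again in Python, with probabilities invented purely for illustration): if the counter-scenario really is less probable than the mugger’s scenario, the naive calculation still favors accepting, which is why the note says that argument doesn’t completely explain the problem.

    # Toy illustration of the "Counter" solution.
    # All probabilities and utilities are invented assumptions.

    PROMISED_UTILITY = 3e60     # finite stand-in for the mugger's astronomical promise

    p_reward_if_accept = 1e-50  # credence that paying the mugger gets you the reward
    p_reward_if_refuse = 1e-52  # (smaller) credence that refusing gets you the same reward

    ev_accept = p_reward_if_accept * PROMISED_UTILITY
    ev_refuse = p_reward_if_refuse * PROMISED_UTILITY

    # Unless the two credences cancel exactly, whichever is larger still dominates,
    # so a merely "smaller" counter-probability leaves the force of the original
    # mugging intact.
    print(ev_accept > ev_refuse)  # True with these toy numbers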
As an omnipotent god entity, I pledge to counter any attempt at Pascal’s muggings, as long as the mugger actually has the power to do what they say.
I’ve just canceled out your pledge.
Yep. You did, or you would have if you could actually carry through on your threats. I maintain that you can’t. Now it’s a question of which of our claims is more likely to be true. That’s kind of the point here. When you’re dealing with that small of a probability, the calculation becomes useless and marred by noise.
If I’m correct, and I’m one of the very few entities capable of doing this who happen across your world anyway, then I can cancel out your claim and a bunch of future claims. If you’re correct, then I can’t. So the question is: how unlikely are my claims? How unlikely are yours? Are your claims significantly more likely (on the tiny scales we’re working with) than mine?
But yes, now that I look at it more in depth (thank you for the links), it’s obvious that this is a reiteration of the “counter” solution, but with actual, specific, and viable threats behind it.