I think you should pay in Counterfactual Mugging, and this is one of the newcomblike problem classes that is most common in real life.
Example: you find a wallet on the ground. You can, from least to most prosocial:
Take it and steal the money from it
Leave it where it is
Take it and make an effort to return it to its owner
Let’s ignore the first option (suppose we’re not THAT evil). The universe has randomly selected you today for the position where your only options are to spend some resources for no personal gain, or not to. In a parallel universe, perhaps it was your pocket that had the hole in it, and a random person has come across your wallet.
Firstly, that person might well be thinking: “Would this person do the same for me?”
Secondly, in a society which wins, people return each other’s wallets.
You might object that this is different from the Mugging, because you’re directly helping someone else in this case. But I would counter that the Mugging is the true version of this problem, one where you have no crutch of empathy to help you, so your decision theory alone is tested.
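The ex-ante case for paying can be put numerically. Here is a minimal expected-value sketch, assuming the stakes from the usual formulation of Counterfactual Mugging ($100 demanded on tails, $10,000 rewarded on heads, but only if the predictor sees that you would have paid); the function name and parameters are illustrative, not canonical:

```python
def expected_value(would_pay: bool, cost=100, reward=10_000, p_heads=0.5) -> float:
    """Ex-ante expected value of the policy 'pay when asked'.

    The predictor rewards you on heads only if your policy is to pay
    on tails, so the heads payoff depends on would_pay too.
    """
    heads_payoff = reward if would_pay else 0
    tails_payoff = -cost if would_pay else 0
    return p_heads * heads_payoff + (1 - p_heads) * tails_payoff

print(expected_value(would_pay=True))   # 4950.0
print(expected_value(would_pay=False))  # 0.0
```

Before the coin is flipped, the paying policy is worth $4,950 in expectation versus $0 for refusing; the whole puzzle is whether that ex-ante calculation should still bind you once you know the coin came up tails.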