Reposted here instead of part 1, didn’t realise part 2 had been started.
I don’t understand why you should pay the $100 in a counterfactual mugging. Before Omega visits you, you would assign the same probabilities to Omega and Nomega existing, so you don’t benefit from precommitting to pay the $100. However, once faced with Omega, your probability estimate for its existence becomes 1 (and Nomega’s becomes something lower than 1).
Now what you should do seems to depend on the probability you assign to Omega visiting you again. If this were 0, surely you wouldn’t pay the $100, because Omega’s existence is irrelevant to future encounters if this is your only encounter.
If it were 1, it seems at a glance like you should pay. But in that case I don’t understand why you wouldn’t just keep your $100 and afterwards self-modify into the sort of being that would pay the $100 in the future, thereby ending up an extra hundred ahead.
I presume I’ve missed something there. But even once I understand that, I still don’t see why you would hand over the $100 unless you assigned a greater than 10% probability to Omega returning in the future (even ignoring the non-zero, but very low, chance of Nomega visiting).
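The break-even reasoning above can be sketched numerically. The $10,000 prize and fair coin are assumptions here (the amounts usually quoted for this thought experiment); the thread itself only mentions the $100:

```python
# Expected value of paying the $100 now, as a function of p, the
# probability that Omega visits again. Assumed amounts: $100 demanded,
# $10,000 counterfactual prize, fair coin.
def ev_of_paying_now(p, cost=100, prize=10_000):
    # To an agent known to pay, a future visit is worth
    # 0.5 * prize (heads) - 0.5 * cost (tails).
    future_visit_value = 0.5 * prize - 0.5 * cost
    return -cost + p * future_visit_value

# Break-even probability: paying now only has positive expected value
# if p exceeds this.
break_even = 100 / (0.5 * 10_000 - 0.5 * 100)

print(ev_of_paying_now(0.0))  # -100.0: never pay if Omega won't return
print(round(break_even, 4))
```

The exact threshold scales directly with the assumed prize and with how a repeat visit is modelled, which is why the percentage in the argument above is sensitive to the amounts involved.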
Is anyone able to explain what I’m missing?
I think I’ve figured out the answer to my question.
The related scenario: you’re stuck in the desert without water (or money), and a driver offers you a lift on the condition that, when you reach town, you pay them money. But you’re both perfectly rational, so you both know that once you reach town you would gain nothing by handing over the money. You say, “Yes”, but they know you’re lying, and so they drive off.
If you use a decision theory that would have you hand over the money once you reach town, you end up better off (i.e. safely in town), even though the decision to pay may seem stupid once you’re there.
From the perspective of t = 2 (i.e. after the event), giving up the money looks stupid: you’re already in town. But if you didn’t follow that decision theory, you wouldn’t be in town at all, so following it is beneficial.
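The comparison can be made concrete with rough payoffs (the specific numbers are illustrative assumptions, not from the thread):

```python
# Illustrative payoffs for Parfit's Hitchhiker (both values assumed):
# being safely in town is worth far more than the fare you hand over.
VALUE_OF_RESCUE = 1_000_000   # surviving the desert
FARE = 100                    # what you pay on reaching town

def outcome(would_pay_in_town: bool) -> int:
    # A perfectly rational driver predicts your choice and only gives
    # you a lift if you would actually pay once in town.
    if would_pay_in_town:
        return VALUE_OF_RESCUE - FARE   # rescued, minus the fare
    return 0                            # left in the desert

assert outcome(True) > outcome(False)
print(outcome(True), outcome(False))  # 999900 0
```

Whatever the exact numbers, the agent whose decision theory pays in town strictly dominates, because the payment only ever happens in the world where the rescue did too.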
Similarly, at t = 2 in the counterfactual mugging, giving up the money looks stupid. But if you didn’t follow that decision theory, you would never have had the opportunity to win a lot more money. So once again, following a decision theory under which you act as if you had precommitted is beneficial.
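The point can be illustrated by simulating many independent muggings and comparing policies rather than individual actions. The amounts (fair coin, $10,000 on heads for an agent Omega predicts would pay, $100 handed over on tails) are assumptions, as is the simulation itself:

```python
import random

def average_winnings(pays: bool, trials: int = 100_000, seed: int = 0) -> float:
    """Average payoff per mugging for a payer vs a refuser (assumed amounts)."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        heads = rng.random() < 0.5
        if pays:
            total += 10_000 if heads else -100
        # A refuser is predicted not to pay: no prize on heads,
        # no payment on tails, so 0 either way.
    return total / trials

print(average_winnings(True))   # roughly 4950 per mugging
print(average_winnings(False))  # 0.0
```

Each individual tails-payment looks like a pure $100 loss at t = 2, but averaged over the policy the payer comes out far ahead, which is exactly the sense in which the decision theory, not the isolated action, is what should be evaluated.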
So by that analysis, my mistake was asking what the beneficial action was at t = 2, when the actual question is: what is the beneficial decision theory to follow?
Does my understanding seem correct?
Sounds right to me. I actually wrote a blog post recently exploring the desert problem (aka Parfit’s Hitchhiker) that you might be interested in. I think it also sheds some light on why humans (usually) obey a decision theory that would win on Parfit’s Hitchhiker.