I’m not sure I would buy this argument unless you could claim that my Bob-simulation’s actions would cause Omega to give or not give me money. At the very least it should depend on how Omega makes his prediction.
Perhaps a clearer variation goes as follows: Bill arranges things so that if the coin is tails then (a) he will temporarily receive your winnings, if you get any, and (b) he will do a flawless imitation of Omega asking for money.
If you pay Bill then he returns both what you paid and your winnings (which you’re guaranteed to have, by hypothesis). If you don’t pay him then he has no winnings to give you.
Well look: If the real coin is tails and you pay up, then (assuming Omega is perfect, but otherwise irrespective of how it makes its prediction) you know with certainty that you get the prize. If you don’t pay up then you know with certainty that you don’t get the prize. The absence of a ‘causal arrow’ pointing from your decision to pay to Omega’s decision to pay becomes irrelevant in light of this.
(One complication which I think is reasonable to consider here is ‘what if physics is indeterministic and so knowing your prior state doesn’t permit Omega (or Bill) to calculate with certainty what you will do?’ Here I would generalize the game slightly so that if Omega calculates that your probability of paying up is p then you receive proportion p of the prize. Then everything else goes through unchanged—Omega and Bill will now calculate the same probability that you pay up.)
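For concreteness, here’s a minimal sketch of that generalized game in Python. The stakes are hypothetical stand-ins (nothing in this thread fixes them); the point is just that expected value is linear and increasing in p, so the best disposition is to always pay.

```python
# A minimal sketch of the generalized game. PAYMENT and PRIZE are
# hypothetical stand-ins; the thread never fixes the stakes.
PAYMENT = 100.0     # what Omega asks for on tails
PRIZE = 10_000.0    # what Omega pays out on heads

def expected_value(p):
    """Expected winnings when Omega computes that you pay with probability p.

    Tails (prob 0.5): you are asked to pay, and do so with probability p.
    Heads (prob 0.5): you receive proportion p of the prize.
    """
    return 0.5 * (-p * PAYMENT) + 0.5 * (p * PRIZE)

# EV increases linearly in p, so the disposition that maximizes it is p = 1.
for p in (0.0, 0.5, 1.0):
    print(f"p = {p:.1f}: EV = {expected_value(p):+.2f}")
```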
OK. I am uncomfortable with the idea of dealing with the situation where Omega is actually perfect.
I guess this boils down to me not being quite convinced by the arguments for one-boxing in Newcomb’s problem without further specification of how Omega operates.
Do you know about the “Smoking Lesion” problem?
At first sight it appears to be isomorphic to Newcomb’s problem. However, a couple of extra details have been thrown in:
- A person’s decisions are a product of both conscious deliberation and predetermined unconscious factors beyond their control.
- “Omega” only has access to the latter.
Now, I agree that when you have an imperfect Omega, even though it may be very accurate, you can’t rule out the possibility that it can only “see” the unfree part of your will, in which case you should “try as hard as you can to two-box (but perhaps not succeed).” However, if Omega has even “partial access” to the “free part” of your will then it will usually be best to one-box.
Or at least this is how I like to think about it.
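One way to put rough numbers on the “partial access” point, as a sketch with assumed values: the usual Newcomb payoffs of $1,000,000 and $1,000 (my stand-ins, not from this thread), and hypothetical probabilities q1 and q2 that Omega predicts “one-box” given that you actually one-box or two-box.

```python
BIG, SMALL = 1_000_000, 1_000  # hypothetical Newcomb payoffs

def ev_one_box(q1):
    # Opaque box contains BIG iff Omega predicted one-boxing.
    return q1 * BIG

def ev_two_box(q2):
    # You always pocket SMALL; BIG only if Omega still predicted one-boxing.
    return q2 * BIG + SMALL

# No access to the "free part" of your will: the prediction doesn't move
# with your choice (q1 == q2), so two-boxing dominates by SMALL.
print(ev_one_box(0.9), ev_two_box(0.9))    # 900000.0  901000.0

# Even slight partial access (q1 > q2) flips it: one-boxing wins as soon
# as (q1 - q2) * BIG > SMALL, i.e. q1 - q2 > 0.001.
print(ev_one_box(0.9), ev_two_box(0.88))   # 900000.0  881000.0
```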
I did not know about it, thanks for pointing it out. It’s Simpson’s paradox as a decision theory problem.
On the other hand (ignoring issues of Omega using magic or time travel, or you making precommitments), isn’t Newcomb’s problem always like this, in that there is no direct causal relationship between your decision and his prediction, just some common causation shared between them?
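For what it’s worth, here’s a toy illustration of that common-causation structure using the Smoking Lesion (all rates are made up for the example): a lesion raises both the chance of smoking and the chance of cancer, so the two correlate in aggregate even though smoking does no causal work within either stratum.

```python
# Hypothetical rates, chosen only for illustration.
P_LESION = 0.5
P_SMOKE = {True: 0.9, False: 0.2}   # P(smoke | lesion status)
P_CANCER = {True: 0.8, False: 0.1}  # P(cancer | lesion status): note it
                                    # does NOT depend on smoking at all.

def p_cancer_given(smoke):
    # Marginalize over lesion status via Bayes' rule.
    num = den = 0.0
    for lesion in (True, False):
        p_l = P_LESION if lesion else 1 - P_LESION
        p_s = P_SMOKE[lesion] if smoke else 1 - P_SMOKE[lesion]
        num += p_l * p_s * P_CANCER[lesion]
        den += p_l * p_s
    return num / den

print(p_cancer_given(True))   # ~0.67: smokers get cancer more often...
print(p_cancer_given(False))  # ~0.18: ...yet abstaining wouldn't help,
                              # since the lesion does all the causal work.
```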