Do you know about the “Smoking Lesion” problem?
At first sight it appears to be isomorphic to Newcomb’s problem. However, a couple of extra details have been thrown in:
A person’s decisions are a product of both conscious deliberation and predetermined unconscious factors beyond their control.
“Omega” only has access to the latter.
Now, I agree that with an imperfect Omega, however accurate it may be, you can’t rule out the possibility that it can only “see” the unfree part of your will, in which case you should “try as hard as you can to two-box (but perhaps not succeed)”. However, if Omega has even partial access to the “free part” of your will, then it will usually be best to one-box.
Or at least this is how I like to think about it.
I did not know about it; thanks for pointing it out. It’s Simpson’s paradox as a decision-theory problem.
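The Simpson’s-paradox flavor can be shown with invented numbers: suppose cancer rates depend only on the lesion, and the lesion also makes smoking more likely. Marginally, smoking then “predicts” cancer, yet within each lesion group it changes nothing.

```python
# Toy parameters (all invented for illustration):
P_CANCER = {"lesion": 0.8, "no_lesion": 0.1}  # depends only on the lesion
P_SMOKE  = {"lesion": 0.9, "no_lesion": 0.2}  # lesion makes smoking likely
P_LESION = 0.5

def p_cancer_given_smoking(smokes: bool) -> float:
    # Bayes: weight each lesion group by how common it is among
    # (non)smokers, then apply that group's fixed cancer rate.
    num = den = 0.0
    for group, p_group in (("lesion", P_LESION), ("no_lesion", 1 - P_LESION)):
        p_smoke = P_SMOKE[group] if smokes else 1 - P_SMOKE[group]
        num += p_group * p_smoke * P_CANCER[group]
        den += p_group * p_smoke
    return num / den

print(p_cancer_given_smoking(True))   # noticeably higher
print(p_cancer_given_smoking(False))  # lower, though smoking causes nothing
```

The aggregate correlation is real, but conditioning on the lesion makes it vanish; that is exactly the gap the Smoking Lesion problem exploits against naive evidential reasoning.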
On the other hand (setting aside Omega using magic or time travel, or you making precommitments), isn’t Newcomb’s problem always like this, in that there is no direct causal relationship between your decision and Omega’s prediction, only some shared common causation?