I have to admit that my intuition is that Omega is cheating, and somehow changing the box contents after my decision. CDT works fine in this case: one-box and take the money. I don’t think I learn much by figuring out where my intuition is wrong, so I have to first break my intuition and believe in a perfect predictor, then figure out where that counterfactual intuition is wrong. At which point my head starts to hurt.
In a world with perfect behavioral predictions over human timescales, it’s just silly to believe in simple free will. I don’t think that is our world, but I also don’t think it’s resolvable by pure discussion.
“I have to admit that my intuition is that Omega is cheating, and somehow changing the box contents after my decision”—Well, if there’s any kind of backwards causation, then you should obviously one-box.
“I don’t think I learn much by figuring out where my intuition is wrong, so I have to first break my intuition and believe in a perfect predictor, then figure out where that counterfactual intuition is wrong. At which point my head starts to hurt”—it may help to imagine that you are submitting a computer program to play the game on your behalf. In that case, perfect prediction is possible: the predictor has access to the agent’s source code and can simulate the situation the agent will face exactly.
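To make that framing concrete, here is a minimal sketch (my own illustration, not from the thread; the function names and payouts are assumptions) in which the "predictor" achieves perfect prediction simply by running the submitted agent once in simulation before setting up the boxes:

```python
# A minimal sketch of the "submitted programs" framing (names and payouts are
# illustrative): the predictor runs the agent's own code before filling the boxes.

def predictor_fills_big_box(agent):
    """Fill the opaque box only if a simulated run of the agent one-boxes."""
    return agent() == "one-box"   # "running the source code" = perfect prediction

def play(agent, big=1_000_000, small=1_000):
    big_box = big if predictor_fills_big_box(agent) else 0
    choice = agent()              # the real decision, made after the boxes are fixed
    return big_box if choice == "one-box" else big_box + small

def one_boxer():
    return "one-box"

def two_boxer():
    return "two-box"

print(play(one_boxer))   # 1000000
print(play(two_boxer))   # 1000
```

For deterministic agents like these, the simulated choice and the real choice can never come apart, which is all the thought experiment needs.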
Note that Newcomb’s problem doesn’t depend on perfect prediction: a 90% or even 55% accurate Omega still makes the problem work fine (you might have to tweak the payouts slightly).
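A quick expected-value check, assuming the usual $1,000,000 / $1,000 payouts (the comment doesn’t fix the amounts), shows why even a fairly noisy Omega leaves one-boxing ahead:

```python
# Hypothetical expected-value comparison for an imperfect Omega with accuracy p.
# The $1,000,000 / $1,000 payouts are assumed, not stated in the original comment.

def ev_one_box(p, big=1_000_000):
    return p * big                   # big box is full only when Omega predicted correctly

def ev_two_box(p, big=1_000_000, small=1_000):
    return (1 - p) * big + small     # big box is full only when Omega mispredicted

for p in (0.90, 0.55):
    print(f"p={p}: one-box {ev_one_box(p):,.0f} vs two-box {ev_two_box(p):,.0f}")
# p=0.9:  one-box 900,000 vs two-box 101,000
# p=0.55: one-box 550,000 vs two-box 451,000
```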
Sure, it’s fine with even 1% accuracy with 1000:1 payout difference. But my point is that causal decision theory works just fine if Omega is cheating or imperfectly predicting. As long as the outcome isn’t fully causally independent of the prediction and of my decision, one-boxing is trivial.
If “access to my source code” is possible and determines my actions (I don’t honestly know if it is), then the problem dissolves in another direction—there’s no choice anyway; it’s just an illusion.
it’s fine with even 1% accuracy with 1000:1 payout difference.
Well, if 1% accuracy means that 99% of one-boxers are predicted to two-box and 99% of two-boxers are predicted to one-box, you should two-box. The prediction needs to at least be positively correlated with reality.
Sorry, I described it in too few words. “1% better than random” is what I meant. If 51.5% of two-boxers get only the small payout, and 51.5% of one-boxers get the big payout, then one-boxing is obvious.
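Spelling out the arithmetic behind that claim (again assuming the standard $1,000,000 and $1,000 payouts, which the thread only implies):

$$\mathrm{EV}_{\text{one-box}} = 0.515 \times \$1{,}000{,}000 = \$515{,}000$$
$$\mathrm{EV}_{\text{two-box}} = 0.485 \times \$1{,}000{,}000 + \$1{,}000 = \$486{,}000$$

More generally, one-boxing wins whenever the predictor’s accuracy $p$ satisfies $p > \tfrac{1}{2} + \tfrac{1{,}000}{2 \times 1{,}000{,}000} = 0.5005$.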