Note that Newcomb’s problem doesn’t depend on perfect prediction – a 90% or even 55% accurate Omega still makes the problem work fine (you might have to tweak the payouts slightly).
Sure, it’s fine with even 1% accuracy given a 1000:1 payout difference. But my point is that causal decision theory works just fine if Omega is cheating or predicting imperfectly. As long as the causal path from prediction to outcome isn’t fully independent of the path from decision to outcome, one-boxing is trivial.
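To make the accuracy/payout trade-off concrete, here’s a quick expected-value sketch. The $1,000,000 and $1,000 figures are my own assumption, chosen only to match the 1000:1 ratio above, and I’m assuming the predictor is right about one-boxers and two-boxers with the same probability:

```python
# Expected value of each strategy against a predictor with a given accuracy,
# assuming (hypothetically) the classic Newcomb payouts: $1,000,000 in the
# opaque box, $1,000 in the transparent one (the 1000:1 ratio above).
BIG, SMALL = 1_000_000, 1_000

def ev_one_box(acc):
    # The big box is filled iff the predictor correctly foresaw one-boxing.
    return acc * BIG

def ev_two_box(acc):
    # You always pocket the small box; the big box is filled only when the
    # predictor wrongly expected you to one-box.
    return SMALL + (1 - acc) * BIG

for acc in (0.90, 0.55, 0.515, 0.50):
    print(f"accuracy {acc:.3f}: one-box ${ev_one_box(acc):,.0f}, "
          f"two-box ${ev_two_box(acc):,.0f}")
```

With these payouts, the break-even point is an accuracy of just 50.05%; anything above that favors one-boxing.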
If “access to my source code” is possible and determines my actions (I don’t honestly know if it is), then the problem dissolves in another direction—there’s no choice anyway, it’s just an illusion.
it’s fine with even 1% accuracy with 1000:1 payout difference.
Well, if 1% accuracy means 99% of one-boxers are predicted to two-box, and 99% of two-boxers are predicted to one-box, you should two-box. The prediction needs to be at least positively correlated with reality.
Sorry, I described it in too few words – “1% better than random” is what I meant. If 51.5% of two-boxers get only the small payout, and 51.5% of one-boxers get the big payout, then one-boxing is obvious.
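Spelling that out with the same hypothetical $1,000,000 / $1,000 payouts as above: one-boxing is worth 0.515 × $1,000,000 = $515,000 in expectation, while two-boxing is worth $1,000 + 0.485 × $1,000,000 = $486,000, so even that slim edge is enough to favor one-boxing.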