Ruling out backwards causality, I would two-box, and I would get $1000 unless Omega made a mistake.
No, I wouldn’t rather be someone who one-boxes in Newcomb’s problem. If Omega makes its prediction based on my past behaviour, one-boxing now would only lose me $1000, because Newcomb is a one-time problem. For Omega to change its prediction, I would have to choose differently in other decisions, and that is something I’m not willing to do.
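To put a number on that “losing $1000”: here is a minimal sketch in Python, assuming the standard Newcomb payoffs ($1,000,000 in the opaque box if Omega predicted one-boxing, $1000 in the transparent box). Once the prediction is fixed, two-boxing beats one-boxing by exactly $1000 either way.

```python
# Minimal sketch, assuming the standard Newcomb payoffs.
# Omega's prediction is already fixed (it was made from my past behaviour),
# so my choice only determines whether I also take the transparent $1000 box.

BIG = 1_000_000   # opaque box contents if Omega predicted one-boxing
SMALL = 1_000     # transparent box contents

def payoff(omega_predicted_one_box: bool, i_one_box: bool) -> int:
    """Payoff when the prediction is fixed independently of my actual choice."""
    opaque = BIG if omega_predicted_one_box else 0
    return opaque + (0 if i_one_box else SMALL)

for predicted in (True, False):
    gain = payoff(predicted, i_one_box=False) - payoff(predicted, i_one_box=True)
    print(f"Omega predicted one-boxing={predicted}: two-boxing gains {gain}")
# Prints 1000 in both cases: with the prediction fixed, one-boxing just forfeits $1000.
```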
Of course, if I’m allowed to communicate with Omega, I would try to convince it that I’ll one-box (while still intending to two-box). And if actually precommitting to one-boxing (by use of a lie detector or whatever) would increase the probability of Omega predicting one-boxing by enough to justify it, then I would do that.
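To see how big that increase would have to be, here is a rough sketch with made-up numbers (p_talk and p_commit are hypothetical credences; the payoffs are again the standard ones): precommitting is worth it whenever it raises Omega’s credence that I’ll one-box by more than $1000 / $1,000,000, i.e. 0.1 percentage points.

```python
# Rough sketch with assumed numbers, not a claim about what Omega actually does.
BIG = 1_000_000
SMALL = 1_000

def ev_two_box(p_predict_one_box: float) -> float:
    """Expected payoff from two-boxing, given Omega's credence that I'll one-box."""
    return p_predict_one_box * BIG + SMALL

def ev_precommit(p_predict_one_box: float) -> float:
    """Expected payoff from a binding precommitment to one-box."""
    return p_predict_one_box * BIG

p_talk, p_commit = 0.3, 0.9   # hypothetical: after mere persuasion vs. after a lie-detector precommitment
print(ev_two_box(p_talk), ev_precommit(p_commit))
# Precommitting wins whenever p_commit - p_talk > SMALL / BIG = 0.001,
# so even a tiny verifiable bump in Omega's credence justifies giving up the $1000.
```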
However, in reality I would probably get some satisfaction out of proving Omega wrong, so the payoff matrix may not be that simple. I don’t think this is in any way relevant to the theoretical problem, though.