taw, I was kinda hoping you’d have some alternative formulations, having thought about it longer than I have. What do you think? Is it still possible to rescue the problem?
I was mostly trying to approach it from the classical decision theory side, but the result is still the same. There are three levels in the decision tree here:
You precommit to one-box / two-box
Omega decides 1000000 / 0. Omega is allowed to look at your precommitment
You do one-box / two-box
If we consider the precommitment binding, the game collapses to “you decide first, Omega second, so trivially one-box”. If we consider the precommitment non-binding, it collapses to “you make a throwaway decision to one-box, Omega puts in 1000000, you two-box and get 1001000”, and this “Omega” has zero knowledge.
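Here’s a minimal sketch of that payoff arithmetic, in Python, under my own framing of the two cases (the dollar amounts are from the problem as stated above; the function names are just illustrative):

```python
def omega_fills_box(precommit_one_box: bool) -> int:
    """Omega puts 1,000,000 in the opaque box iff it predicts one-boxing;
    here that prediction is modeled as simply reading the precommitment."""
    return 1_000_000 if precommit_one_box else 0

def payoff(precommit_one_box: bool, final_one_box: bool) -> int:
    opaque = omega_fills_box(precommit_one_box)
    transparent = 1_000  # always in the transparent box
    return opaque if final_one_box else opaque + transparent

# Binding precommitment: the final action must match the precommitment.
binding = {p: payoff(p, p) for p in (True, False)}
# -> {True: 1_000_000, False: 1_000}: trivially one-box.

# Non-binding precommitment: precommit to one-box, then switch to two-boxing.
throwaway = payoff(precommit_one_box=True, final_one_box=False)
# -> 1_001_000, but only because this "Omega" has zero knowledge of the
#    final decision.

print(binding, throwaway)
```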
In classical decision theory you are not allowed to look at other people’s precommitments, so the game where decisions can be made at any point (between the start and the action) and people can change their minds at every step is mathematically equivalent to one where precommitments are binding and decided before anybody acts.
This equivalence is broken by Newcomb’s problem, so precommitments, and the ability to break them, now do matter, and people who use classical decision theory while ignoring this will fail. Axiom broken, everybody dies.