I’m confused by the “no Dutch book” argument. Pre-California-lottery-resolution, we’ve got CB≺CA, but post-California-lottery-resolution we simultaneously still have A≺B and “we refuse any offer to switch from B to A”, which makes me very uncertain what ≺ means here.
Is this just EDT vs UDT again, or is the post-lottery A≺B subtly distinct from the pre-lottery one, or is “if you see yourself about to be Dutch-booked, just suck it up and be sad” a generally accepted solution to otherwise being Dutch-booked, or something else?
I think it is EDT vs UDT. We prefer B to A, but we prefer CA to CB, not because of Dutch books, but because CA is good enough for Bob to be fair, and A is not good enough for Bob.
...huh. So UDT in general gets to just ignore the independence axiom (spelled out after this list) because:
UDT’s whole shtick is credibly pre-committing to seemingly bad choices in some worlds in order to get good outcomes in others, and/or
UDT is optimizing over policies rather than actions, and I guess there’s nothing stopping us having preferences over properties of the policy like fairness (instead of only ordering policies by their “ground level” outcomes).
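For reference, the axiom I mean, reading CA and CB as mixtures of A and B with some common third lottery C (that reading of CA/CB is my assumption from the setup, not something stated in this thread):

```latex
% VNM independence: for lotteries A, B, C and any p in (0, 1],
% mixing both sides with the same C must preserve the ordering.
\[
  A \prec B \iff p\,A + (1-p)\,C \;\prec\; p\,B + (1-p)\,C .
\]
% Reading C_A = pA + (1-p)C and C_B = pB + (1-p)C, holding A \prec B
% while also holding C_B \prec C_A is exactly what this rules out.
```

So holding A≺B while preferring CA to CB is precisely the pattern independence forbids, and a policy-level preference that cares about being fair to Bob has to give it up.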
And this is where G comes in: it’s one way of encoding something-like-fairness.
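To make that concrete for myself, here’s a toy sketch with made-up utilities, assuming G is a geometric-expectation-style aggregator over the two people (that’s my reading of G, not something established in this thread):

```python
import math

# Toy illustration with made-up utilities. It assumes "G" is a
# geometric-expectation-style aggregator over the two people
# (exp of the expected log-utility) -- my reading, not something
# established in this thread.

policies = {
    "lopsided": {"alice": 10.0, "bob": 1.0},  # great for Alice, nearly nothing for Bob
    "even":     {"alice": 5.0,  "bob": 5.0},  # decent for both
}

def arithmetic_value(utils, w=(0.5, 0.5)):
    """Ordinary expected utility with a 50/50 weighting over the people."""
    return w[0] * utils["alice"] + w[1] * utils["bob"]

def geometric_value(utils, w=(0.5, 0.5)):
    """Geometric expectation: exp of the weighted mean of log-utilities.
    It collapses toward zero when anyone is left with almost nothing,
    which is the something-like-fairness behaviour."""
    return math.exp(w[0] * math.log(utils["alice"]) + w[1] * math.log(utils["bob"]))

for name, utils in policies.items():
    print(f"{name}: arithmetic={arithmetic_value(utils):.2f}, "
          f"geometric={geometric_value(utils):.2f}")

# arithmetic prefers "lopsided" (5.50 > 5.00);
# geometric prefers "even"     (5.00 > 3.16).
```

The arithmetic average picks the lopsided policy, the geometric one picks the even split, which looks like the “good enough for Bob to be fair” behaviour at the policy level.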
Sound about right?
yep