...huh. So UDT in general gets to just ignore the independence axiom because:
UDT’s whole shtick is credibly pre-committing to seemingly bad choices in some worlds in order to get good outcomes in others, and/or
UDT is optimizing over policies rather than actions, and I guess there’s nothing stopping us from having preferences over properties of the policy itself, like fairness (instead of only ordering policies by their “ground level” outcomes).
And this is where G comes in: it’s one way of encoding something-like-fairness.
Sound about right?
yep
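The second bullet above can be made concrete with a toy sketch. Everything here is invented for illustration (the game, the payoffs, and this particular G): a counterfactual-mugging-style setup where a world's payoff depends on the whole policy, scored by expected payoff plus a policy-level term G that isn't expressible as a utility over individual outcomes.

```python
# Hypothetical toy model of a UDT-ish agent choosing a *policy*, not an
# action. Setup (invented): a fair coin lands heads or tails. In tails you
# may pay $100; in heads a predictor pays you $10000 iff your policy would
# have paid in tails. So the heads payoff depends on the tails branch of
# the policy -- a policy-level dependence.

WORLDS = ("heads", "tails")

def world_payoff(world, policy):
    pays_in_tails = policy["tails"] == "pay"
    if world == "heads":
        return 10000 if pays_in_tails else 0
    return -100 if pays_in_tails else 0

def expected_payoff(policy):
    # Coin is fair: average over the two equiprobable worlds.
    return sum(world_payoff(w, policy) for w in WORLDS) / len(WORLDS)

def G(policy):
    # An invented "something-like-fairness" term: mildly penalize how
    # unevenly the policy's payoffs are spread across worlds. This is a
    # preference over a *property of the policy*, not over any single
    # ground-level outcome -- the kind of term that lets the independence
    # axiom fail at the policy level.
    payoffs = [world_payoff(w, policy) for w in WORLDS]
    return -0.01 * (max(payoffs) - min(payoffs))

def score(policy):
    return expected_payoff(policy) + G(policy)

# Enumerate the policies (only the tails branch involves a real choice
# in this toy game) and pick the best one.
policies = [{"tails": a} for a in ("pay", "refuse")]
best = max(policies, key=score)
```

Under these made-up numbers the paying policy wins (expected payoff 4950, fairness penalty 101), which is the "seemingly bad choice in some worlds for good outcomes in others" pattern from the first bullet.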