Here’s a simplified version of your second counterexample:
Omega appears and asks you which colour you like better, red or blue. If you choose the same colour that Omega happens to like, you get a million dollars; otherwise, you get nothing.
Obviously, your decision in this ridiculous scenario depends on your prior for meeting Omegas who like red vs. Omegas who like blue. Likewise, in your original counterexample, your action in Scenario 1 should depend on your prior for encountering Scenario 1 vs. Scenario 2.
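Just to make the "depends on your prior" point concrete, here's a minimal sketch (my own illustration, not anything from UDT itself): the expected-utility-maximizing answer in the red/blue game is simply whichever colour your prior puts more weight on.

```python
def best_colour(p_omega_likes_red: float, prize: float = 1_000_000) -> str:
    """Pick the colour with the higher expected payoff under the given prior."""
    ev_red = p_omega_likes_red * prize          # expected payoff of answering "red"
    ev_blue = (1 - p_omega_likes_red) * prize   # expected payoff of answering "blue"
    return "red" if ev_red >= ev_blue else "blue"

print(best_colour(0.7))  # "red"  -- with a 70% prior on red-liking Omegas
print(best_colour(0.3))  # "blue" -- the answer flips with the prior
```

The point is that no "local" description of the encounter tells you what to do; the answer comes entirely from the prior you walked in with.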
So yeah, this is a pretty big flaw in UDT that I pointed out some time ago on the workshop list, and then found out that Caspian had nailed it even earlier in the comments on Nesov's original post on Counterfactual Mugging. The retort "just use priors" may or may not be satisfactory to you. It's certainly not completely satisfactory to me, so I'd like a decision theory that doesn't require anything beyond "local" descriptions of scenarios. Presumably, such a theory would win in Scenario 1 and lose in Scenario 2, which may or may not be what we want.