Beliefs here are weakly held; I want to become more right.
I think defining winning as coming away with the most utility is a crisp measure of what makes a good decision theory.
The theory of counterfactuals is, to my mind, what separates the decision theories from one another, and is therefore the core question/fuzziness in solving decision theory. Changing your theory of counterfactuals alters the answer to the fundamental question: “when you change your action/policy, what parts of the world change with you?”
It doesn’t seem like there is an objective answer to this from the mechanics alone: should you change everything that’s causally downstream of you? Everything that’s logically dependent on your decision? Everything that’s correlated with your decision? A priori these all seem basically reasonable to me, until we plug them into examples and see whether the resulting theories are dominated by others, as measured by expected utility.
(I think?) In examples like counterfactual mugging, the measuring stick is pretty clearly which decision theory gets more expected utility over the whole duration of the universe. It seems fine to lose utility in cases where you start in the middle of the scenario (operationalized by the scenario having any sort of entanglement with the world outside it).
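To make that measuring stick concrete, here is a minimal sketch of the ex-ante calculation for counterfactual mugging, assuming the commonly used payoffs (a fair coin, a $100 ask on one branch, a $10,000 reward on the other branch iff the predictor expects you to pay). The numbers and function name are illustrative, not anything specified above.

```python
# Ex-ante expected utility of a policy in counterfactual mugging,
# evaluated before the coin flip. Payoffs ($100 ask, $10,000 reward,
# fair coin) are the standard illustrative ones.

def expected_utility(pays_when_asked: bool) -> float:
    """Omega flips a fair coin. On one branch it asks you for $100; on the
    other it pays you $10,000 iff it predicts you would have paid when asked."""
    p_reward_branch = 0.5
    reward_branch = 10_000 if pays_when_asked else 0   # predictor rewards the paying policy
    ask_branch = -100 if pays_when_asked else 0        # paying costs $100 on the asking branch
    return p_reward_branch * reward_branch + (1 - p_reward_branch) * ask_branch

print(expected_utility(pays_when_asked=True))   # 4950.0
print(expected_utility(pays_when_asked=False))  # 0.0
```

Evaluated from inside the asking branch, after the coin has landed, paying looks like a pure −100; evaluated over the whole scenario, the paying policy wins 4950 to 0. That gap is what “losing utility where you start in the middle of the scenario” is pointing at.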
In my view, the fuzziness is in finding a well-defined way to achieve the goal of lots of expected utility, not in the goal itself.
Over the course of the universe, the best decision theory is a consensus/multiple-evaluation theory: evaluate which part of the universe you’re in and how likely it is that you’re in a causally-unusual scenario, and use the DT that gives the best outcome there.
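A minimal sketch of what that multiple-evaluation step could look like, assuming you can estimate a probability of being in a causally-unusual (predictor-style) scenario and a rough payoff for each candidate DT in each regime. The candidate labels, payoff numbers, and probabilities are all made-up assumptions, chosen only so that the selection actually flips with the estimate; this is not a worked-out proposal.

```python
from typing import Dict

# Hypothetical per-regime payoffs for each candidate DT (illustrative only).
PAYOFFS: Dict[str, Dict[str, float]] = {
    "CDT": {"ordinary": 100.0, "causally_unusual": 0.0},
    "UDT": {"ordinary": 98.0,  "causally_unusual": 90.0},
}

def best_dt(p_unusual: float) -> str:
    """Pick the DT with the highest expected utility, given the estimated
    probability that you are in a causally-unusual scenario."""
    def score(dt: str) -> float:
        payoff = PAYOFFS[dt]
        return (1 - p_unusual) * payoff["ordinary"] + p_unusual * payoff["causally_unusual"]
    return max(PAYOFFS, key=score)

print(best_dt(p_unusual=0.01))  # with these numbers: CDT
print(best_dt(p_unusual=0.5))   # with these numbers: UDT
```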
How a predictor handles the case where your meta-DT gives different answers depending on whether you’ve been predicted, I don’t know. As in a lot of adversarial(-ish) situations, the side with the most predictive power wins.