The obvious TDT/UDT answer would be C and the CDT answer would probably be D. The counterfactual component doesn’t (or shouldn’t) change your strategy much in this case.
The obvious UDT answer is C, but I think TDT defects, for the exact same reason it doesn’t pay the counterfactual mugger (can’t improve any aspect of the ‘real’ situation it finds itself in).
Edit: Wrong, see reply.
TDT cooperates. The node representing the output of TDT affects the counterfactual TDT agent, which in turn affects Omega’s “real” prediction of the counterfactual TDT.
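A minimal sketch of that dependency claim (the payoff numbers and the exact setup are my own illustrative assumptions, not from the original problem): performing counterfactual surgery on the single logical node for TDT's output also fixes the counterfactual copy's output, and hence Omega's prediction, so cooperation comes out ahead.

```python
# Toy sketch: TDT's output is one logical node; both the real agent and the
# counterfactual copy that Omega inspects inherit its value, so Omega's
# prediction covaries with the real decision. Payoffs are illustrative only.

PAYOFF = {  # (my_move, Omega's predicted move) -> my utility
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tdt_choose():
    best_move, best_value = None, float("-inf")
    for move in ("C", "D"):
        # Surgery on the logical node "output of TDT": the counterfactual
        # copy, and hence Omega's prediction, takes the same value.
        predicted = move
        value = PAYOFF[(move, predicted)]
        if value > best_value:
            best_move, best_value = move, value
    return best_move

print(tdt_choose())  # -> "C": once the prediction covaries, (C,C) beats (D,D)
```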
By crafting an appropriate dependency graph, you can make a TDT agent agree to any UDT decision. Even in CM, if you model Omega in more detail as depending on your decision, you can get a TDT agent to comply, but that is not the point: TDT doesn't reach this answer naturally, without an externally introduced compensating explicit dependence, and neither does it in this case.
I would like to see the dependency graph that compels TDT to pay in a counterfactual mugging.
Not if it expresses what's real, but surely if it expresses what the agent cares about, that is, with the counterfactual world explicitly included.
Are you saying that it's easier to get TDT to comply in CM if the coin is ontologically fundamental randomness than if it's logical uncertainty? (But you think it can be made to comply then, too.)
In the least convenient possible world, the TDT agent doesn’t care intrinsically about any counterfactual process, only about the result on the real world.
Saying you can get an agent with one DT to follow the output of another DT by changing its utility function is not interesting.
If the mapping is natural enough, it establishes the relative expressive power of the decision theories, perhaps even allowing one to reach the same not-a-priori-obvious conclusions from studying one theory as from the other. But granted, as I described in this post, the step forward made in UDT/ADT, as compared to TDT, is that the causal graph doesn't need to be given as part of the problem statement; dependencies get inferred from the utility/action definitions.
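As a rough illustration of that difference (the $100 fee, $10000 reward, and fair coin are just the usual illustrative stakes for CM, not anything specific to this thread): a UDT/ADT-style agent scores whole policies against the utility definition, which already ranges over both branches of the coin, so the dependence of Omega's counterfactual reward on the policy falls out of that definition rather than being supplied as a causal graph.

```python
# Counterfactual mugging, UDT-style sketch: evaluate each policy against the
# utility definition, which sums over both coin outcomes; the "dependency" of
# Omega's counterfactual reward on the policy comes from that definition alone.

def expected_utility(pay_when_asked: bool) -> float:
    # Tails branch: Omega asks for $100.
    tails = -100 if pay_when_asked else 0
    # Heads branch: Omega pays $10000 iff it predicts the agent would have paid.
    heads = 10000 if pay_when_asked else 0
    return 0.5 * tails + 0.5 * heads

best_policy = max((True, False), key=expected_utility)
print(best_policy)  # -> True: paying is the higher-expected-utility policy
```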
Ok, so show me an actual example of a mapping that is "natural enough" and causes TDT to pay up in CM.
I argued with your argument, not your conclusion.
I am not following your abstract argument, and would like to see an example of how a “natural enough” mapping can establish “relative expressive power of the decision theories”.
I think you’re right.