Maybe I should have clarified that when I say CDT, I'm referring to a steel-manned CDT that would use some notion of logical causality. I don't think physical counterfactuals are a live hypothesis in our circles, but several people advocate reasoning that looks like logical causality.
Implementability asserts that you should think of yourself as logico-causally controlling your clone when it is a perfect copy.
If your decision logico-causally controls your clone’s decision and vice versa, doesn’t that imply a non-causal model (since it has a cycle)?
In the case of an exact clone, this is less of an issue, since there's only one relevant logical fact. But in cases where something like a correlated equilibrium is being emulated on logical uncertainty (as in this post), the decisions could be logically correlated without being identical.
[EDIT: in the case of correlated equilibrium specifically, there actually is a signal (which action you are told to take), and your action is conditionally independent of everything else given this signal, so there isn’t a problem. However, in COEDT, each agent knows the oracle distribution but not the oracle itself, which means they consider their own action to be correlated with other agents’ actions.]
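To make the "signal" point concrete, here is a minimal sketch (mine, not from the post): the textbook correlated equilibrium for Chicken, where a mediator draws a joint recommendation and tells each player only their own action. The check verifies that obeying the recommendation is a best response conditional only on that signal, which is the conditional-independence point in the edit above. The payoff matrix and the 1/3-1/3-1/3 distribution are the standard example, not anything specific to COEDT.

```python
# Sketch: verify the obedience constraints of a correlated equilibrium in Chicken.
# Each player sees only their own recommended action (the "signal"); given that
# signal, obeying should be a best response against the conditional distribution
# over the other player's recommendation.

ACTIONS = ["Dare", "Chicken"]

# payoff[(a1, a2)] = (payoff to player 0, payoff to player 1)
payoff = {
    ("Dare", "Dare"):       (0, 0),
    ("Dare", "Chicken"):    (7, 2),
    ("Chicken", "Dare"):    (2, 7),
    ("Chicken", "Chicken"): (6, 6),
}

# Joint distribution the mediator samples from; each player is told only
# their own coordinate.
signal_dist = {
    ("Dare", "Chicken"):    1/3,
    ("Chicken", "Dare"):    1/3,
    ("Chicken", "Chicken"): 1/3,
}

def obedience_ok(player):
    """Is following the recommendation a best response, given only one's own signal?"""
    for rec in ACTIONS:
        # conditional distribution over joint recommendations, given this player's signal
        cond = {joint: p for joint, p in signal_dist.items() if joint[player] == rec}
        total = sum(cond.values())
        if total == 0:
            continue

        def expected(act):
            ev = 0.0
            for joint, p in cond.items():
                profile = list(joint)
                profile[player] = act
                ev += (p / total) * payoff[tuple(profile)][player]
            return ev

        if any(expected(dev) > expected(rec) + 1e-12 for dev in ACTIONS):
            return False
    return True

print(all(obedience_ok(i) for i in (0, 1)))  # True: this distribution is a correlated equilibrium
```

The contrast with COEDT, as I understand the edit, is that there the agent knows the oracle's distribution but not the realized draw, so from its perspective its own action is not conditionally independent of the others' actions.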