If your decision logico-causally controls your clone’s decision and vice versa, doesn’t that imply a non-causal model (since it has a cycle)?
In the case of an exact clone this is less of an issue, since there's only one relevant logical fact. But in cases where something like a correlated equilibrium is being emulated on logical uncertainty (as in this post), the decisions could be logically correlated without being identical.
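As a toy illustration of "correlated but not identical" (my example, not from the post): take $X \in \{0,1\}$ to be a logical fact the agents are uncertain about, say the parity of some far-out digit of $\pi$, with credence $P(X = 1) = 1/2$. If agent 1 plays $a_1 = X$ and agent 2 plays $a_2 = 1 - X$, then $P(a_1 = a_2) = 0$, yet under the agents' logical uncertainty $a_1$ and $a_2$ are perfectly (anti-)correlated, so there are two distinct-but-entangled logical facts rather than a single shared one.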
[EDIT: in the case of correlated equilibrium specifically, there actually is a signal (which action you are told to take), and your action is conditionally independent of everything else given this signal, so there isn’t a problem. However, in COEDT, each agent knows the oracle distribution but not the oracle itself, which means they consider their own action to be correlated with other agents’ actions.]
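To spell out the conditional-independence claim in the edit (my notation, not the post's): suppose the correlating device draws a signal profile $(s_1, \dots, s_n) \sim \mu$ and each agent plays $a_i = \sigma_i(s_i)$. Since $a_i$ is a deterministic function of $s_i$, we have $P(a_{-i} \mid s_i, a_i) = P(a_{-i} \mid s_i)$: once you condition on your signal, your own action carries no further information about the other agents' actions, so there is no cycle to model. A COEDT agent, by contrast, knows only the distribution $\mu$ over oracles and not the realized oracle, so this argument does not go through and its own action remains evidentially relevant to the others' actions.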