#4 (implementability): I think of this as the shakiest assumption; it is easy to set up decision problems that violate it. However, I tend to think such setups get the causal structure wrong: what would otherwise be modeled as additional parents of the action should instead be thought of as children of the action. Furthermore, if an agent is learning about the structure of a situation by repeated exposure to it, implementability seems necessary for the agent to come to understand the situation it is in: parents of the action will look like children if you try to perform experiments to see what happens when you do different things.
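As a toy illustration of that last point (hypothetical code, not from the post): suppose a predictor physically acts before the agent and is, structurally, a parent of the action, but it predicts the agent by simulating it, exploration randomness included. Then the agent's experiments can never decorrelate the prediction from the action, and interventional structure learning will file the prediction under the action's children.

```python
import random
from collections import Counter

def agent(exploration_seed):
    # The agent "experiments" by acting on an internal pseudo-random source.
    return random.Random(exploration_seed).choice(["one-box", "two-box"])

def predictor(exploration_seed):
    # The predictor runs a faithful simulation of the agent, so it sees the
    # same internal randomness the agent will use when it later acts.
    return agent(exploration_seed)

counts = Counter()
for seed in range(1000):          # a thousand "experiments" by the agent
    prediction = predictor(seed)  # happens first: physically a parent of the action
    action = agent(seed)          # happens second
    counts[(action, prediction)] += 1

print(counts)  # the prediction matches the action in every trial, so treating
               # the experiments as interventions makes it look like a child
```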
This assumption seems sketchy to me. In particular, what if you make two copies of a deterministic agent, move them physically far apart, give them the same information, and ask each to select an action? Clearly, if a rational agent is uncertain about either copy's action, they will believe the two actions to be (perfectly) correlated. The two actions can't each be a child of the other...
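Concretely (a toy sketch, with a made-up deterministic policy): two separated copies of the same deterministic agent, fed the same input, must output the same action, so an observer who is uncertain about each action individually still assigns probability one to the two actions matching.

```python
from collections import Counter

def deterministic_agent(observation):
    # A fixed deterministic policy: the same input always yields the same action.
    return "cooperate" if len(observation) % 2 == 0 else "defect"

# The observer is uncertain which input the copies received, hence uncertain
# about each copy's action -- but the joint distribution is purely diagonal.
possible_inputs = ["problem A", "problem B!", "problem C??"]
joint = Counter()
for obs in possible_inputs:
    action_1 = deterministic_agent(obs)  # copy 1, physically far away
    action_2 = deterministic_agent(obs)  # copy 2, same information
    joint[(action_1, action_2)] += 1

print(joint)  # only (x, x) entries: the two actions are perfectly correlated
```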
I maybe should have clarified that when I say CDT I’m referring to a steel-man CDT which would use some notion of logical causality. I don’t think the physical counterfactuals are a live hypothesis in our circles, but several people advocate reasoning which looks like logical causality.
Implementability asserts that you should think of yourself as logico-causally controlling your clone when it is a perfect copy.
If your decision logico-causally controls your clone’s decision and vice versa, doesn’t that imply a non-causal model (since it has a cycle)?
In the case of an exact clone this is less of an issue since there’s only one relevant logical fact. But in cases where something like correlated equilibrium is being emulated on logical uncertainty (as in this post), the decisions could be logically correlated without being identical.
[EDIT: in the case of correlated equilibrium specifically, there actually is a signal (which action you are told to take), and your action is conditionally independent of everything else given this signal, so there isn’t a problem. However, in COEDT, each agent knows the oracle distribution but not the oracle itself, which means they consider their own action to be correlated with other agents’ actions.]
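A small numerical sketch of that edit (a toy oracle distribution for a Chicken-like game; the numbers are made up): conditioned on your own recommendation, a signal-following player's action is fixed, so the signal screens it off from everything else. An agent who knows only the distribution, as in COEDT, sees that its own action still carries information about the other agent's action.

```python
from collections import defaultdict

# Toy correlated-equilibrium-style oracle for a Chicken-like game; the oracle
# draws a joint recommendation and tells each player only their own component.
oracle_distribution = {
    ("Stop", "Go"): 0.4,
    ("Go", "Stop"): 0.4,
    ("Stop", "Stop"): 0.2,
}

def other_given_mine(my_signal):
    """P(other player's recommendation | my recommendation = my_signal)."""
    cond, total = defaultdict(float), 0.0
    for (mine, theirs), p in oracle_distribution.items():
        if mine == my_signal:
            cond[theirs] += p
            total += p
    return {a: p / total for a, p in cond.items()}

# With the signal in hand, a player who follows it has a fixed action and
# nothing further to condition on: the signal screens everything off.
print(other_given_mine("Go"))    # {'Stop': 1.0}
print(other_given_mine("Stop"))  # {'Go': ~0.667, 'Stop': ~0.333}

# An agent who knows only oracle_distribution (not the draw) sees that the two
# rows differ, i.e. its own action is informative about the other agent's
# action -- the correlation described for COEDT.
```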