> Because our universe is causal, any computation performed in our universe must eventually bottom out in a causal DAG.
Totally agree. This is a big part of the reason why I’m excited about these kinds of diagrams.
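To make that concrete, here's a toy sketch (entirely my own construction; the circuit and node names are arbitrary): a small boolean circuit written as a causal DAG, where running the computation is just evaluating structural equations in topological order, and a counterfactual is a hard intervention on an internal node.

```python
# Minimal sketch: a computation represented as a causal DAG.
# Requires Python 3.9+ for graphlib.
from graphlib import TopologicalSorter

# parents of each node; input nodes have no parents
parents = {"x": [], "y": [], "and": ["x", "y"], "not": ["and"], "out": ["not"]}

# structural equations: each node is a deterministic function of its parents
equations = {
    "x": lambda: 1,
    "y": lambda: 0,
    "and": lambda x, y: x & y,
    "not": lambda a: 1 - a,
    "out": lambda n: n,
}

def evaluate(do=None):
    """Evaluate the circuit; `do` overrides nodes (a hard intervention)."""
    do = do or {}
    values = {}
    for node in TopologicalSorter(parents).static_order():
        if node in do:
            values[node] = do[node]  # cut incoming edges, pin the value
        else:
            values[node] = equations[node](*(values[p] for p in parents[node]))
    return values

print(evaluate())               # ordinary run of the computation
print(evaluate(do={"and": 1}))  # counterfactual run: intervene on an internal wire
```

The point of the toy is just that "running the program" and "answering a do() query" are the same traversal, so any computation of this shape carries its causal structure with it.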
> This raises the issue of abstraction—the core problem of embedded agency. … how can one causal diagram (possibly with symmetry) represent another in a way which makes counterfactual queries on the map correspond to some kind of counterfactual on the territory?
Great question; I really think someone should look more carefully into this. A few potentially related papers:

https://arxiv.org/abs/1105.0158
https://arxiv.org/abs/1812.03789
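As a toy illustration of what that map/territory condition might look like (my own sketch, not taken from the papers above): a fine-grained "territory" model with two micro rain variables, a coarse-grained "map" with one macro rain variable, and a check that a do() on the map agrees with the corresponding micro-level do() on the territory.

```python
# Toy abstraction check: does the counterfactual on the map match the
# abstracted counterfactual on the territory?
import random

def territory(do_total_rain=None):
    # micro-variables: rain in two halves of a field
    r1, r2 = random.gauss(1.0, 0.1), random.gauss(1.0, 0.1)
    if do_total_rain is not None:
        # the micro intervention that the macro do() is taken to abbreviate:
        # impose the total and split it evenly across the micro-variables
        r1 = r2 = do_total_rain / 2
    return {"rain": r1 + r2, "crop": 2.0 * (r1 + r2)}

def map_model(do_rain=None):
    # macro-variable: total rain; same structural equation for crop yield
    rain = random.gauss(2.0, 0.14) if do_rain is None else do_rain
    return {"rain": rain, "crop": 2.0 * rain}

# the counterfactual query commutes with the abstraction here by construction
print(map_model(do_rain=3.0)["crop"])        # 6.0
print(territory(do_total_rain=3.0)["crop"])  # 6.0, they agree
```

Here the correspondence holds by construction; the hard open problem is characterizing when (and for which interventions) such a correspondence exists at all.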
> In general, though, how to learn causal DAGs with symmetry is still an open question. We’d like something like Solomonoff Induction, but which can account for partial information about the internal structure of the causal DAG, rather than just overall input-output behavior.
Again, agreed. It would be great if we could find a way to make progress on this question.
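For what it's worth, here's a crude sketch of what an MDL-flavored stand-in for that might look like on two binary variables (my own toy; edge count is a cartoon of description length, not a real complexity prior). It also shows why input-output data alone underdetermines the answer: the two edge directions tie exactly, which is why partial information about internal structure would help.

```python
# Toy MDL-style structure scoring: fit (log-likelihood) minus a
# description-length penalty per edge, over all DAGs on two binaries.
import math
from collections import Counter

# toy data generated by A -> B (B copies A with 20% noise)
data = [(0, 0)] * 40 + [(0, 1)] * 10 + [(1, 1)] * 40 + [(1, 0)] * 10

def ll_indep(data):
    # log-likelihood if A and B are independent root nodes
    n = len(data)
    pa = sum(a for a, _ in data) / n
    pb = sum(b for _, b in data) / n
    return sum(math.log((pa if a else 1 - pa) * (pb if b else 1 - pb))
               for a, b in data)

def ll_edge(data, direction):
    # log-likelihood with one edge, fitted by conditional frequencies
    n = len(data)
    pairs = [(a, b) if direction == "A->B" else (b, a) for a, b in data]
    p_parent = sum(p for p, _ in pairs) / n
    counts = Counter(pairs)
    ll = 0.0
    for parent, child in pairs:
        p_par = p_parent if parent else 1 - p_parent
        n_par = counts[(parent, 0)] + counts[(parent, 1)]
        ll += math.log(p_par * (counts[(parent, child)] / n_par))
    return ll

PENALTY = 3.0  # nats per edge: a stand-in for the complexity prior
scores = {
    "A  B (no edge)": ll_indep(data),
    "A -> B": ll_edge(data, "A->B") - PENALTY,
    "B -> A": ll_edge(data, "B->A") - PENALTY,
}
for structure, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{structure}: {score:.1f}")
# Both edge models beat independence, but A->B and B->A score identically:
# observationally equivalent DAGs, so direction is unidentifiable from
# input-output behavior alone.
```

None of this touches the symmetry part, of course; handling DAGs with repeated substructure is exactly where this kind of enumeration breaks down.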
Thanks for a nice post about causal diagrams!