Edit: It now seems likely that I simply misunderstood parts of the post, and that John does in fact propose to model the situation with a causal model in which the noise is independent of everything else happening in the model. He just doesn't spell out how to do that.
This is my first encounter with your more technical writing, so I may lack some of the context to make sense of all of this. Nevertheless, here’s my unfiltered reaction:
This article does seem to contain useful intuitions, but some of the concrete formalizations you propose seem wrong, as far as I currently understand them.
For example, you want to formalize the situation with a computation DAG, which you claim is the same as a causal model. However, the main issue you use to motivate the formalization (namely, that stuff happening at a different time changes the resulting computations) is, as far as I know, not present in the usual formalization of causal models. There, the local functions depend only on the values of the parents and some "noise" that is assumed to be independent of everything else going on in the model. In particular, the noise would then not depend on the order in which the operations happen.
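For concreteness, the standard structural-equation form I have in mind here (my notation, not taken from the post) is

$$X_i = f_i\big(\mathrm{pa}(X_i),\ \varepsilon_i\big), \qquad \varepsilon_1, \dots, \varepsilon_n \text{ mutually independent},$$

so each node's value is a fixed function of its parents and an exogenous noise term, and evaluating the nodes in any topological order yields the same values.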
I think that, for the same reason, visualizing the "Cartesian boundary" as composed of the nodes that visually interface between agent and environment makes little sense. For causal models this would be correct, but if we assume that the order of computations actually does matter, and that local computations can produce "write"-statements from which arbitrary other nodes can read (in particular, nodes in the environment can read nodes in the agent and vice versa), then we cannot so easily screen off the agent from the environment. A toy sketch of what I mean follows below.
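Here is the sketch (the setup is mine, not from the post): two local computations communicate through shared memory rather than through fixed parent-to-child edges, and the result depends on which one runs first; no assignment of independent noise terms to the two nodes reproduces that.

```python
# Toy example (mine, not from the post): an "agent" node and an
# "environment" node that read from and write to shared memory,
# instead of receiving fixed inputs along parent -> child edges.

def agent_step(mem):
    # Agent reads x and writes back x + 1.
    mem["x"] += 1

def env_step(mem):
    # Environment reads x and writes back 2 * x.
    mem["x"] *= 2

# Order 1: agent writes first, environment reads the result.
mem = {"x": 0}
agent_step(mem)
env_step(mem)
print(mem["x"])  # (0 + 1) * 2 = 2

# Order 2: environment writes first, agent reads the result.
mem = {"x": 0}
env_step(mem)
agent_step(mem)
print(mem["x"])  # (0 * 2) + 1 = 1
```

In a causal model, by contrast, the values are determined by the graph structure and the independent noise alone, so any evaluation order gives the same result. Here the interleaving itself carries information, which is exactly why the boundary nodes do not screen the agent off from the environment.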