No. For example, AIXI is what I would regard as essentially a Bayesian agent, but it has a notion of causality because it has a notion of the environment taking its actions as an input.
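For concreteness, here is (roughly, in Hutter's notation) AIXI's expectimax action selection; the detail that matters here is that the action variables enter only as inputs to the universal machine $U$, while the environment programs $q$ are maps from action sequences to percepts:

```latex
a_k \;:=\; \arg\max_{a_k}\sum_{o_k r_k}\;\cdots\;\max_{a_m}\sum_{o_m r_m}
  \bigl(r_k + \cdots + r_m\bigr)
  \sum_{q \,:\, U(q,\, a_{1:m}) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
```

The mixture $\sum_q 2^{-\ell(q)}$ is an ordinary Bayesian prior over computable environments, but no $q$ ever predicts the $a$'s; choosing $a_k$ is an exogenous intervention baked into the formalism, which is the sense in which AIXI treats its actions as inputs to the environment.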
This looks like a symptom of AIXI’s inability to self-model. Of course causality is going to look fundamental when you think you can magically intervene from outside the system.
Do you share the intuition I mention in my other comment? I feel that the way this post reframes CDT and TDT as attempts to patch up bad self-modelling by naive EDT is very similar to the way I would reframe Pearl's position as an attempt to patch up bad self-modelling by naive probability theory a la AIXI.
So your intuition is that causality isn’t fundamental but should fall out of correct self-modeling? I guess that’s also my intuition, and I also don’t know how to make that precise.