I think counterfactuals only make sense when talking about a part of a system from the perspective of another part. Maybe probabilities as well. This is similar to how, in quantum mechanics, a system of two entangled qubits can be in a pure state, yet from the perspective of the first qubit, the second is in a mixed state.
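To make the qubit analogy concrete, here's a minimal sketch (assuming NumPy, and using the Bell state as the example entangled pair): the joint state is pure, but tracing out the first qubit leaves the second in a maximally mixed state.

```python
import numpy as np

# Bell state |Φ+> = (|00> + |11>)/√2 — a pure state of the two-qubit system
psi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
rho = np.outer(psi, psi.conj())  # full density matrix; purity Tr(ρ²) = 1

# Partial trace over qubit 1: the reduced state "seen from" the other part.
# Index layout after reshape is rho[i, j, k, l] with (i, k) for qubit 1.
rho2 = rho.reshape(2, 2, 2, 2).trace(axis1=0, axis2=2)

# rho2 is the maximally mixed state I/2: purity Tr(ρ₂²) = 0.5 < 1,
# even though the whole system's state is pure.
```

So "mixedness" (probability) here is not a fact about the whole world; it appears only when one part is described from the standpoint of another.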
In this view, causality/counterfactuals don’t have to be physically fundamental. For example, you can have a Game of Life world where “all causal claims reduce to claims about state” as you say: “if X then Y” where X and Y are successive states. Yet it makes perfect sense for an AI in that world to use probabilities or counterfactuals over another, demarcated part of the world.
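A toy version of this, as a sketch (the grid size, the glider initial state, and the choice of demarcated region are all illustrative assumptions): the Game of Life update is fully deterministic, so at the level of whole-world states there are only "if X then Y" transitions. A counterfactual only enters when an agent surgically alters a demarcated region and runs the same law forward on both versions.

```python
import numpy as np

def step(grid):
    # Deterministic Life update on a toroidal grid: "if X then Y"
    # over successive whole-world states, no counterfactuals needed.
    n = sum(np.roll(np.roll(grid, i, 0), j, 1)
            for i in (-1, 0, 1) for j in (-1, 0, 1) if (i, j) != (0, 0))
    return ((n == 3) | ((grid == 1) & (n == 2))).astype(int)

# Actual world: a glider in a small toroidal grid (illustrative setup)
world = np.zeros((8, 8), dtype=int)
world[1:4, 1:4] = [[0, 1, 0], [0, 0, 1], [1, 1, 1]]

# Counterfactual over a demarcated part of the world: alter one 3x3
# region, then evolve both worlds under the same deterministic law.
alt = world.copy()
alt[1:4, 1:4] = 0  # "what if this region had been empty?"

future = step(step(world))
alt_future = step(step(alt))
# The counterfactual lives in the comparison between the two
# trajectories, not in the underlying physics.
```

The base-level rules never mention counterfactuals; they arise only in the agent's comparison of trajectories that differ on a demarcated part.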
There is of course a tension between that and logical decision theories, but maybe that’s ok?