Now if you identify “yourself” with the abstract computation, then you can say that “you” are controlling both the world and P. But then aren’t you an ‘inhabitant’ of P just as much as you’re an inhabitant of the world?
Sure, you can read it this way, but keep in mind that P is very simple, doesn’t have you as an explicit “part”, and you’d need to work hard to find the way in which you control its output (to find a dependence). This dependence doesn’t have to be found in order to compute P; it is something external, the way you interpret P.
I agree (maybe, in the opposite direction) that causal control can be seen as an instance of the same principle, and so the sense in which you control “your own” world is no different from the sense in which you control the causally unconnected worlds. The difference is syntactic: the representation of “your own world” explicitly specifies you as a part, while to “find yourself” in a “causally unconnected world”, you need to do a fair bit of inference.
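To make the contrast concrete, here is a minimal sketch (my own illustration; all names are hypothetical, not anything from the formalism): a world program that mentions the agent explicitly, versus a simple P that merely happens to contain the same computation the agent performs.

```python
# Causal control: the world program names the agent explicitly.
# Logical control: P never mentions the agent; the dependence exists only
# because P happens to recompute the same decision, and noticing that is
# interpretation, not something needed to run P.

def agent():
    """Stand-in for 'your' decision procedure."""
    return 1  # the result of some deliberation

def causal_world(action):
    """'Your own' world: you appear as an explicit part, so the
    dependence on agent() can be read off the syntax."""
    return 10 if action == 1 else 0

def P():
    """A very simple 'causally unconnected' world program. It never
    calls agent(); the inlined constant just happens to coincide with
    agent()'s output."""
    choice = 1  # happens to equal agent()'s deliberation result
    return 10 if choice == 1 else 0

print(causal_world(agent()))  # dependence is explicit in the syntax
print(P())                    # dependence must be found by inference
```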
Note that since the program P is so simple, the results of abstract analysis of its behavior can be used by anyone to make decisions. These decisions will be controlled by whoever wants them controlled, and logical uncertainty often won’t allow ruling out the possibility that a given program X controls a conclusion Y made about the universal log program P. This is one way to establish mutual dependence between most “causally unconnected” worlds: have them analyze P.
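As a rough illustration of this mechanism (again with hypothetical names of my own): two world programs with no causal connection both act on a conclusion drawn from analyzing P, so whoever controls that conclusion controls both.

```python
def P():
    """The same very simple program as in the sketch above."""
    choice = 1
    return 10 if choice == 1 else 0

def conclusion_about_P():
    """Stands in for a conclusion Y reached by abstract analysis of P's
    behavior; under logical uncertainty, some program X may turn out to
    control this value."""
    return P() > 0

def world_A():
    # Decision-makers in world A act on the analysis of P...
    return "cooperate" if conclusion_about_P() else "defect"

def world_B():
    # ...and so do decision-makers in world B, with no causal link to A.
    return "build" if conclusion_about_P() else "wait"

# Both worlds now depend on the same logical fact about P, which is the
# mutual dependence described above.
print(world_A(), world_B())
```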
When a world program isn’t presented as explicitly depending on an agent (as it is in causal control), you can have logical uncertainty about whether a given agent controls a given world, which makes it necessary to consider the possibility that more agents control more worlds.