You don’t grok UDT control. You can control the behavior of fixed programs, programs that completely determine their own behavior.
Take a “universal log program”, for example: it enumerates all programs, enumerates all computational steps of each program on all inputs, and writes all of that down on an output tape. This program is very simple; you can easily give a formal specification for it. It doesn’t take any inputs, it just computes the output tape. And yet the output of this program is controlled by what the mathematician ate for breakfast, because the structure of that decision is described by one of the programs logged by the universal log program.
Take another look at the UDT post, keeping in mind that the world-programs completely determine what the world is, they don’t take the agent as a parameter, and world-histories are alternative behaviors for those fixed programs.
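To make the construction concrete, here is a minimal sketch of such a dovetailer in Python; the toy program space (counters mod n+2) is my own stand-in for “all programs”, chosen only to keep the sketch runnable:

```python
from itertools import count, islice

def toy_program(n):
    """Stand-in for 'program number n': a counter mod (n + 2).
    Any enumeration of programs would do; this toy one keeps the sketch runnable."""
    for step in count():
        yield step % (n + 2)

def universal_log():
    """Dovetail over all programs: start one more program per stage,
    advance every started program by one step, and write each step
    to the 'output tape'."""
    running = []
    for stage in count():
        running.append((stage, toy_program(stage)))  # start program number `stage`
        for idx, prog in running:                    # one more step of each
            yield (idx, next(prog))                  # a cell of the output tape

# First few cells of the tape:
print(list(islice(universal_log(), 10)))
```

Nothing in the program’s definition mentions any particular agent; it just writes everything down.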
OK, so you’re saying that A, a human in ‘the real world’, acausally (or ambiently if you prefer) controls part of the output tape of this program P that simulates all other programs.
I think I understand what you mean by this: Even though the real world and this program P are causally disconnected, the ‘output log’ of each depends on the ‘Platonic’ result of a common computation—in this case the computation where A’s brain selects a choice of breakfast. Or in other words, some of the uncertainty we have about both the real world and P derives from the logical uncertainty about the result of that ‘Platonic’ computation.
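A toy illustration of that dependence (my own framing, not from the original exchange): two “logs” that never interact still covary, because each embeds the same abstract computation.

```python
def choose_breakfast():
    """The common 'Platonic' computation; its result is fixed by its
    definition alone, before anyone runs it anywhere."""
    options = ["oatmeal", "eggs"]
    return min(options, key=len)  # some deterministic selection rule

def real_world_log():
    # What happens in A's world depends on the result of that computation.
    return f"A ate {choose_breakfast()} and went to work."

def universal_log_excerpt():
    # P, while enumerating all programs, eventually simulates the same
    # computation and writes the same result somewhere on its tape.
    # The program index and step count here are made up for illustration.
    return f"step 10^40 of program #7081: output {choose_breakfast()!r}"
```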
Now if you identify “yourself” with the abstract computation then you can say that “you” are controlling both the world and P. But then aren’t you an ‘inhabitant’ of P just as much as you’re an inhabitant of the world? On the other hand, if you specifically identify “yourself” with a particular chunk of “the real world” then it seems a bit misleading to say that “you” ambiently control P, given that “you” are yourself ambiently controlled by the abstract computation which is controlling P.
Perhaps this is only a ‘semantic quibble’, but in any case I can’t see how ambient control gets us any nearer to being able to say that we can threaten ‘parallel worlds’ causally disjoint from “the real world”, or receive responses or threats in return.
Now if you identify “yourself” with the abstract computation then you can say that “you” are controlling both the world and P. But then aren’t you an ‘inhabitant’ of P just as much as you’re an inhabitant of the world?
Sure, you can read it this way, but keep in mind that P is very simple, doesn’t have you as an explicit “part”, and you’d need to work hard to find the way in which you control its output (to find a dependence). This dependence doesn’t have to be found in order to compute P; it is something external, the way you interpret P.
I agree (maybe in the opposite direction) that causal control can be seen as an instance of the same principle, and so the sense in which you control “your own” world is no different from the sense in which you control the causally unconnected worlds. The difference is syntactic: the representation of “your own world” specifies you as a part explicitly, while to “find yourself” in a “causally unconnected world”, you need to do a fair bit of inference.
Note that since the program P is so simple, the results of abstract analysis of its behavior can be used by anyone to make decisions. These decisions will be controlled by whoever wants them controlled, and logical uncertainty often won’t let you rule out the possibility that a given program X controls a conclusion Y made about the universal log program P. This is one way to establish mutual dependence between most “causally unconnected” worlds: have them analyze P.
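Continuing the earlier sketch, the “external” work might look like this: computing P never needs the index below, but an interpreter who wants to know which tape cells you control has to supply it (the index 7081 is, of course, hypothetical).

```python
from itertools import islice

MY_PROGRAM_INDEX = 7081  # hypothetical: where "you" turn up in the enumeration

def cells_you_control(tape, limit=1_000_000):
    """Scan a prefix of the tape and keep the cells written by 'your' program.
    Nothing in universal_log() refers to this index; the dependence is found
    by this external interpretation, not used in computing P."""
    return [(idx, step) for idx, step in islice(tape, limit)
            if idx == MY_PROGRAM_INDEX]
```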
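One way to picture this (a sketch under my own toy assumptions): two agents in causally unconnected worlds each condition a decision on the same conclusion Y about P, so whatever controls Y controls both decisions.

```python
def conclusion_Y_about_P():
    """Some abstract claim about the universal log program, e.g. 'program
    #7081 eventually writes "eggs"'. Settling it requires reasoning about
    program #7081 itself, which is exactly where that program gets its grip
    on Y. The constant below is a placeholder for the result of that analysis."""
    return True

def agent_in_world_1():
    return "cooperate" if conclusion_Y_about_P() else "defect"

def agent_in_world_2():
    return "trade" if conclusion_Y_about_P() else "ignore"
```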
When a world program isn’t presented as explicitly depending on an agent (as in causal control), you can have logical uncertainty about whether a given agent controls a given world, which makes it necessary to consider the possibility of more agents potentially controlling more worlds.