You are an algorithm embedded in physics. You are not any of the other people executing this algorithm, you are this one.
There is an algorithm, and there is the person executing the algorithm; these are different entities. Being the algorithm, you are not the person executing it. The algorithm is channeled by the person instantiating it concretely (in full detail), and also by other people who might be channeling approximations to it, for example by only getting to know that the algorithm’s computation satisfies some specification, rather than knowing everything that goes on in that computation.
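A minimal sketch of this distinction, with all names hypothetical: one abstract algorithm, channeled in two ways, either by executing it in full detail or by only establishing that its behavior satisfies a coarse specification.

```python
def algorithm(observation: str) -> str:
    """The abstract algorithm: a pure function from observations to actions."""
    return "cooperate" if observation == "mirror" else "defect"

def full_instance(observation: str) -> str:
    """A concrete instantiation: executes the computation in full detail."""
    return algorithm(observation)

def spec_level_channel(observation: str) -> bool:
    """A coarser channel: doesn't trace the computation, only establishes that
    the behavior satisfies a specification ("cooperates with a mirror").
    Here the check runs the algorithm for simplicity; a real predictor might
    instead prove the property without executing every step."""
    return algorithm(observation) == "cooperate"

print(full_instance("mirror"))       # 'cooperate' -- knows the whole run
print(spec_level_channel("mirror"))  # True -- knows only that the spec holds
```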
Conducting yourself according to these decision theories still causes the physical actions only of this one
The point of the “you are the algorithm” frame is to notice that other instances and predictors of the same algorithm have the same claim to the consequences of its behavior; there is no preferred instance. The actions of the other instances and of the predictors, if they take place in the world, are just as “physical” as those of the putative “primary” instance.
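A toy Newcomb-style sketch of this point (an assumed setup, not from the original text): the same algorithm is channeled in two places, a “primary” instance that picks boxes and a predictor whose box-filling is driven by the same algorithm. Both effects on the world trace back to the one algorithm.

```python
def algorithm() -> str:
    """The one shared decision algorithm."""
    return "one-box"

def predictor_fills_boxes() -> int:
    # The predictor's physical act (filling the opaque box) depends on
    # the algorithm in exactly the same way the instance's act does.
    return 1_000_000 if algorithm() == "one-box" else 0

def instance_takes_boxes() -> int:
    opaque = predictor_fills_boxes()
    return opaque if algorithm() == "one-box" else opaque + 1_000

print(instance_takes_boxes())  # 1000000: both channels moved the world
```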
only acausally connected to the others of which these theories speak
As an algorithm, you are acausally connected to all instances, including the “primary” instance, in the same sense: through their reasoning about you-the-algorithm.
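An illustrative sketch of this reading (names hypothetical): two causally isolated instances each consult the same algorithm, so each instance’s reasoning about you-the-algorithm is what links it to the other, with no message passing between them.

```python
def shared_algorithm() -> str:
    """Decides for every instance at once."""
    return "cooperate"

def instance_a() -> str:
    return shared_algorithm()

def instance_b() -> str:
    # B "reaches" A only by reasoning about (here: re-running) the
    # algorithm they share, never by causal contact.
    predicted_a = shared_algorithm()
    return "cooperate" if predicted_a == "cooperate" else "defect"

print(instance_a(), instance_b())  # both 'cooperate', coordinated acausally
```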
Deciding as if deciding for all is different from causally deciding for all.
I don’t know what “causally deciding” means for algorithms. Deciding as if deciding for all is actually an interesting detail: it’s possible to consider variants where you are deciding only for some, and that stipulation creates different decision problems depending on the collection of instances that are to be controlled by a given decision (a subset of all instances that could be controlled). This can be used to set up coalitions of interventions that the algorithm coordinates. The algorithm instances that are left out of such decision problems are left with no guidance, which is analogous to their failing to compute the specification (to predict/compute/prove an action-relevant property of the algorithm’s behavior), a normal occurrence. It also illustrates that the instances should be ready to pick up the slack when the algorithm becomes unobservable.
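A sketch of “deciding only for some” (all names hypothetical): the decision problem is parameterized by which instances a given decision controls. Instances outside the coalition receive no guidance, analogous to failing to compute the specification, and fall back on a local default.

```python
from typing import Callable, Optional

def make_decision(controlled: set[str]) -> Callable[[str], Optional[str]]:
    """Build a decision that only controls the given coalition of instances."""
    def decide(instance_id: str) -> Optional[str]:
        if instance_id in controlled:
            return "coordinate"  # the coalition the algorithm steers
        return None              # left out: the algorithm is unobservable here
    return decide

def run_instance(instance_id: str, decide: Callable[[str], Optional[str]]) -> str:
    action = decide(instance_id)
    # An instance left without guidance picks up the slack with a default.
    return action if action is not None else "local-default"

decide = make_decision(controlled={"A", "B"})
print([run_instance(i, decide) for i in ("A", "B", "C")])
# ['coordinate', 'coordinate', 'local-default']
```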