This is somewhat dated in the sense that LW-style decision theory later converged on treating agents-that-make-decisions as abstract algorithms rather than as their instances embedded in the world, see discussion of “algorithm” axis of classifying decision theories in this post.
With TAG, I don’t see what their decision theory has to do with the matter. Whatever their decision theory, it is impotent to achieve anything unless their physical instances embedded in the world are able to physically act in the world to achieve their aims, which is the thing that incompatibilists deny.
The point is about the frame of Yudkowsky’s explanation, which says “you are physics” instead of “you are an algorithm”. The latter seems convergently more useful for decision theory of embedded agents, which can be predicted by other agents or can have multiple copies. So this doesn’t concern some prior meaning of “free will”; it motivates caring about a notion of free will that has to do with the abstract computation of an agent’s decisions rather than with agent instances embedded in the world.
You are an algorithm embedded in physics. You are not any of the other people executing this algorithm, you are this one. Conducting yourself according to these decision theories still causes the physical actions only of this one, and is only acausally connected to the others of which these theories speak. Deciding as if deciding for all is different from causally deciding for all.
You are an algorithm embedded in physics. You are not any of the other people executing this algorithm, you are this one.
There is an algorithm and the person executing the algorithm, and these are different entities. Being the algorithm, you are not the person executing it. The algorithm is channeled by the person instantiating it concretely (in full detail), as well as by other people who might be channeling approximations to it, for example by only getting to know that the algorithm’s computation satisfies some specification rather than knowing everything that goes on in its computation.
Conducting yourself according to these decision theories still causes the physical actions only of this one
The use of the “you are the algorithm” frame is noticing that other instances and predictors of the same algorithm have the same claim to the consequences of its behavior; there is no preferred instance. The actions of the other instances and of the predictors, if they take place in the world, are just as “physical” as those of the putative “primary” instance.
only acausally connected to the others of which these theories speak
As an algorithm, you are acausally connected to all instances, including the “primary” instance, in the same sense: through their reasoning about you-the-algorithm.
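This point can be made concrete with a toy sketch (my illustration, not from the original discussion): the “algorithm” is a pure function, and the “primary” instance, an exact copy, and a predictor simulating it all get their behavior from that one computation, so none of them is privileged.

```python
def decide(observation: str) -> str:
    """The abstract algorithm: a pure function from observations to actions."""
    return "cooperate" if observation == "twin" else "defect"

# Hypothetical "embeddings" of the algorithm: each one just runs it.
primary_action   = decide("twin")  # the putative "primary" instance
copy_action      = decide("twin")  # an exact copy elsewhere in the world
predicted_action = decide("twin")  # a predictor simulating the algorithm

# All three actions trace back to the same computation; no instance
# has a stronger claim to the consequences than the others.
assert primary_action == copy_action == predicted_action == "cooperate"
```

The design point the sketch leans on is purity: because `decide` has no hidden state tied to any one caller, every instance and every faithful predictor necessarily produces the same action.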
Deciding as if deciding for all is different from causally deciding for all.
I don’t know what “causally deciding” means for algorithms. Deciding as if deciding for all is actually an interesting detail: it’s possible to consider variants where you are deciding only for some, and that stipulation creates different decision problems depending on which collection of instances is to be controlled by a given decision (a subset of all instances that could be controlled). This can be used to set up coalitions of interventions that the algorithm coordinates. The algorithm instances that are left out of such decision problems are left with no guidance, which is analogous to them failing to compute the specification (to predict/compute/prove an action-relevant property of the algorithm’s behavior), a normal occurrence. It also illustrates that the instances should be ready to pick up the slack when the algorithm becomes unobservable.
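A minimal sketch of the subset-of-instances idea, under my own illustrative framing (the names `Instance` and `controlled` are hypothetical, not from the original): the decision problem is parameterized by which instances a given decision controls, and the instances left out receive no guidance from it.

```python
from dataclasses import dataclass

@dataclass
class Instance:
    name: str
    controlled: bool  # is this instance in the coalition the decision covers?

def decide_for(instances: list[Instance]) -> dict[str, str]:
    """Return actions only for controlled instances; the rest get nothing,
    analogous to failing to compute the algorithm's specification."""
    return {i.name: "act" for i in instances if i.controlled}

instances = [Instance("A", True), Instance("B", True), Instance("C", False)]
actions = decide_for(instances)

assert actions == {"A": "act", "B": "act"}
assert "C" not in actions  # "C" must pick up the slack on its own
```

Different choices of the `controlled` flags carve out different decision problems from the same set of instances, which is the sense in which the stipulation above creates a family of problems rather than one.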
You still haven’t said why it motivates that. Even if you are not talking about a prior definition of free will, why does your novel definition have to do with algorithms?
Why would decision theory determine the nature of free will? I would have thought it was the other way round: free will has implications for what decisions are.