Suppose there is a bounded version of your algorithm where you don’t have much time to think. If you are thinking for too long, the algorithm can no longer channel your thinking, and so you lose influence over its conclusions. A better algorithm has a higher time bound on the thinking loop, but that’s a different algorithm! And the low-time-bound algorithm might be the only implementation of you present in the physical world, yet it’s not the algorithm you want to follow.
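A minimal sketch of what such a time bound might look like, purely for illustration (the names `bounded_deliberation`, `think_step`, and the concrete bounds are my assumptions, not anything specified above):

```python
import time

def bounded_deliberation(think_step, initial_answer, time_bound_s):
    """Toy time-bounded thinking loop (all names here are illustrative).

    `think_step` refines a tentative answer; once `time_bound_s` elapses,
    the loop stops, and any thinking that didn't finish in time has no
    influence on the conclusion that gets returned.
    """
    answer = initial_answer
    deadline = time.monotonic() + time_bound_s
    while time.monotonic() < deadline:
        answer, done = think_step(answer)
        if done:
            break
    return answer

# A "better" algorithm with a higher bound is literally a different algorithm:
# the same loop body with a different constant baked into its definition.
low_bound_agent  = lambda step, a0: bounded_deliberation(step, a0, time_bound_s=0.1)
high_bound_agent = lambda step, a0: bounded_deliberation(step, a0, time_bound_s=10.0)
```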
So it’s useful to see the sense in which libertarian free will has it right. You are not the algorithm. If your algorithm behaves differently from how you behave, then so much worse for the algorithm. Except you are no longer in control of it in that case, so it might be in your interest to restrict your behavior to what your algorithm can do, or else you risk losing influence over the physical world. But if you can build a different algorithm that is better at channeling your preferred behavior than your current algorithm, that’s an improvement.
I never understood that point, “you are not the algorithm”. If you include the self-modification part in the algorithm itself, wouldn’t you “be the algorithm”?
It’s not meaningfully self-modification if you are building a new separate algorithm in the environment.
Hmm. So, suppose there are several parts to this process: the main “algorithm”, an analyzer of the main algorithm’s performance, and an algorithm modifier that “builds a new separate algorithm in the environment”. All three are parts of the same agent, and so can just be called the agent’s algorithm, no?
A known algorithm is a known finite syntactic object, while an agent doesn’t normally know its own behavior in that form, unless it ties its identity to an existing algorithm. And that option doesn’t seem particularly motivated, as illustrated by the fact that it can be desirable to build yourself a new algorithm.
Of course, if you just take the whole environment where the agent is embedded (with the environment being finite in nonblank data) and call that “the algorithm”, then any outcome in that environment is determined by that algorithm, and that somewhat robs notions that disagree with the algorithm of their motivation (though not really). But in more realistic situations there is unbounded unknown data in the environment, so no algorithm fully describes its development, and the choice of algorithm/data separation is a matter of framing.
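A toy illustration of that framing point (the example and names like `net_price_a` are my own assumptions, not anything from the discussion): the same behavior can be described with a constant baked into the algorithm’s finite description, or with that constant treated as data read from the environment.

```python
TAX_RATE = 0.2  # framing A: the rate is part of the algorithm itself

def net_price_a(price: float) -> float:
    # the algorithm's finite description fixes the rate
    return price * (1 + TAX_RATE)

def net_price_b(price: float, environment: dict) -> float:
    # framing B: the same rule, but the rate is data found in the
    # environment rather than fixed by the algorithm's description
    return price * (1 + environment["tax_rate"])

# Both framings produce the same outcome; where the algorithm ends and the
# environment's data begins is a modeling choice.
assert net_price_a(100.0) == net_price_b(100.0, {"tax_rate": 0.2})
```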
In particular, an agent whose identity is not its initial algorithm can have preferences found in the environment, whose data is not part of the initial algorithm at all, can’t be inferred from it, and can only be discovered by looking at the environment, perhaps only ever partially. Most decision theory setups can’t understand that initial algorithm as an agent, since it’s usually an assumption of a decision algorithm that it knows what it optimizes for.
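A hedged sketch of that contrast, with `observe_preference` and the other names purely hypothetical: in the standard setup the decision algorithm is handed its utility function as part of its own description, while an agent whose preferences live in the environment can only query them, possibly incompletely.

```python
from typing import Callable, Optional

# Standard decision-theory setup: the algorithm is handed its utility
# function, so "what it optimizes for" is part of its own finite description.
def choose_with_known_utility(options: list[str],
                              utility: Callable[[str], float]) -> str:
    return max(options, key=utility)

# Contrasting picture: preferences are data in the environment that can only
# be queried; `observe_preference` is hypothetical and may return None for
# options the agent never manages to evaluate.
def choose_with_discovered_preference(
        options: list[str],
        observe_preference: Callable[[str], Optional[float]]) -> str:
    evaluated = [(o, observe_preference(o)) for o in options]
    known = [(o, score) for o, score in evaluated if score is not None]
    # With only partial knowledge of its own preferences, the agent can at
    # best pick the best option among those it managed to evaluate.
    return max(known, key=lambda pair: pair[1])[0] if known else options[0]
```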