Let me use a crude example. Say a person is facing the choice of taking 10 dollars versus getting a healthy meal. What should he do?
We can analyze it by imagining we take his perspective, considering the outcome of each action, and choosing the one we like better based on some criteria. This process assumes the choice is unrestricted from the outset. (More on this later.)
Alternatively, we can just analyze that person physically: monitor the electrical signals he receives from his eyes and how the neural networks in his brain function, and thereby reductively deduce his action. In this method, there is no alternative action at all; the whole analysis is derivative. And we did not take his perspective. As I said, consciousness and free will are always tied to the self. In this analysis, free will is not presupposed for the experimental subject (he is not the self).
When we analyze a computer bug, it is actually the second type of method we are using. It is no different from trying to figure out why an intricate machine doesn’t work. That is not taking the program’s perspective. If I were taking the computer/program’s perspective, then I would not be reductively studying it but rather assuming I am the program, with subjective experience attached to it. Similar to “What Is It Like to Be a Bat?”, it would be knowing what it is like to be that program. Doing so, the analysis would presuppose that “I” (the program) have free will as well.
That may be a little hard to imagine due to the irreducible nature of subjective experience. I find it easier to think from the opposite direction. Say we are actually computer simulations (as in various simulation arguments); then we already know what it is like to be a program. It is also easy to see why, from the simulator’s viewpoint, we have no free will.
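To make the simulator’s viewpoint concrete, here is a minimal sketch in Python (the agent, its state variables, and the decision rule are all invented for illustration). From the outside, the simulated “decision” is just a function from state to action; run it twice with the same state and nothing else could have happened.

```python
# A toy sketch, with invented names: from the simulator's viewpoint,
# the agent's "choice" is purely a function of its state.

def simulated_agent(hunger: float, greed: float) -> str:
    """A toy 'agent' deciding between the $10 and the healthy meal."""
    # From the outside this is pure derivation: state in, action out.
    return "healthy meal" if hunger > greed else "10 dollars"

# Same state, same outcome; there are no alternatives to speak of.
print(simulated_agent(hunger=0.8, greed=0.3))  # healthy meal
print(simulated_agent(hunger=0.8, greed=0.3))  # healthy meal, again
```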
As to why the first-person self has to presuppose free will: because I have to first assume my thoughts are based on logic and reason, rather than some predetermined mumbo-jumbo. Otherwise, there are no reliable beliefs or rational reasoning at all, which would be self-defeating. That is especially important if perspective-based reasoning is fundamental.
> Let me use a crude example. Say a person is facing the choice of taking 10 dollars versus getting a healthy meal. What should he do?
The presupposition of free will in this question is not the act of taking the other person’s perspective; it is the framing of the question in terms of what he should do (assuming he is free to do it), not what he does.
> When we analyze a computer bug, it is actually the second type of method we are using.
Please do not tell me how my thought processes proceed when debugging, thank you very kindly. I’ll merely tell you that you have a bad case of Typical Mind Fallacy and leave it at that.
> The presupposition of free will in this question is not the act of taking the other person’s perspective; it is the framing of the question in terms of what he should do (assuming he is free to do it), not what he does.
When you make decisions such as which movie to watch or which shirt to buy, do you ever do so by analyzing your brain’s structure and function and thus deducing what result it would produce? I will take a wild guess and say that’s not how you think. You decide by comparing the alternatives based on preference. This reasoning is clearly different from reductively studying the brain; I wouldn’t call it just a framing difference.
As for debugging, I am not telling you how to do it. Debugging is essentially figuring out why a Turing machine is not functioning as intended. One can follow its actions step by step to find the error, but that would still be reductively analyzing it rather than imagining oneself to be the program. The latter would involve imagining how it feels to be the program, and I don’t even think that is possible. So I’m certainly not saying to assume the program’s “mind” is the same as your own, which is what the Typical Mind Fallacy describes.
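To illustrate what I mean by following its actions step by step (the machine and its transition table below are made up for this sketch): every transition follows mechanically from the current state and symbol, so tracing the run is reductive analysis of the machine, never perspective-taking.

```python
# A toy machine, invented for illustration: "debugging" it means stepping
# through transitions fully determined by (state, symbol).
tape = list("1101")
state, head = "scan", 0

# (state, symbol) -> (next state, symbol to write, head movement)
rules = {
    ("scan", "1"): ("scan", "1", +1),
    ("scan", "0"): ("flip", "1", +1),  # the planted "bug": 0s get overwritten
    ("flip", "1"): ("halt", "1", 0),
}

while state != "halt" and 0 <= head < len(tape):
    state, tape[head], move = rules[(state, tape[head])]
    head += move
    print(state, "".join(tape))  # each line follows from the previous one
```

Nothing in that trace requires knowing what it is like to be the machine; the error is located entirely from the outside.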
> You decide by comparing the alternatives based on preference.
Which is not obviously free. If you have clear, unconflicted preferences all the time, then they determine your actions. In fact, that’s a common argument against free will.
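The point can be made almost mechanical (the preference scores below are invented): comparing alternatives by preference is formally an argmax, and an argmax over fixed, unconflicted preferences yields the same action every time.

```python
# A sketch with invented numbers: clear, unconflicted preferences
# determine the "choice" completely.
preferences = {"10 dollars": 0.4, "healthy meal": 0.9}

# Comparing alternatives by preference is just argmax; with fixed
# preferences the outcome can never differ.
action = max(preferences, key=preferences.get)
print(action)  # healthy meal, every single time
```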
> As to why the first-person self has to presuppose free will: because I have to first assume my thoughts are based on logic and reason, rather than some predetermined mumbo-jumbo.
If you are determined by forces outside of you, that does not guarantee you are devoid of logic and reason. A computer is constructed to be logically correct, for instance.
Equally, having an ability to choose doesn’t guarantee that you will choose reason.