This doesn’t hold in my experience. In many ways quite the contrary!
Firstly, I often reason about bugs in computer programs from the computer’s point of view: what does it “see” as input, what code does it execute, what does that mean for its behaviour, and so on. I certainly don’t think it has free will in any but the most limited sense.
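To make that concrete, here is a toy sketch (the function and its input are invented for this comment, not taken from anywhere in the post): reasoning about a bug “from the computer’s point of view” just means tracing what it receives and what it executes.

```python
def average(values):
    # intended behaviour: the arithmetic mean of the inputs
    total = 0
    for v in values:
        total += v
    return total / (len(values) - 1)  # bug: the divisor should be len(values)

# From the computer's point of view: it "sees" [2, 4, 6], total becomes 12,
# the divisor is 3 - 1 = 2, so it returns 6.0 instead of the intended 4.0.
# Nothing in that account requires free will, only the code it executes.
print(average([2, 4, 6]))  # prints 6.0
```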
More widely, to reason from some other person’s perspective is to attempt to model their actions, thoughts, and decisions. Where does their “free will” come into this? Nowhere, as far as I can tell. If anything the supposition of free will detracts from any perspective-based reasoning.
I take this as a description of an ingredient of the notion of perspective-based reasoning as it’s being defined, not a claim about a prior notion of perspective-based reasoning. So perspective-based reasoning is something with free will built in. If free will is the ability to consider the possible worlds where each available decision is taken and to enact one of them, this should work for other agents as well. But for any one possible action of another agent, there is still a whole collection of possible worlds corresponding to the possible actions of the agent in charge of the perspective from which it’s being considered.
As far as I can tell, the post does not just define a new notion called “perspective-based reasoning”. It appears to be using the term to cover every possible form of reasoning that humans can do based on imagining the world from another perspective:
Though we cannot think without a perspective, we are capable of putting ourselves into others’ shoes. In another word, we can imagine thinking from different perspectives.
If it was intended to define a new term that applies only to a subset of such thought, then it completely missed the mark from my point of view.
Factual claims and hypothetical constructions are hopelessly jumbled together in these posts, so making sense of them requires sorting this out. I think the appropriate role for that provision is as an axiom for the notion, not as an assertion of fact. There may be a further assertion of fact, but it’s ignorable, while the axiom has some hope of being useful.
(It doesn’t look feasible to mine these posts for something not already familiar, but maybe they sketch their general topic enough to communicate what it is.)
Let me use a crude example. Say a person is facing the choice of taking 10 dollars versus getting a healthy meal. What should he do?
We can analyze it by imagining taking his perspective, considering the outcome of both actions, and choosing the one I like better based on some criteria. This process assumes the choice is unrestricted from the onset. (More on this later.)
Alternatively, we can just analyze that person physically: monitor what electrical signals he receives from his eyes and how the neural networks in his brain function, and thereby reductively deduce his action. In this method, there is no alternative action at all. The whole analysis is derivative, and we did not take his perspective. As I said, consciousness and free will always belong to the self. In this analysis, free will is not presupposed for the experimental subject (it is not the self).
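A rough sketch of the contrast, with the preference numbers and the “brain model” invented purely for illustration: the first method treats both options as open and ranks them; the second just runs a mechanism forward and reads off its single output.

```python
# Method 1: perspective-taking. Both options are treated as genuinely open,
# and the decision falls out of comparing imagined outcomes by preference.
def choose_by_preference(options, utility):
    return max(options, key=utility)

options = ["take $10", "healthy meal"]
utility = {"take $10": 0.4, "healthy meal": 0.7}  # invented preferences
print(choose_by_preference(options, lambda o: utility[o]))  # -> healthy meal

# Method 2: reductive analysis. No alternatives are entertained; we simulate
# the mechanism (a stand-in for signals and neural activity) and deduce the
# single action it will produce.
def simulate_subject(stimulus):
    return "healthy meal" if stimulus["hunger"] > 0.5 else "take $10"

print(simulate_subject({"hunger": 0.8}))  # deduced, not chosen
```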
When we analyze a computer bug, it is actually the second type of method we are using. It is no different from trying to figure out why an intricate machine doesn’t work. That is not taking the program’s perspective. If I am taking the computer/program’s perspective, then I will not be reductively studying it but rather assuming I am the program, with subjective experience attached to it. Similar to “What Is It Like to Be a Bat?”, it would be knowing what it is like to be that program. In doing so, the analysis would presuppose that “I” (the program) have free will as well.
That may be a little hard to imagine due to the irreducible nature of subjective experience. I find it easier to think from the opposite direction. Say we are actually computer simulations (as in various simulation arguments); then we already know what it is like to be a program. It is also easy to see why, from the simulator’s viewpoint, we have no free will.
As to why the first-person self has to presuppose free will: because I have to first assume my thought is based on logic and reason, rather than some predetermined mumbo-jumbo. Otherwise, there are no reliable beliefs or rational reasoning at all, which would be self-defeating. That is especially important if perspective-based reasoning is fundamental.
Let me use a crude example. Say a person is facing the choice of taking 10 dollars versus getting a healthy meal. What should he do?
The presupposition of free will in this question is not the act of taking the other person’s perspective; it is the framing of the question in terms of “what should he do?” (assuming he is free to do it), not “what does he do?”
When we analyze a computer bug, it is actually the second type of method we are using.
Please do not tell me how my thought processes proceed when debugging, thank you very kindly. I’ll merely tell you that you have a bad case of Typical Mind Fallacy and leave it at that.
>The presupposition of free will in this question is not the act of taking the other person’s perspective; it is the framing of the question in terms of “what should he do?” (assuming he is free to do it), not “what does he do?”
When you make decisions such as which movie to watch, which shirt to buy, etc., do you ever do so by analyzing your brain’s structure and function and thus deducing what result it would produce? I will take a wild guess and say that’s not how you think. You decide by comparing the alternatives based on preference. This reasoning is clearly different from reductively studying the brain; I wouldn’t call it just a framing difference.
As for debugging, I am not telling you how to do it. Debugging is essentially figuring out why a Turing machine is not functioning as intended. One can follow its actions step by step to find the error, but that would still be reductively analyzing it rather than imagining oneself being the program. The latter would involve imagining how it feels to be the program, which I don’t even think is possible. So I’m certainly not saying to assume the program’s “mind” is the same as your own, which is what the Typical Mind Fallacy describes.
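To illustrate what I mean by following its actions step by step, here is a minimal sketch (the machine and tape are made up for the sake of example): the whole behaviour is deduced from the transition table, and at no point do we imagine what it is like to be the machine.

```python
# A tiny Turing-machine-style stepper, invented for illustration only.
def run(transitions, tape, state="start", head=0, max_steps=20):
    tape = dict(enumerate(tape))
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, "_")
        state, write, move = transitions[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
        print(state, head, sorted(tape.items()))  # the step-by-step trace
    return tape

# Intended behaviour: flip every 1 to 0 until the first blank. If the machine
# did not function as intended, the error would show up in the trace above,
# found by reductive inspection rather than by taking its perspective.
flip = {("start", "1"): ("start", "0", "R"),
        ("start", "_"): ("halt", "_", "R")}
run(flip, list("111"))
```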
You decide by comparing the alternatives based on preference.
Which is not obviously free. If you have clear, unconflicted preferences all the time then they determine your actions. In fact, that’s a common argument against free will.
As to why the first-person self has to presuppose free will: because I have to first assume my thought is based on logic and reason, rather than some predetermined mumbo-jumbo.
If you are determined by forces outside of you, that does not guarantee you are devoid of logic and reason. A computer is constructed to be logically correct, for instance.
Equally, having an ability to choose doesn’t guarantee that you will choose reason.