I suppose if I had the feeling that I could predict my own actions with certainty—as though I were able to compute my own input-output table—that would be like feeling like I didn’t have free will.
I don’t like introspecting, in general. It consistently makes me feel bad; my train of thought inevitably turns to the things about me that make me upset. Better to stay distracted and keep my mind on more pleasant things.
I suppose if I had the feeling that I could predict my own actions with certainty—as though I were able to compute my own input-output table—that would be like feeling like I didn’t have free will.
Yet people ordinarily predict their own actions all the time, quite reliably. For example, I predict that in a few minutes I will turn off the computer and go home, buying groceries at the supermarket on the way, and I already have a rough idea of what I will be doing this evening and tomorrow morning. There are factors that could interfere with this, but they rarely do, and the unpredictability comes from external sources (e.g. the supermarket is unexpectedly closed), not from me.
It is also interesting that people facing up to some hard moral choice will often, afterwards, talk in such terms as “I could not have done other than I did” (here is a random example of what I mean), or “Here I stand, I can do no other.” (The attribution of the last to Luther is disputed, but whoever wrote it, it was undoubtedly written.)
I can also predict that unless your mind has gone awry by contemplating the conundrum of free will, you are not going to deliberately step in front of a bus. Are you “free” to do so when you have (I hope) compelling reasons not to?
For example, I predict that in a few minutes I will turn off the computer and go home, buying groceries at the supermarket on the way
And that is exactly what I did. And for my next trick, I ate because I was hungry, and tonight I will sleep when I am tired.
The sensation of free will is the experience that our acts seem to us to come out of nowhere. But they do come out of somewhere: a part of us that is inaccessible to experience. The sensation is real, but to interpret it at face value is like imagining that your head has no back because you cannot see it.
David Velleman’s concept of epistemic freedom provides a way to agree with both CronoDAS and Richard here. We can “predict” our acts in the broad sense of forming correct expectations. But we also know that we could form the opposite expectation in many cases and be correct in that case too. Last time I bought ice cream, I expected to say “chocolate” to the person behind the counter, and I did. But I could have expected to say “raspberry” instead, and if I had, that’s what I’d have said.
Some prophecies are self-fulfilling. When I said “I’ll have chocolate”, that not only correctly predicted the outcome, but caused it as well. Self-fulfilling prophecies often allow multiple alternative prophecies, any of which will be fulfilled if made. Velleman says that intentions for immediate actions are typically self-fulfilling prophecies. There may be more to intention than that, but there is at least this much: that intentions do involve expectation, and the intention itself (and/or closely associated psychological processes) tends to bring it about.
Thank you for not dodging the question with philosophical considerations!
I suppose if I had the feeling that I could predict my own actions with certainty—as though I were able to compute my own input-output table—that would be like feeling like I didn’t have free will.
Interesting, so just going with the flow and not knowing what might happen next would feel like more free will to you? That seems almost like the opposite of what kalium suggests.
Interesting, so just going with the flow and not knowing what might happen next would feel like more free will to you? That seems almost like the opposite of what kalium suggests.
::follows link::
the main difference is that I would do things without a need to exert “willpower,” and with less internal monologue/debate.
“Willpower” and “internal monologue/debate” seem like processes that reflect uncertainty about future actions—there’s a subjective sense that it’s possible that I could have chosen to do something else. I’m not sure I see any difference, really.
It’s explicitly opposed to my response here. I feel like if I couldn’t predict my own actions with certainty, then I wouldn’t have free will (more that I wouldn’t have a will than that it wouldn’t be free, although I tend to think that the “free” component of free will is nonsense in any case).

Incidentally, how do you imagine free will working, even just in some arbitrary logically possible world? It sounds a lot like you want to posit a magical decision-making component of your brain that is not fully determined by the prior state of the universe, but which also always does what “you” want it to. Non-determinism is fine, but I can’t imagine how you could have the feeling of free will without making consistent choices. Wouldn’t you feel weird if your decisions happened at random?
I sort of think of “agent with free will” as a model for “that complicated thing that actually does determine someone’s actions, which I don’t have the data and/or computational capacity to simulate perfectly.” Predicting human behavior is like predicting weather, turbulent fluid flow, or any other chaotic system: you can sort of do it, but you’ll start running into problems as you aim for higher and higher precision and accuracy.

Does that make any sense? (I’m not sure it does.)
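(A toy sketch of the chaotic-system point, for concreteness; the logistic map below is just a stand-in for any chaotic process, not a model of human behavior. Two trajectories that start a millionth of a unit apart track each other for a while and then end up unrelated, which is the sense in which extra precision stops helping.)

```python
# Toy illustration: two runs of the logistic map (a standard chaotic system)
# started a millionth apart. Early steps agree; later ones eventually diverge.

def logistic(x, r=3.9):
    """One step of the logistic map x -> r*x*(1-x); chaotic for r near 3.9."""
    return r * x * (1 - x)

a, b = 0.200000, 0.200001  # initial conditions differing by 1e-6
for step in range(1, 51):
    a, b = logistic(a), logistic(b)
    if step % 10 == 0:
        print(f"step {step:2d}: {a:.6f} vs {b:.6f}  (gap {abs(a - b):.6f})")
```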
I don’t think it’s particularly meaningful to use “free will” for that instead of “difficult to predict.” I mean, you don’t say that weather has free will, even though you can’t model it accurately. Applying the label only to humans seems a lot like trying to sneak in a connotation that wasn’t part of the technical definition. I think that your concept captures some of the real-world uses of the term “free will” but that it doesn’t capture enough of the usage to help deal with the confusion around it. In particular, your definition would mean that weather has free will, which is a phrase I wouldn’t be surprised to hear in colloquial English but doesn’t seem to be talking about the same thing that philosophers want to debate.
I don’t mean to imply that being difficult to predict is a sufficient condition for having free will… I’m kind of confused about this myself.
Hmmm....
Anytime you decide to do something and then act upon that decision, couldn’t you say you predicted your own action?
I’m going to move my hand!
*moves hand*
Ha! No free will!
Well, yes, sometimes I can predict fairly accurately. Other times, it’s harder.