Interesting, so just going with the flow and not knowing what might happen next would feel like more free will to you? That seems almost like the opposite of what kalium suggests.
::follows link::
The main difference is that I would do things without a need to exert “willpower,” and with less internal monologue/debate.
“Willpower” and “internal monologue/debate” seem like processes that reflect uncertainty about future actions—there’s a subjective sense that I could have chosen to do something else. I’m not sure I see any difference, really.
It’s explicitly opposed to my response here. I feel like if I couldn’t predict my own actions with certainty, then I wouldn’t have free will (more that I wouldn’t have a will than that it wouldn’t be free, although I tend to think that the “free” component of free will is nonsense in any case).

Incidentally, how do you imagine free will working, even just in some arbitrary logically possible world? It sounds a lot like you want to posit a magical decision-making component of your brain that is not fully determined by the prior state of the universe, but which also always does what “you” want it to. Non-determinism is fine, but I can’t imagine how you could have the feeling of free will without making consistent choices. Wouldn’t you feel weird if your decisions happened at random?
I sort of think of “agent with free will” as a model for “that complicated thing that actually does determine someone’s actions, which I don’t have the data and/or computational capacity to simulate perfectly.” Predicting human behavior is like predicting weather, turbulent fluid flow, or any other chaotic system: you can sort of do it, but you’ll start running into problems as you aim for higher and higher precision and accuracy.
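To make the chaos analogy concrete, here’s a toy sketch in Python (purely illustrative; the logistic map and the starting values are just a standard stand-in for a chaotic system, not a claim about brains or weather). Two trajectories that start a hair apart track each other for a while and then diverge completely:

```python
# Logistic map at r = 4.0, a standard toy chaotic system (illustrative only).
# x is the "true" state; y is our measurement of it, off by one part in 1e10.

def logistic(x, r=4.0):
    """One iteration of the logistic map; r = 4.0 is in the chaotic regime."""
    return r * x * (1.0 - x)

x, y = 0.2, 0.2 + 1e-10  # true state vs. a slightly imperfect measurement
for step in range(60):
    if step % 10 == 0:
        print(f"step {step:2d}: x={x:.6f}  y={y:.6f}  |x-y|={abs(x - y):.2e}")
    x, y = logistic(x), logistic(y)
```

The gap grows by roughly a factor of two per iteration, so prediction works fine over a short horizon and becomes hopeless at high precision over a long one, which is the sense in which you can “sort of” do it.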
Does that make any sense? (I’m not sure it does.)
I don’t think it’s particularly meaningful to use “free will” for that instead of “difficult to predict.” After all, you don’t say that weather has free will, even though you can’t model it accurately. Applying the label only to humans seems a lot like trying to sneak in a connotation that wasn’t part of the technical definition. I think your concept captures some of the real-world uses of the term “free will,” but it doesn’t capture enough of the usage to help deal with the confusion around it. In particular, your definition would mean that weather has free will, a phrase I wouldn’t be surprised to hear in colloquial English but one that doesn’t seem to be talking about the same thing philosophers want to debate.
I don’t mean to imply that being difficult to predict is a sufficient condition for having free will… I’m kind of confused about this myself.