When I ask “what will I do in situation X?”, this model gives multiple answers.
…
But when the specific instance of situation X really happens, I will do only one of the possible actions.
…
“Free will” is then a hypothetical human ability that (at the last minute) collapses the set of possible actions into one action.
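A minimal sketch of this model (my own toy illustration; the situations, actions, and the random “collapse” step are invented stand-ins) might look like this:

```python
# Toy illustration: the self-model returns a *set* of possible actions,
# while the actual event collapses that set into a single action.
import random

# Hypothetical self-model: for each situation, several plausible actions.
SELF_MODEL = {
    "offered dessert": {"accept", "decline", "ask for a smaller piece"},
    "criticized at work": {"argue back", "stay silent", "ask for details"},
}

def predict(situation: str) -> set[str]:
    """What the model says I *might* do -- multiple answers."""
    return SELF_MODEL.get(situation, {"improvise"})

def act(situation: str) -> str:
    """What actually happens -- exactly one element of the predicted set.
    The random choice here is only a placeholder for whatever does the collapsing."""
    return random.choice(sorted(predict(situation)))

print(predict("offered dessert"))  # several possible actions
print(act("offered dessert"))      # exactly one of them
```

The prediction step returns a set; the actual event returns a single element of it, and “free will” is the name for whatever does the collapsing.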
These are important observations. But they still leave a lot unexplained. For example, you have models of many physical systems besides yourself. Many of these physical systems are too complicated for your model to predict. Imagine, for example, a mechanical random number generator whose inner workings you don’t know. Here too, your model gives multiple answers about what the random number generator will do, even though the generator will do only one of those possible things. Why do you not attribute free will to the generator? What is different about the way that you think about yourself (and other people) that leads you to attribute free will in one case, but not in the other?
Why do you not attribute free will to the generator?
Humans naturally attribute free will to it, but we can learn otherwise. So the question is: why can we learn (and then genuinely feel) that the generator does not have free will, while the same process would not work for humans, especially for ourselves?
First, we are more complex than a random number generator. The random generator is just… random. Now imagine a machine that a) generally follows some goals, but b) sometimes makes random decisions, and c) rationalizes all its choices with “I was following this goal” or, in the case of a random action, “that would be too much”, “it seemed suspicious”, or “I was bored”. Perhaps it could have a few (potentially contradictory) goals, always randomly choose one, and take an action that advances that goal, even if it harms the other goals. Even better, it should allow some feedback; for example, by speaking with it you could increase the probability of some goal, and then even if it randomly chooses not to follow that goal, it would give some rationalization why. This would feel more like free will.
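To make this more concrete, here is a rough sketch (my own toy code; the goals, probabilities, and excuses are invented) of such a rationalizing machine:

```python
# Toy "rationalizing machine": mostly goal-driven, occasionally random,
# always ready with a story, and open to persuasion that shifts goal weights.
import random

class RationalizingMachine:
    def __init__(self):
        # Potentially conflicting goals with selection weights.
        self.goal_weights = {"save money": 1.0, "impress friends": 1.0}

    def persuade(self, goal: str, amount: float = 0.5) -> None:
        """Feedback channel: talking to the machine raises a goal's probability."""
        if goal in self.goal_weights:
            self.goal_weights[goal] += amount

    def decide(self) -> str:
        goals = list(self.goal_weights)
        weights = [self.goal_weights[g] for g in goals]
        chosen_goal = random.choices(goals, weights=weights)[0]
        if random.random() < 0.2:
            # Random deviation, followed by a rationalization.
            excuse = random.choice(["that would be too much",
                                    "it seemed suspicious",
                                    "I was bored"])
            return f"did nothing, because {excuse}"
        return f"acted to advance '{chosen_goal}', because I was following this goal"

machine = RationalizingMachine()
machine.persuade("impress friends")   # raises that goal's probability
for _ in range(3):
    print(machine.decide())           # every choice comes with a story
```

The point is not the particular numbers but the shape: goal-directedness plus occasional noise plus a story for every outcome already looks much more “willful” than a bare random draw.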
On the other hand, I imagine that some people with human-predicting powers, like marketing or political experts, do not believe so much in human free will (with regard to their profession’s topic; otherwise they compartmentalize), because they are able to predict and manipulate human action.
Because of intra-species competition, we are complex enough to prevent other people from understanding and predicting us. As a side effect, this makes us incomprehensible and unpredictable to ourselves, too. Generally, we optimize for survival and reproduction, but we are not straightforward about it, because a person following a simple algorithm could easily be exploited. Sometimes the complexity brings an advantage (for example, when we get angry we act irrationally, but insofar as this prevents other people from making us angry, even this irrational emotion is an evolutionary advantage); sometimes the complexity is caused merely by bugs in the program.
We can imagine more than we can really do; for example, we can make a plan that feels real, but then prove unable to follow it. But this lie, if it convinces other people (and we had better start by convincing ourselves), can bring us some advantage. So humans have an evolutionary incentive to misunderstand themselves; we do not have such an incentive towards other species or machines.
I know this is not a perfect answer, but it is the best I can give right now.