Harris does try to reduce free will to the feeling that “arises from our moment-to-moment ignorance of the prior causes of our thoughts and actions”.
I like this. Free will is the feeling when you don’t know the causes of your thoughts and actions.
However, to say that I could have done otherwise is merely to think the thought “I could have done otherwise” after doing whatever I in fact did.
To say that I could have done otherwise means that there could be a parallel reality where the unknown causes of my thoughts and actions were slightly different, and therefore produced a different outcome, and yet the ignorance before the outcome felt the same. That is, the same feeling is connected to different outcomes… in different realities. I can’t use the feeling to predict the specific outcome; I can only get a distribution of likely outcomes. And yet, because I am in a specific reality, the specific outcome is determined; I just can’t predict it.
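To make the "same feeling, distribution of outcomes" point concrete, here is a minimal sketch; the hidden cause, the two outcomes, and all the numbers are my own invention for illustration:

```python
import random
from collections import Counter

def decide(hidden_cause):
    """A toy agent whose choice depends on a cause it cannot observe."""
    return "tea" if hidden_cause < 0.5 else "coffee"

# From the inside, every run feels the same: the agent does not know
# hidden_cause. Across many "parallel realities" the identical feeling
# of ignorance maps to a distribution of outcomes, not a single one.
outcomes = Counter(decide(random.random()) for _ in range(10_000))
print(outcomes)  # roughly 50/50, e.g. Counter({'tea': 5021, 'coffee': 4979})

# In any single reality, hidden_cause has one definite value, so the
# outcome is determined; the agent just cannot predict it from inside.
```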
(Map vs territory distinction, essentially. Free will exists on the map, not in the territory. It is not an illusion, in the sense that it is actually there on the map; without perfect self-knowledge it can’t be otherwise.)
How should I feel about my lack of free will?
It is an unnecessary hypothesis; removing it doesn’t change the actual calculation (except maybe for the part where you used to reflect on the fact that you were deciding, and then sometimes felt confused about the concept of free will). Just do whatever you think is the right thing to do. You see a few options you could take. Ignore the fact that one of them is already “predetermined” in some sense. The fact that you consider multiple options and choose the best one is how the “predetermined” one gets selected. It’s like, when you try to calculate 123×456, the result is also predetermined, and yet you need to do the calculation to actually get it. In the same way, consider the possible consequences of the options you have, and then you will choose one, based on some combination of logic and whim that will happen in your brain at a given moment.
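A small sketch of this “determined, yet you still have to compute it” point; the options and their scores are made up for illustration:

```python
def choose(options, evaluate):
    """Deliberation as computation: consider each option, pick the best.
    The winner is fixed by the inputs, but the only way to learn which
    option that is, is to actually run the comparison."""
    return max(options, key=evaluate)

# The result of 123 * 456 is predetermined by arithmetic, yet it does
# not exist for you until the calculation is performed:
print(123 * 456)  # 56088

# Likewise, the "predetermined" choice is selected *by* the deliberation:
scores = {"walk": 1, "bus": 3, "taxi": 2}  # hypothetical preferences
print(choose(scores, evaluate=scores.get))  # 'bus'
```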
How does this relate to the punishment of crime?
We assume that when the potential criminal runs their calculation to decide whether to commit the crime or not, the information about the possible punishment is part of the calculation. When it is not, for example if the criminal is too cognitively impaired to even understand the concept of punishment or the fact that it would apply here… then this type of punishment just needlessly hurts the criminal, without actually providing protection for potential victims.
Note that this works even if the criminal is not aware of the specific punishment for the specific crime, but is only vaguely aware that “this type of action will likely get punished somehow”. That means it also works for informal out-of-court punishment without exact data. If you punch me, you can expect me to punch you back with some probability—you do not know the exact probability, nor the exact force, nor the chance that instead of punching you I will overreact and stab you with a knife… so there is uncertainty… but you know that some kind of punishment is likely to happen, on average, and your estimate is a part of your calculation.
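A toy version of that calculation may help; everything here (the gain, the rough punishment estimates) is invented for illustration, not a claim about real deterrence data:

```python
def expected_value_of_crime(gain, punishments):
    """Toy deterrence model: weigh the gain from the crime against an
    uncertain punishment, known only as rough (probability, cost) guesses."""
    expected_cost = sum(p * cost for p, cost in punishments)
    return gain - expected_cost

# Vague beliefs suffice: "I'll probably get punched back, with a small
# chance of something much worse." No statute, no exact probabilities.
rough_guesses = [(0.6, 5), (0.1, 50)]
print(expected_value_of_crime(gain=4, punishments=rough_guesses))  # -4.0

# If the agent cannot represent punishment at all, the term drops out,
# and the threat has no influence on the decision:
print(expected_value_of_crime(gain=4, punishments=[]))  # 4.0
```

The second call is the case from the previous paragraph: punishment that cannot enter the calculation deters nothing.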
There is also another thing to consider: if someone commits a crime, that can be strong evidence that the person is likely to commit a similar type of crime again, and therefore you may want to take precautions. In the extreme case, if you execute the criminal, they will never commit a crime again. You can also put the criminal in prison, put them on a blacklist, deprive them of certain rights, etc. Here, you are not trying to influence their calculation; you are taking it as given, and trying to eliminate the threat. From this perspective, you may want to execute the cognitively impaired killer, just to make sure he never kills again; but if you can put him in an institution that prevents him from killing again, that would achieve the goal equally well.
People may confuse these two things (punishment as precommitment vs. removal of a known threat), or they may feel that eliminating the threat is okay only if the person deserves it somehow. (To deserve = we precommitted to use punishment X against people who did Y, and this specific person did Y.) For example, if you could use a mind scanner to determine that some person will kill a random stranger with probability 99%, but the person hasn’t done anything bad yet… many people would object to limiting that person’s rights. (And there is a good reason for that: such predictions could be wrong, and people could be motivated to make wrong predictions about those they don’t like.) The really complicated case is if the person has already killed someone… in such a way that precommitting to punish them wouldn’t have changed their calculation… and is 99% likely to do it again. Then we have a conflict between the principle “we shouldn’t precommit to punish people for actions where our precommitment has no impact on their doing the action”, the principle “we shouldn’t eliminate the perceived threat unless they already did something that we have precommitted to punish” (a combination which would let an insane criminal walk free), and the common sense that says that of course this person is likely to kill again, so we should find some rationalization to argue that the aforementioned principles do not apply in this case.
In short, if the criminal says “why are you punishing me? I have no free will”, the answer is “you are deciding using an algorithm, and we have precommitted to punish people in your reference class, to influence their algorithm and reduce the total number of crimes they commit as a group… which apparently had no effect on you specifically, given that you did the crime anyway, but will prevent some of the others from doing the same thing”. Of course, if you put it this way, it will sound wrong to most people.
I like this. Free will is the feeling when you don’t know the causes of your thoughts and actions.
I mostly agree. It’s definitely a huge part of the puzzle, but not all of it. Free will is also the feeling of not knowing the choices you will make in the future, and of the process of determining that choice from all its causes.
Suppose Omega perfectly knows all the prior causes of my decisions: it has my source code and all the inputs. Omega would still have to run that source code on those inputs, actually executing my decision-making algorithm, in order to determine my actions. But my actions are nevertheless determined by my decision-making algorithm. This part of free will is completely real.
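A sketch of the Omega point, with invented inputs: prediction here just is simulation, so the action is still produced by my algorithm; it merely runs inside Omega.

```python
def my_algorithm(inputs):
    """My decision-making algorithm: deterministic given its inputs."""
    mood, weather = inputs
    return "go outside" if weather == "sunny" and mood > 0 else "stay in"

def omega_predict(source_code, inputs):
    # Even holding the full source code and all the inputs, Omega's
    # general way to obtain the decision is to execute the algorithm.
    return source_code(inputs)

inputs = (1, "sunny")
assert omega_predict(my_algorithm, inputs) == my_algorithm(inputs)
print(omega_predict(my_algorithm, inputs))  # 'go outside'
```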
(Map vs territory distinction, essentially. Free will exists on the map, not in the territory. It is not an illusion, in the sense that it is actually there on the map; without perfect self-knowledge it can’t be otherwise.)
Yes! But with a caveat. This state of not knowing which action will actually be executed seems to be essential to the working of the decision-making algorithm: options need to be marked as reachable so that our tree search can find the best one. Also, the distinction between the map and the territory becomes fuzzy when the territory is our map-making engine. Our decision-making algorithm is embedded in our brain; in this sense, our freedom of will is more than just a part of the map.
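A minimal sketch of that “options marked as reachable” idea; the state, options, and values are placeholders I made up:

```python
def best_plan(state, depth, options, step, value):
    """Toy tree search: while searching, every option is treated as
    reachable, even though only one branch will actually be taken."""
    if depth == 0:
        return value(state), []
    best = None
    for option in options(state):  # all branches count as open here
        v, plan = best_plan(step(state, option), depth - 1,
                            options, step, value)
        if best is None or v > best[0]:
            best = (v, [option] + plan)
    return best

# Tiny worked example: the state is a number; options add 1 or double it.
score, plan = best_plan(
    1, 3,
    options=lambda s: ["add", "double"],
    step=lambda s, o: s + 1 if o == "add" else s * 2,
    value=lambda s: s,
)
print(score, plan)  # 8 ['add', 'double', 'double'] (ties go to the first option)
```

If options were pruned in advance as “already determined”, the search would have nothing to compare, which is the caveat above.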
To say that I could have done otherwise means that there could be a parallel reality where the unknown causes of my thoughts and actions were slightly different, and therefore produced a different outcome, and yet the ignorance before the outcome felt the same.
That’s not what could-have-done-otherwise is generally intended to mean. It might be the closest approximation you can achieve assuming determinism. But that’s an assumption, not a fact.
Map vs territory distinction, essentially. Free will exists on the map, not in the territory.
If you had perfect knowledge of the territory, you would not even need the map. Again, you are treating assumptions as facts.