The Ultimate Source
This post is part of the Solution to “Free Will”.
Followup to: Timeless Control, Possibility and Could-ness
Faced with a burning orphanage, you ponder your next action for long agonizing moments, uncertain of what you will do. Finally, the thought of a burning child overcomes your fear of fire, and you run into the building and haul out a toddler.
There’s a strain of philosophy which says that this scenario is not sufficient for what they call “free will”. It’s not enough for your thoughts, your agonizing, your fear and your empathy, to finally give rise to a judgment. It’s not enough to be the source of your decisions.
No, you have to be the ultimate source of your decisions. If anything else in your past, such as the initial condition of your brain, fully determined your decision, then clearly you did not determine it.
But we already drew this diagram:
As previously discussed, the left-hand structure, in which the Present is computed from the Past and the Future is computed from the Present, is preferred even given deterministic physics, because it is more local; and because it is not possible to compute the Future without computing the Present as an intermediate.
So it is proper to say, “If-counterfactual the past changed and the present remained the same, the future would remain the same,” but not to say, “If the past remained the same and the present changed, the future would remain the same.”
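To make the asymmetry concrete, here is a minimal Python sketch (my own toy illustration, not anything from the original post; the transition rules and numbers are invented) of a deterministic setup where the Future is computed from the Present:

```python
# Toy deterministic laws: Past -> Present -> Future (invented for illustration).
def present_from(past):
    return past * 2

def future_from(present):
    return present + 1

past = 3
present = present_from(past)      # 6
future = future_from(present)     # 7

# Counterfactual 1: the past changes but the present stays the same.
# The future, which is computed only from the present, stays the same.
assert future_from(present) == future

# Counterfactual 2: the past stays the same but the present changes.
# The future changes, because the future is computed from the present.
assert future_from(present + 1) != future
```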
Are you the true source of your decision to run into the burning orphanage? What if your parents once told you that it was right for people to help one another? What if it were the case that, if your parents hadn’t told you so, you wouldn’t have run into the burning orphanage? Doesn’t that mean that your parents made the decision for you to run into the burning orphanage, rather than you?
On several grounds, no:
If it were counterfactually the case that your parents hadn’t raised you to be good, then it would counterfactually be the case that a different person would stand in front of the burning orphanage. It would be a different person who arrived at a different decision. And how can you be anyone other than yourself? Your parents may have helped pluck you out of Platonic person-space to stand in front of the orphanage, but is that the same as controlling the decision of your point in Platonic person-space?
Or: If we imagine that your parents had raised you differently, and yet somehow, exactly the same brain had ended up standing in front of the orphanage, then the same action would have resulted. Your present self and brain screen off the influence of your parents—this is true even if the past fully determines the future.
But above all: There is no single true cause of an event. Causality proceeds in directed acyclic networks. I see no good way, within the modern understanding of causality, to translate the idea that an event must have a single cause. Every asteroid large enough to reach Earth’s surface could have prevented the assassination of John F. Kennedy, if it had been in the right place to strike Lee Harvey Oswald. There can be any number of prior events, which if they had counterfactually occurred differently, would have changed the present. After spending even a small amount of time working with the directed acyclic graphs of causality, the idea that a decision can only have a single true source, sounds just plain odd.
So there is no contradiction between “My decision caused me to run into the burning orphanage”, “My upbringing caused me to run into the burning orphanage”, “Natural selection built me in such fashion that I ran into the burning orphanage”, and so on. Events have long causal histories, not single true causes.
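As a toy illustration (the node names and graph are my own invention, not anything the post specifies), here is how a decision looks as a node in a causal DAG with many ancestors rather than a single true source:

```python
# Hypothetical causal DAG: each event maps to the prior events that influenced it.
causes = {
    "ran_into_orphanage": ["my_decision", "my_upbringing", "natural_selection"],
    "my_decision":        ["my_upbringing", "my_emotions"],
    "my_upbringing":      ["parents_values"],
    "my_emotions":        ["natural_selection"],
    "parents_values":     [],
    "natural_selection":  [],
}

def ancestors(node, graph):
    """All prior events that, changed counterfactually, could change this one."""
    seen = set()
    stack = list(graph[node])
    while stack:
        parent = stack.pop()
        if parent not in seen:
            seen.add(parent)
            stack.extend(graph[parent])
    return seen

print(ancestors("ran_into_orphanage", causes))
# A whole causal history comes back, not a single true source.
```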
Knowing the intuitions behind “free will”, we can construct other intuition pumps. The feeling of freedom comes from the combination of not knowing which decision you’ll make, and of having the options labeled as primitively reachable in your planning algorithm. So if we wanted to pump someone’s intuition against the argument “Reading superhero comics as a child, is the true source of your decision to rescue those toddlers”, we reply:
“But even if you visualize Batman running into the burning building, you might not immediately know which choice you’ll make (standard source of feeling free); and you could still take either action if you wanted to (note correctly phrased counterfactual and appeal to primitive reachability). The comic-book authors didn’t visualize this exact scenario or its exact consequences; they didn’t agonize about it (they didn’t run the decision algorithm you’re running). So the comic-book authors did not make this decision for you. Though they may have contributed to it being you who stands before the burning orphanage and chooses, rather than someone else.”
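Here is a small sketch, with invented options and scores, of the two ingredients named above: options labeled as primitively reachable, and a judging process whose output is not known until it has run:

```python
# Both options are labeled "I could do this" in the planning algorithm.
options = ["run_into_building", "stay_outside"]

def judge(option):
    # The agent's own evaluation: emotions and morals applied to imagined consequences.
    scores = {"run_into_building": 10, "stay_outside": -5}   # invented numbers
    return scores[option]

# Before the judgment runs, the algorithm does not know its own output;
# both options are marked reachable, so the choice feels open.
decision = max(options, key=judge)
print(decision)   # "run_into_building": known only once the judging is done
```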
How could anyone possibly believe that they are the ultimate and only source of their actions? Do they think they have no past?
If we, for a moment, forget that we know all this that we know, we can see what a believer in “ultimate free will” might say to the comic-book argument: “Yes, I read comic books as a kid, but the comic books didn’t reach into my brain and force me to run into the orphanage. Other people read comic books and don’t become more heroic. I chose it.”
Let’s say that you’re confronting some complicated moral dilemma that, unlike a burning orphanage, gives you some time to agonize—say, thirty minutes; that ought to be enough time.
You might find, looking over each factor one by one, that none of them seem perfectly decisive—to force a decision entirely on their own.
You might incorrectly conclude that if no one factor is decisive, all of them together can’t be decisive, and that there’s some extra perfectly decisive thing that is your free will.
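A toy numerical sketch (weights and threshold invented for illustration) of how no single factor need be decisive on its own, while the combination still fully determines the outcome:

```python
# Invented pulls toward running into the orphanage (negative = pull away).
factors = {
    "empathy_with_children": 0.4,
    "fear_of_fire":         -0.3,
    "sense_of_duty":         0.35,
    "desire_to_look_brave":  0.2,
}

THRESHOLD = 0.5   # the decision dynamics: act if the combined pull exceeds this

# No single factor clears the threshold by itself...
assert all(abs(weight) < THRESHOLD for weight in factors.values())

# ...yet the sum of all the factors does, so together they are decisive.
assert sum(factors.values()) > THRESHOLD
print("run into the orphanage")
```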
Looking back on your decision to run into a burning orphanage, you might reason, “But I could have stayed out of that orphanage, if I’d needed to run into the building next door in order to prevent a nuclear war. Clearly, burning orphanages don’t compel me to enter them. Therefore, I must have made an extra choice to allow my empathy with children to govern my actions. My nature does not command me, unless I choose to let it do so.”
Well, yes, your empathy with children could have been overridden by your desire to prevent nuclear war, if (counterfactual) that had been at stake.
This is actually a hand-vs.-fingers confusion; all of the factors in your decision, plus the dynamics governing their combination, are your will. But if you don’t realize this, then it will seem like no individual part of yourself has “control” of you, from which you will incorrectly conclude that there is something beyond their sum that is the ultimate source of control.
But this is like reasoning that if no single neuron in your brain could control your choice in spite of every other neuron, then all your neurons together must not control your choice either.
Whenever you reflect, and focus your whole attention down upon a single part of yourself, it will seem that the part does not make your decision, that it is not you, because the you-that-sees could choose to override it (it is a primitively reachable option). But when all of the parts of yourself that you see, and all the parts that you do not see, are added up together, they are you; they are even that which reflects upon itself.
So now we have the intuitions that:
The sensation of the primitive reachability of actions, is incompatible with their physical determinism
A decision can only have a single “true” source; what is determined by the past cannot be determined by the present
If no single psychological factor you can see is perfectly responsible, then there must be an additional something that is perfectly responsible
When you reflect upon any single factor of your decision, you see that you could override it, and this “you” is the extra additional something that is perfectly responsible
The combination of these intuitions has led philosophy into strange veins indeed.
I once saw one such vein described neatly in terms of “Author” control and “Author*” control, though I can’t seem to find or look up the paper.
Consider the control that an Author has over the characters in their books. Say, the sort of control that I have over Brennan.
By an act of will, I can make Brennan decide to step off a cliff. I can also, by an act of will, control Brennan’s inner nature; I can make him more or less heroic, empathic, kindly, wise, angry, or sorrowful. I can even make Brennan stupider, or smarter up to the limits of my own intelligence. I am entirely responsible for Brennan’s past, both the good parts and the bad parts; I decided everything that would happen to him, over the course of his whole life.
So you might think that having Author-like control over ourselves—which we obviously don't have—would at least be sufficient for free will.
But wait! Why did I decide that Brennan would decide to join the Bayesian Conspiracy? Well, it is in character for Brennan to do so, at that stage of his life. But if this had not been true of Brennan, I would have chosen a different character that would join the Bayesian Conspiracy, because I wanted to write about the beisutsukai. Could I have chosen not to want to write about the Bayesian Conspiracy?
To have Author* self-control is not only to have control over your entire existence and past, but to have initially written your entire existence and past, without having been previously influenced by it—the way that I invented Brennan’s life without having previously lived it. To choose yourself into existence this way, would be Author* control. (If I remember the paper correctly.)
Paradoxical? Yes, of course. The point of the paper was that Author* control is what would be required to be the “ultimate source of your own actions”, the way some philosophers seemed to define it.
I don’t see how you could manage Author* self-control even with a time machine.
I could write a story in which Jane went back in time and created herself from raw atoms using her knowledge of Artificial Intelligence, and then Jane oversaw and orchestrated her own entire childhood up to the point she went back in time. Within the story, Jane would have control over her existence and past—but not without having been “previously” influenced by them. And I, as an outside author, would have chosen which Jane went back in time and recreated herself. If I needed Jane to be a bartender, she would be one.
Even in the unlikely event that, in real life, it is possible to create closed timelike curves, and we find that a self-recreating Jane emerges from the time machine without benefit of human intervention, that Jane still would not have Author* control. She would not have written her own life without having been “previously” influenced by it. She might preserve her personality; but would she have originally created it? And you could stand outside time and look at the cycle, and ask, “Why is this cycle here?” The answer to that would presumably lie within the laws of physics, rather than Jane having written the laws of physics to create herself.
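One way to picture this, purely as my own analogy with invented dynamics, is a self-consistent loop as a fixed point of the laws: the laws select which state maps to itself; the state does not select the laws.

```python
# Invented dynamics standing in for "the laws of physics" around the loop.
def laws_of_physics(state):
    return (state * 0.5) + 3.0

# Iterate until the loop is self-consistent: the state the laws map to itself.
state = 0.0
for _ in range(100):
    state = laws_of_physics(state)

print(state)   # ~6.0: the unique self-consistent "Jane" for these laws
assert abs(laws_of_physics(state) - state) < 1e-9
```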
And you run into exactly the same trouble if you try to have yourself be the sole ultimate Author* source of even a single particular decision of yours—a decision which was in fact decided by your beliefs, inculcated morals, and evolved emotions, which is to say calculated by your brain, which is to say determined by physics. You can’t have Author* control over one single decision, even with a time machine.
So a philosopher would say: Either we don’t have free will, or free will doesn’t require being the sole ultimate Author* source of your own decisions, QED.
I have a somewhat different perspective, and say: Your sensation of freely choosing, clearly does not provide you with trustworthy information to the effect that you are the ‘ultimate and only source’ of your own actions. This being the case, why attempt to interpret the sensation as having such a meaning, and then say that the sensation is false?
Surely, if we want to know which meaning to attach to a confusing sensation, we should ask why the sensation is there, and under what conditions it is present or absent.
Then I could say something like: “This sensation of freedom occurs when I believe that I can carry out, without interference, each of multiple actions, such that I do not yet know which of them I will take, but I am in the process of judging their consequences according to my emotions and morals.”
This is a condition that can fail in the presence of jail cells, or a decision so overwhelmingly forced that I never perceived any uncertainty about it.
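Restating that condition as a small predicate, purely as a paraphrase of the two paragraphs above (the parameter names are mine):

```python
# The sensation of freedom as a predicate over the agent's situation.
def feels_free(can_carry_out_each_option, knows_own_choice_already, is_still_judging):
    return can_carry_out_each_option and not knows_own_choice_already and is_still_judging

print(feels_free(True,  False, True))   # deliberating outside the orphanage -> True
print(feels_free(False, False, True))   # in a jail cell: options blocked -> False
print(feels_free(True,  True,  False))  # a decision so forced it was never uncertain -> False
```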
There—now my sensation of freedom indicates something coherent; and most of the time, I will have no reason to doubt the sensation’s veracity. I have no problem saying that I have “free will”, appropriately defined, so long as I am out of jail, uncertain of my own future decision, and living in a lawful universe that gave me emotions and morals whose interaction determines my choices.
Certainly I do not “lack free will” if that means I am in jail, or never uncertain of my future decisions, or in a brain-state where my emotions and morals fail to determine my actions in the usual way.
Usually I don’t talk about “free will” at all, of course! That would be asking for trouble—no, begging for trouble—since the other person doesn’t know about my redefinition. The phrase means far too many things to far too many people, and you could make a good case for tossing it out the window.
But I generally prefer to reinterpret my sensations sensibly, as opposed to refuting a confused interpretation and then calling the sensation “false”.