Separate morality from free will

[I made significant edits when moving this to the main page—so if you read it in Discussion, it’s different now. It’s clearer about the distinction between two different meanings of “free”, and why linking one meaning of “free” with morality implies a focus on an otherworldly soul.]
It was funny to me that many people thought Crime and Punishment was advocating outcome-based justice. If you read the post carefully, nothing in it advocates outcome-based justice. I only wanted to show how people think, so I could write this post.
Talking about morality causes much confusion, because most philosophers—and most people—do not have a distinct concept of morality. At best, they have a single word that conflates two different concepts. At worst, their “morality” doesn’t contain any new primitive concepts at all; it’s just a macro: a shorthand for a combination of other ideas.
I think—and have thought, for as long as I can remember—that morality is about doing the right thing. But this is not what most people think morality is about!
Free will and morality
Kant argued that the existence of morality implies the existence of free will. Roughly: If you don’t have free will, you can’t be moral, because you can’t be responsible for your actions.1
The Stanford Encyclopedia of Philosophy says: “Most philosophers suppose that the concept of free will is very closely connected to the concept of moral responsibility. Acting with free will, on such views, is just to satisfy the metaphysical requirement on being responsible for one’s action.” (“Free will” in this context refers to a mysterious philosophical phenomenological concept related to consciousness—not to whether someone pointed a gun at the agent’s head.)
I was thrown for a loop when I first came across people saying that morality has something to do with free will. If morality is about doing the right thing, then free will has nothing to do with it. Yet we find Kant, and others, going on about how choices can be moral only if they are free.
The pervasive attitudes I described in Crime and Punishment threw me for the exact same loop. Committing a crime is generally regarded as immoral. (I am not claiming that it is immoral. I’m talking descriptively about general beliefs.) Yet people see the practical question of whether the criminal is likely to commit the same crime again as being in conflict with the “moral” question of whether the criminal had free will. If you have no free will, they say, you can do the wrong thing, and be moral; or you can do the right thing, and not be moral.
The only way this can make sense is if morality does not mean doing the right thing. I need the term “morality” to mean a set of values, so that I can talk to people about values without confusing both of us. But Kant and company say that, without free will, implementing a set of values is not moral behavior. For them, the question of what is moral is not merely the question of what values to choose (although that may be part of it). So what is this morality thing?
Don’t judge my body—judge my soul
My theory #1: Most people think that being moral means acting in a way that will earn you credit with God.
When theory #1 holds, “being moral” is shorthand for “acting in your own long-term self-interest”. Which is pretty much the opposite of what we usually pretend being moral means.
(Finding a person who believes free will is needed for morality, and also that one should be moral even if neither God nor the community could observe, does not disprove that theory #1 is a valid characterization of the logic behind linking morals and free will. The world is full of illogical people. My impression, however, is that the people who insist that free will is needed for morality are the same people who insist that religion is needed for morality. This makes sense if religion is needed to supply an observer who assigns the credit.)
My less-catchy but more-general theory #2, which includes #1 as a special case: Most people conceive of morality in a way that assumes soul-body duality. This also includes people who don’t believe in a God who rewards and punishes in the afterlife, but who still believe in a soul that can be virtuous or unvirtuous independently of the virtue of the body it is encased in.
When you see (philosophical) free will being made a precondition for moral behavior, it means that the speaker is not concerned with doing the right thing. They are concerned with winning transcendent virtue points for their soul.
Moral behavior is intentional, but need not be free
I think both sides agree that morality has to do with intentions. You can’t be moral unintentionally. That’s because morality is (again, AFAIK we all agree) a property of a cognitive agent, not a property of the agent and its environment. Something that an agent doesn’t know about its environment has no impact on whether we judge that agent’s actions to be moral. Knowing the agent’s intentions helps us know if this is an agent that we can expect to do the right thing in the future. But computers, machines, even thermostats, can have intentions ascribed to them. To decide how we should be disposed towards these agents, we don’t need to worry about the phenomenological status of these intentions, or whether there are quantum doohickeys in their innards giving them free will. Just about what they’re likely to do.
If people were concerned with doing the right thing, and getting credit for it in this world, they would only need to ask about an agent’s intentions. They would care whether Jim’s actions were free in the “no one pointed a gun at him and made him do it” sense, because if Joe made Jim do it, then Joe should be given the credit or blame. But they wouldn’t need to ask whether Jim’s intentions were free in the “free will vs. determinism” or “free will vs. brain deficiency” sense. Having an inoperable brain condition would not affect how we used a person’s actions to predict whether they were likely to do similar things in the future—they’re still going to have the brain condition. We only change our credit assignment due to a brain condition if we are trying to assign credit to the non-physical part of a person (their soul).
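A toy sketch may make this contrast concrete. The agent model, field names, and scoring rules below are my own illustrative inventions, not anything from the argument above: a prediction-based assessment looks only at what the agent has done and at the causes that will still be operating in the future, so philosophical free will never appears as an input; a soul-credit assessment, by contrast, has to discount exactly the acts that were caused rather than freely willed.

```python
# Illustrative sketch only: hypothetical agent record and made-up scoring rules,
# not a claim about how anyone actually computes moral judgments.
from dataclasses import dataclass

@dataclass
class Agent:
    past_harmful_acts: int        # what the agent has actually done
    has_brain_condition: bool     # a physical cause that will persist into the future
    acted_with_free_will: bool    # the metaphysical property Kant cares about

def expected_future_harm(agent: Agent) -> float:
    """Prediction-based assessment: how should we be disposed toward this agent?
    Only observable history and persisting causes matter; free will never appears."""
    rate = 0.1 * agent.past_harmful_acts
    if agent.has_brain_condition:
        rate += 0.5               # the condition is still there, so it still predicts behavior
    return rate

def soul_credit(agent: Agent) -> int:
    """Soul-credit assessment: blame accrues to the non-physical person,
    so acts that were not freely willed are discounted entirely."""
    if not agent.acted_with_free_will:
        return 0                  # no free will, no blame, regardless of future danger
    return -agent.past_harmful_acts

jim = Agent(past_harmful_acts=3, has_brain_condition=True, acted_with_free_will=False)
print(expected_future_harm(jim))  # 0.8 -- the brain condition still predicts harm
print(soul_credit(jim))           # 0   -- the "soul" is held blameless
```

The only point of the contrast is that free will shows up as an input to the second function and never to the first.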
(At this point I should also mention theory #3: Most people fail to distinguish between “done with philosophical free will” and “intentional”. They thus worry about philosophical free will when they mean to worry about intention.)2
Why we should separate the concepts of “morality” and “free will”
The majority opinion of what a word means is, by definition, the descriptively correct usage of the word. I’m not arguing that the majority usage is descriptively wrong. I’m arguing that it’s prescriptively wrong, for these reasons:
It isn’t parsimonious. It confuses the question of figuring out what values are good, and what behaviors are good, with the philosophical problem of free will. Each of these problems is difficult enough on its own!
It is inconsistent with our other definitions. People map questions about what is right and wrong onto questions about morality. They will get garbage out of their thinking if that concept, internally, is about something different. They end up believing there are no objective morals—not necessarily because they’ve thought it through logically, but because their conflicting definitions make them incapable of coherent thought on the subject.
It implies that morality is impossible without free will. Since a lot of people on LW don’t believe in free will, they would conclude that they don’t believe in morality if they subscribed to Kant’s view.
When questions of blame and credit take center stage, people lose the capacity to think about values. This is demonstrated by some Christians who talk a lot about morality, but assume, without even noticing they’re doing it, that “moral” is a macro for “God said do this”. They fail to notice that they have encoded two concepts into one word, and never get past the first concept.
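To put the “macro” complaint in code (a deliberately minimal sketch; the function names and the command list are mine, not anyone’s actual theology): if “moral” merely expands to “God said do this”, then the word contributes no new primitive, and every question about morality is already a question about the commands.

```python
# Illustrative sketch of "moral" as a macro: the word expands into another
# concept and contributes no new primitive of its own.
def god_said_do(action: str) -> bool:
    """Hypothetical lookup of commanded actions; stands in for any fixed list."""
    return action in {"honor your parents", "keep the sabbath"}

def is_moral(action: str) -> bool:
    # "Moral" defined as a macro: it simply forwards to the other concept,
    # so reasoning about morality adds nothing beyond reasoning about the commands.
    return god_said_do(action)

print(is_moral("honor your parents"))  # True, but no independent concept of values was used
```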
For morality to be about oughtness, so that we are able to reason about values, we need to divorce it completely from free will. Free will is still an interesting and possibly important problem. But we shouldn’t mix it together with the already-difficult-enough problem of what actions and values are moral.
1. I am making the most-favorable re-interpretation. Kant’s argument is worse, as it takes a nonsensical detour from morality, through rationality, back to free will.
2. This is the preferred theory under, um, Goetz’s Cognitive Razor: Prefer the explanation for someone’s behavior that supposes the least internal complexity of them.