Separate morality from free will
[I made significant edits when moving this to the main page—so if you read it in Discussion, it’s different now. It’s clearer about the distinction between two different meanings of “free”, and why linking one meaning of “free” with morality implies a focus on an otherworldly soul.]
It was funny to me that many people thought Crime and Punishment was advocating outcome-based justice. If you read the post carefully, nothing in it advocates outcome-based justice. I only wanted to show how people think, so I could write this post.
Talking about morality causes much confusion, because most philosophers—and most people—do not have a distinct concept of morality. At best, they have just one word that encodes two different concepts. At worst, their “morality” doesn’t contain any new primitive concepts at all; it’s just a macro: a shorthand for a combination of other ideas.
I think—and have, for as long as I can remember—that morality is about doing the right thing. But this is not what most people think morality is about!
Free will and morality
Kant argued that the existence of morality implies the existence of free will. Roughly: If you don’t have free will, you can’t be moral, because you can’t be responsible for your actions.1
The Stanford Encyclopedia of Philosophy says: “Most philosophers suppose that the concept of free will is very closely connected to the concept of moral responsibility. Acting with free will, on such views, is just to satisfy the metaphysical requirement on being responsible for one’s action.” (“Free will” in this context refers to a mysterious philosophical phenomenological concept related to consciousness—not to whether someone pointed a gun at the agent’s head.)
I was thrown for a loop when I first came across people saying that morality has something to do with free will. If morality is about doing the right thing, then free will has nothing to do with it. Yet we find Kant, and others, going on about how choices can be moral only if they are free.
The pervasive attitudes I described in Crime and Punishment threw me for the exact same loop. Committing a crime is, generally, regarded as immoral. (I am not claiming that it is immoral. I’m talking descriptively about general beliefs.) Yet people see the practical question of whether the criminal is likely to commit the same crime again as being in conflict with the “moral” question of whether the criminal had free will. If you have no free will, they say, you can do the wrong thing without being immoral; or you can do the right thing without being moral.
The only way this can make sense is if morality does not mean doing the right thing. I need the term “morality” to mean a set of values, so that I can talk to people about values without confusing both of us. But Kant and company say that, without free will, implementing a set of values is not moral behavior. For them, the question of what is moral is not merely the question of what values to choose (although that may be part of it). So what is this morality thing?
Don’t judge my body—judge my soul
My theory #1: Most people think that being moral means acting in a way that will earn you credit with God.
When theory #1 holds, “being moral” is shorthand for “acting in your own long-term self-interest”. Which is pretty much the opposite of what we usually pretend being moral means.
My less-catchy but more-general theory #2, which includes #1 as a special case: Most people conceive of morality in a way that assumes soul-body duality. This also includes people who don’t believe in a God who rewards and punishes in the afterlife, but still believe in a soul that can be virtuous or unvirtuous independently of the virtue of the body it is encased in.
Moral behavior is intentional, but need not be free
Why we should separate the concepts of “morality” and “free will”
Conflating them isn’t parsimonious. It confuses the question of figuring out what values are good, and what behaviors are good, with the philosophical problem of free will. Each of these problems is difficult enough on its own!
It is inconsistent with our other definitions. People map questions about what is right and wrong onto questions about morality. They will get garbage out of their thinking if that concept, internally, is about something different. They end up believing there are no objective morals—not necessarily because they’ve thought it through logically, but because their conflicting definitions make them incapable of coherent thought on the subject.
It implies that morality is impossible without free will. Since a lot of people on LW don’t believe in free will, they would conclude that they don’t believe in morality if they subscribed to Kant’s view.
When questions of blame and credit take center stage, people lose the capacity to think about values. This is demonstrated by some Christians who talk a lot about morality, but assume, without even noticing they’re doing it, that “moral” is a macro for “God said do this”. They fail to notice that they have encoded two concepts into one word, and never get past the first concept.
1. I am making the most-favorable re-interpretation. Kant’s argument is worse, as it takes a nonsensical detour from morality, through rationality, back to free will.
2. This is the preferred theory under, um, Goetz’s Cognitive Razor: Prefer the explanation for someone’s behavior that supposes the least internal complexity of them.
All I’m getting from this is “the term ‘morality’ is hopelessly confused.”
Relevant tweet: http://twitter.com/#!/vladimirnesov/status/34254933578481664
Brilliant, yes. So what would be oxygen?
It’s like people tried to arrive at universally valid sexual preferences by rationalizing away seemingly inconsistent decisions based upon unimportant body parts like breasts and penises when most people were in agreement that those parts had nothing to do with being human. And we all ought to be attracted to humans only, shouldn’t we?
My current hypothesis is that most of the purpose of evolving morality is signaling that you are predictably non-defecting enough to deal with. This is not very well worked out—but it does predict that if you take it to edge cases, or build syllogisms from stated moral beliefs, or other such overextension, it’ll just get weird (because the core is to project that you are a non-defecting player—that’s the only bit that gets tested against the world), and I think observation shows plenty of this (e.g. 1, 2).
I find your ideas intriguing and wish to subscribe to your newsletter.
That said, I’m not sure the evolution of morality can productively be separated from the evolution of disgust, and disgust does seem to have a non-signaling purpose.
It certainly does. It helps to inform you who can be trusted as a coalition partner.
Furthermore, if your feeling of disgust results in your being less nice toward the disgusting party, then your righteousness tends to deter disgusting behavior—at least when you are there to express disapproval. That is a signaling function, to be sure, but it is signalling directed at the target of your disgust, not at third parties.
Also, if I feel disgust in situations that historically correlate with becoming ill—for example, eating rotten food—I’m less likely to become ill. We can be disgusted by things besides other primates, after all.
Morality is also involved in punishment, signalling virtue, and manipulating the behaviour of others—so they stop doing the bad deeds that you don’t like.
Certainly. I think my central thesis is that morality is a set of cached answers to a really complicated game theory problem, given initial conditions (e.g. you are in a small tribe; you are in a big city and poor; you are a comfortable Western suburbanite), some cached in your mind, some cached in your genes. So it’s unsurprising that using intelligence to extrapolate from the cached answers, without keeping a close eye on the game-theoretic considerations of whatever actual problem you’re trying to solve, will lead to trouble.
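To make the “cached answers” point concrete, here is a toy sketch (the strategies, payoffs, and the “changed” game are all invented for illustration, not taken from anywhere): a rule cached against one payoff matrix can do badly when the underlying game quietly changes.

```python
# A toy sketch, not a model of anything real: a "cached" cooperation rule
# evaluated against two different games. All payoffs are invented.

def play(strategy_a, strategy_b, payoffs, rounds=10):
    """Iterate a 2x2 game; each strategy sees only the opponent's last move."""
    score_a = score_b = 0
    last_a = last_b = "C"
    for _ in range(rounds):
        a, b = strategy_a(last_b), strategy_b(last_a)
        pa, pb = payoffs[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
        last_a, last_b = a, b
    return score_a, score_b

cached_rule = lambda opp_last: opp_last   # tit-for-tat: a rule "cached" from past play
pd = {("C", "C"): (3, 3), ("C", "D"): (0, 5), ("D", "C"): (5, 0), ("D", "D"): (1, 1)}
# A changed environment where mutual cooperation is now the worst outcome:
weird = {("C", "C"): (0, 0), ("C", "D"): (1, 5), ("D", "C"): (5, 1), ("D", "D"): (3, 3)}

print(play(cached_rule, cached_rule, pd))     # (30, 30): the cached rule does fine here
print(play(cached_rule, cached_rule, weird))  # (0, 0): same rule, extrapolated, fails
```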
And more in this vein. I really dislike this post. The author proclaims that he is shocked, shocked that other people are wrong, even though he himself is right. Then he proceeds to analyze why almost everyone else got it wrong, without once trying to justify his own position using any argument other than professed astonishment that any thinking person could disagree.
Downvoted.
I think you took this post in unnecessarily bad faith, Perplexed...unless this is an area where you’ve already had frustrating head-banging-on-wall discussions, in which case I understand. I did not detect any particular ‘shocked-ness’ in the author’s explanation of how he understands morality.
Okay, reading back I can see your point, but I still don’t find it offensive in any way. As far as I can tell, all that he’s claiming is that people claim morality is about one thing (doing the right thing) but they discuss it and act on it as if it’s something different (the freedom to choose, or soul-karma-points). If he’s right, it wouldn’t be the first time that a word had multiple meanings to different people, but it would explain why morality is such a touchy subject in discussion. I read this post and thought “wow, I never noticed that before, that’s interesting...that could explain a lot.”
My one complaint is that ‘doing the right thing’ is presented as atomic, as obvious, which I’m pretty sure it isn’t. What paradigm do you personally use to determine ‘right’, Phil?
I’ll try to reword the post to be clearer about what I’m claiming.
It isn’t a matter of who is “right” about what morality means. If anything, the majority is always “right” about what words mean. But that majority position has two big problems:
It makes the word useless and confusing. “Morality” then doesn’t represent a real concept; it’s just a way of hiding self-interest.
It rules out actually believing in values. The word “morality” is positioned so as to suck up any thoughts about what is the right thing to do, and convince the unsuspecting thinker that these thoughts are nonsense.
I really dislike this comment. It emotes claims, without offering any justification of those claims. Furthermore, I disagree with those claims. I shall now try to justify my disagreement.
A definition of (explanation of) ‘morality’ (morality) as a convention characterizing how people should feel about actions (one’s own, or other people’s) is neither useless nor confusing. Defining correct moral behavior by reference to a societal consensus is no more useless or confusing than defining correct use of language by a societal consensus.
Furthermore, this kind of definition has one characteristic which makes it more useful as a prescription of how to behave than is any ‘stand-alone’ prescription which does not invoke society. It is more useful because it contains within itself the answer to both central questions of morality or ethics.
Q. What is the right thing to do? A. What society says.
Q. Why ought I to do the right thing? A. Because if you don’t, society will make your life miserable.
I don’t see why you claim that. Unless, that is, you have a non-standard definition of ‘values’. Do you perhaps intend to be using a definition of morals and values which excludes any actions taken for pragmatic reasons? Gee, I hope you don’t intend to defend that position by stating that most people share your disdain of the merely practical.
If I seem overly confrontational here, I apologize. But, Phil, you really are not even trying to imagine that there might be other rational positions on these questions.
I don’t think you’re reading very carefully. That is not what I was calling useless. Do you understand why I kept talking about free will?
Maybe you are right that I’m not reading carefully enough. You called the word ‘morality’ useless if it were taken to have a particular meaning. I responded that the meaning in question is not useless. Yes, I see the distinction, but I don’t see how that distinction matters.
No I don’t. Free will means entirely too many different things to too many different people. I usually fail to understand what other people mean by it. So I find it best to simply “taboo” the phrase. Or, if written in a sentence of text, I simply ignore the sentence as probably meaningless.
I’m objecting to the view that morality requires free will. I’m not as interested in taking a stand on how people learn morality, or whether there is such a thing as objective morality, or whether it’s just a social consensus, except that I would like to use terms so that it’s still possible to think about these issues.
Kant’s view at best confounds the problem of choosing values, and the problem of free will. At worst, it makes the problem of values impossible to think about, whether or not you believe in free will. (Perversely, focusing on whether or not your actions are pleasing to God obliterates your ability to make moral judgements.)
I think you are missing the point regarding Kant’s mention of free will here. You need to consider Kant’s explanation of why it is acceptable to enslave or kill animals, but unacceptable to enslave or kill human beings. Hint: it has nothing to do with ‘consciousness’.
His reason for excluding the possibility that entities without free will are moral agents was not simply to avoid having to participate in discussions regarding whether a bowling ball has behaved morally. Limiting morality to entities with free will has consequences in Kant’s philosophy.
Edit: minor change in wording.
There was a case in my local area where a teenager beat another teenager to death with a bat. On another blog, some commenters were saying that since his brain wasn’t fully developed yet (based on full brain development being attained close to 30), he shouldn’t be held to adult standards (namely sentencing standards). This was troubling to me, because while I don’t advocate the cruelty of our current prison system, I do worry about the message that lax sentencing sends. The comments seem to naturally allow for adult freedom (the kids were all unsupervised, and no one said that was a problem), then plead biological determinism. To me, morality is about how communities react to transgressions. “Ought not to” has no utility outside of consequences. Those may be social, like the experience of shame, or physical, like imprisonment. I think discussing morality as being solely a quality within individual agents is a dead end.
And thanks for starting this discussion. This is the type of rationality that I find not just interesting but important.
I think that “free will” can be understood as either itself an everyday concept, or else a philosopher’s way of talking about and possibly distorting an everyday concept. The term has two components which we can talk about separately.
A “willed” act is a deliberate act, done consciously, intentionally. It is consciously chosen from among other possible acts. Examples of acts which are not willed are accidental acts, such as bumping into someone because you didn’t know they were there, taking someone else’s purse because you confused it with your own, etc.
A “free” act is uncoerced. A coerced act is one that is done under compulsion. For example if a mugger points his gun at you, giving him your wallet is a coerced act.
We are more likely to judge acts immoral if they are both willed and free. We are less likely to frown on accidents. If someone took your purse on purpose, because they wanted your money, you would probably think badly of them. But if they took your purse by accident because it looked just like their purse, then you would be much less likely to be upset with them once they had returned the purse apologetically. And similarly, if someone has done you some harm but it turns out they only did it because they were being coerced, then you are more likely to forgive them and not to hold it against them, than if they did it freely, e.g. out of personal malice toward you.
This is all very earthly, very everyday and practical, and has no special relationship to religion or God. There are good practical reasons for letting harms slide if they are accidental or coerced, and for not letting harms slide if they are deliberate and uncoerced.
The upshot is that we are much less likely to consider actions immoral if they are unwilled or unfree—i.e., accidental or coerced.
That’s what I referred to as “intentional”. A computer program with goals can have internal representations that comprise its intentions about its goals, even if it isn’t conscious and has no free will. When I wrote, “Knowing the agent’s intentions helps us know if this is an agent that we can expect to do the right thing in the future”,
that was saying the same thing as when you wrote, “If someone has done you some harm but it turns out they only did it because they were being coerced, then you are more likely to forgive them and not to hold it against them, than if they did it freely, e.g. out of personal malice toward you.”
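For concreteness, here is a minimal sketch of what I mean by a program with goals and intentions but nothing resembling consciousness or free will (the class and field names are just illustrative assumptions):

```python
# A minimal sketch (illustrative names only) of an agent with an explicit,
# inspectable "intention", and nothing resembling consciousness or free will.

class GoalDirectedAgent:
    def __init__(self, goal):
        self.goal = goal        # internal representation of the goal
        self.intention = None   # internal representation of the chosen plan

    def deliberate(self, options):
        # Adopt whichever option the agent predicts best serves its goal.
        self.intention = max(options, key=lambda o: o["predicted_progress"])
        return self.intention

agent = GoalDirectedAgent(goal="keep the room at 20 C")
agent.deliberate([
    {"name": "turn heater on", "predicted_progress": 0.9},
    {"name": "do nothing",     "predicted_progress": 0.1},
])
# agent.intention now encodes what the agent "means to do" next; knowing it
# helps predict the agent's future behavior, which is the point above.
```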
That’s not the usage of “free will” that philosophers such as Kant are talking about, when they talk about free will. When philosophers debate whether people have free will, they’re not wondering whether or not people can be coerced into doing things if you point a gun at them.
So, what you’re saying is true, but is already incorporated into the post, and is a supplemental issue, not the main point. I thought I already made the main points you made in this comment in the OP, so it concerns me that 9 people upvoted this—I wonder what they think I was talking about?
I rewrote the opening section to be more clear that I’m talking about philosophical free will. I see now how it would be misleading if you weren’t assuming that context from the name “Kant”.
Checking the Wikipedia article on free will:
That seems to be pretty close to what I wrote. So apparently the compatibilists have an idea of what free will is similar to the one I described.
It’s interesting that at least twice, now, you said what “free will” isn’t, but you haven’t said what it is. I think that nowhere do you successfully explain what free will supposedly is. The closest you come is here:
That’s not an explanation. It says that free will is not something, and it says that what it is, is a “mysterious philosophical phenomenological concept related to consciousness”—which tells the reader pretty much nothing.
And now in your comment here, you say
but you leave it at that. Again you’re saying what free will supposedly is not. You don’t go on to explain what the philosophers are talking about.
I think that “free will” is an idea with origins in daily life which different philosophers have attempted to clarify in different ways. Some of them did, in my opinion, a good job—the compatibilists—and others did, in my opinion, a bad job—the incompatibilists. Your exposure seems to have been only to the incompatibilists. So, having learned the incompatibilist notion of free will, you apparently find yourself ill-prepared to explain the concept to anyone else, limiting yourself to saying what it is not and to saying that it is “mysterious”. I take this as a clue about the incompatibilist concept of free will.
Whether an agent is moral and whether an action is moral are fundamentally different questions, operating on different types. There are three domains in which we can ask moral questions: outcomes, actions, and agents. Whether actions are moral is about doing the right thing, as we originally thought. Whether a person or agent is moral, on the other hand, is a prediction of whether that agent will make moral decisions in the future.
An immoral decision is evidence that the agent who made it is immoral. However, there are some things that might screen off this evidence, which is what Kant was (confusedly) talking about. For example, if Dr. Evil points a mind-control ray at someone and makes them do evil things, and the mind control ray is then destroyed, then the things they did while under its influence have no bearing on whether they’re a moral or immoral person, because they have no predictive value. On the other hand, if someone did something bad because the atoms in their brain were arranged in the wrong way, and their atoms are still arranged that way, that’s evidence that they’re immoral; but if they were to volunteer for a procedure that rearranges their brain such that they won’t do bad things anymore, then after the procedure they’ll be a moral person.
Strengthening a moral agent or weakening an immoral agent has positive outcome-utility. Good actions by an agent and good outcomes causally connected to an agent’s actions are evidence that they’re agent-moral, and conversely bad actions and bad outcomes causally connected to an agent’s actions are evidence that they’re agent-immoral. But these are only evidence; they are not agent-morality itself.
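One rough way to cash out “evidence of agent-morality” and the screening-off point (all the probabilities below are invented for illustration): treat the bad act as Bayesian evidence about the agent, and note that a known mind-control ray makes the act carry no information.

```python
# Sketch: a bad act as Bayesian evidence about an agent, and how a known
# mind-control ray screens that evidence off. All probabilities are invented.

def posterior_moral(prior, p_act_if_moral, p_act_if_immoral):
    """P(agent is moral | observed act), by Bayes' rule."""
    num = p_act_if_moral * prior
    return num / (num + p_act_if_immoral * (1 - prior))

prior = 0.9  # credence that the agent is moral, before observing anything

# Ordinary case: moral agents rarely do this bad thing, immoral agents often do.
print(posterior_moral(prior, 0.05, 0.60))   # ~0.43: the act is evidence of immorality

# Mind-control case: the ray makes the act equally likely either way, so the
# observation carries no information about the agent at all.
print(posterior_moral(prior, 0.99, 0.99))   # 0.9: posterior equals prior
```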
They’re not as different as the majority view makes them out to be. A moral agent is one that uses decision processes that systematically produce moral actions. Period. Whereas the majority view is that a moral agent is not one whose decision processes are structured to produce moral actions, but one who has a virtuous free will. A rational extension of this view would be to say that someone who has a decision process that consistently produces immoral actions can still be moral if their free will is very strong and very virtuous, and manages to counterbalance their decision process.
The example above about a mind control ray has to do with changing the locus of intentionality controlling a person. It doesn’t have to do with the philosophical problem of free will. Does Dr. Evil have free will? It doesn’t matter, for the purposes of determining whether his cognitive processes consistently produce immoral actions.
It’s more complicated than that, because agent-morality is a scale, not a boolean, and how morally a person acts depends on the circumstances they’re placed in. So a judgment of how moral someone is must have some predictive aspect.
Suppose you have agents X and Y, and scenarios A and B. X will do good in scenario A but will do evil in scenario B, while Y will do the opposite. Now if I tell you that scenario A will happen, then you should conclude that X is a better person than Y; but if I instead tell you that scenario B will happen, then you should conclude that Y is a better person than X.
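In toy form (the goodness scores are hypothetical):

```python
# Hypothetical goodness-by-scenario scores; "who is the better person" has to
# condition on which scenario you expect to actually occur.
goodness = {
    "X": {"A": +1, "B": -1},
    "Y": {"A": -1, "B": +1},
}

def better_person(scenario):
    return max(goodness, key=lambda agent: goodness[agent][scenario])

print(better_person("A"))  # X
print(better_person("B"))  # Y
```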
I don’t think “locus of intentionality” is the right way to think about this (except perhaps as a simplified model that reduces to conditioning on circumstances). In a society where mind control rays were common, but some people were immune, we would say that people who are immune are more moral than people who aren’t. In the society we actually have, we say that those who refuse in the Milgram experiment are more moral, and that people who refuse to do evil under the threat of force are more moral, and I don’t think a “locus of intentionality” model handles these cases cleanly.
Ultimately, your claim appears to be, “The punitive part of morality is inappropriate. It is based on free will. Therefore, free will is irrelevant to morality.” I admit you don’t phrase it that way, but with your only concern being lack of literal coercion and likelihood of reoffense, your sense of morality seems to be inconsistent with people’s actual beliefs.
You will find very few people who will say that a soldier acting in response to PTSD deserves the exact same sentence as a sociopath acting out of a sadistic desire to kill, even if each is equally likely to reoffend. Unless I misunderstand you, and you don’t get that result, it seems like a serious problem for your morality.
On that note, the law recognizes 5 levels of intent (deliberate, grossly reckless, reckless, grossly negligent, negligent). So you may have erred in reducing intent to a binary. These levels make sense even without “philosophical” free will, which I think is basically a red herring in its entirety, though I still believe in diminished culpability.
That, but I think there’s some reciprocal after-effects that also come into play. What I mean is that when you view what being moral implies with respect to one’s religion, you get what you suggested—being moral entails an increase in heaven (or whatever) being likely.
A very interesting effect I’ve noticed going the other way is that religion lets you discuss morality in far, far, far more “lofty” terms than what a non-theistic individual might come up with. The “worldly” discussions are about utility, categorical imperatives, maxims, means/ends, etc… but “common-talk” religious morality involves “being Christ-like,” “being a light to others,” “showing them Christ’s love,” “being a witness,” “acting as a suffering servant” and the like.
These just plain sound amazingly magical while [to a religious person especially] the other discussion about morality can sound cold and calculating. It reinforces the notion of doing something supremely fantastic in one’s quest to be “moral” via the religious lens.
Now, I think all of these basically translate to:
But it’s an interesting point not unrelated to your theory.
I’ve always viewed there as being a third theory of morality: People who do bad things, are more likely to do other bad things. If my friend lies to me, they’re more likely to lie to me in the future. But they’re also more likely to steal from me, assault me, etc..
A brain defect (such as compulsive lying) therefore needs to be accounted for—the person is likely to commit domain-specific actions like lying, but this doesn’t generalize out to other domains. So, I might not believe my friend who is a compulsive liar when he says anything outrageous, but I won’t be worried that he’s going to rob my house or blackmail me.
It’s questionable how valid this metric is, but I’ll confess it feels “intuitively right” and emotionally satisfying to me, as long as one is able to identify and adjust for domain-specific flaws.
I’d just like to point out a little flaw in your construction of other people’s morality, and offer what I think is a better model for understanding this issue.
First, I wouldn’t say that people have a morality that agrees with God. They have a God that agrees with their morality. Reading bible passages to people is unlikely to wobble their moral compass; they’ll just say those no longer apply or you’re taking them too literally or some such. God isn’t so much of a source of morality as a post hoc rationalization of a deeper impulse.
Second, this whole system makes a lot of sense if you think of it in terms of “how likely is it for me to do that?” Kind of like a Rawlsian Veil of Ignorance. If the defendant is a sociopath who killed people for fun, I have a pretty easy time saying, “I can easily restrain myself from killing people for fun. He should have done that. Let him burn!” Conversely, when the defendant is a soldier who has PTSD, I think, “You know, if I’d been through what he’d been through, I may very well have done the same thing, even though it was wrong. We should go easy on him.”
This also explains various problems the law had back when people were exceedingly racist or sexist, as people would not have thought, “I could just as easily have been of a different race.”
I admit I haven’t fleshed this out fully, but it seems to agree with the end results more consistently than most other theories.
I would be interested in seeing a more fleshed out version if at all possible.
Without Kant’s “nonsensical” detour through rationality, you don’t understand his position at all. There is no particular agreement on what “free will” means, and Kant chose to stick fairly closely to one particular line of thought on the subject. He maintained that you’re only really free when you act rationally, which means that you’re only really free when you do the right thing. Kant also held that a being with the capacity for rationality should be treated as if free even if you had little reason to think they were being rational (and so were free) at the moment (as, indeed, was the usual course of things). Hence his stance on how to treat wrong-doers; you treat, say, a murderer (murder is definitely irrational for Kant!) as if he’s being rational, that is, as if killing someone were a reasonable response in certain circumstances, by applying this principle to your treatment of him; by executing him. All very convoluted, to be sure, but while I am not going to insist that it all works (since I don’t think it does), it certainly does link freedom and morality very closely together without having any absurd implication that morality isn’t concerned with doing the right thing.
I agree that I don’t understand Kant. It’s impossible to understand something that doesn’t make sense. The best you can do is try to construct the most-similar argument that does make sense.
The word “certainly” appears to be an attempt to compensate for a lack of a counter-argument. When I’ve said “A, B, A&B=>C, therefore C”, responding to my argument requires you to address A, B, or A&B=>C, and not just assert “not(C)”.
Kant’s focus on assigning credit or blame as being an essential part of morality implies that the end goal of moral behavior is not to get good outcomes, but the credit or blame assigned, as I explained at length in the post. This “morality” may be concerned with doing the right thing—as a precondition—but it isn’t about doing the right thing.
Kant used his peculiar meaning of free will, but at the end turned around and applied it as if he had been using the definition I use in this post. If Kant truly meant that “free” means “rational”, then making a long argument starting from the precept that man is rational so that he could claim at the end, “Now I have proven man is rational!” would not make any sense. And if Kant was inconsistent or incoherent, I can’t be blamed for picking one possible interpretation.
Win.
As I see it, there are:
1. Actions which do not lead to the best outcome for everyone
2. Actions which need to be punished in order to lead to the best outcome for everyone
(other suggestions welcome)
I have used one taboo word here: “best”. But we’ll assume everyone at least broadly agrees on its definition (least death and pain, most fun etc).
People can then start applying other taboo-able words, which may arbitrarily apply to one of the above concepts, or to a confused mixture of them.
Morality
The right thing
Intending to do the right thing
Should
Ought
So asking whether morality is about “doing the right thing” or “intending to do the right thing” or “oughtness” is just going to lead to confusion—different people comparing one fuzzy macro against another.
The connection to free will as I see it comes from 2 - punishing an agent’s actions won’t achieve anything if the agent lacks sufficient free will. Again, asking what its connection is to morality etc. is probably a question which should be dissolved.
Right on. Free will is nonsense but morality is important. I see moral questions as questions that do not have a clear-cut answer that can be found by consulting some rules (religious or not). We have to figure out what is the right thing to do. And we will be judged by how well we do it.
“Free will is nonsense”
It’s not nonsense.
http://wiki.lesswrong.com/wiki/Free_will http://wiki.lesswrong.com/wiki/Free_will_(solution)
I have been pointed at those pieces before. I read them originally and I have re-read them not long ago. Nothing in them changes my convictions that (1) it is dangerous to communication to use the term ‘free will’ in any sense other than freedom from causality, and (2) I do not accept a non-material brain/mind nor a non-causal thought process. Also I believe that (3) using the phrase ‘determinism’ in any sense other than the ability to predict is dangerous to communication, and (4) we cannot predict in any effective way the processes of our own brain/minds. Therefore free will vs determinism is not a productive argument. Both concepts are flawed. In the end, we make decisions and we are (usually) responsible for them in a moral-ethical-legal sense. And those decisions are neither the result of free will nor of determinism. You can believe in magical free will or redefine the phrase to avoid the magic—but I decline to do either.
“that it is dangerous to communication to use the term ‘free will’ in any sense other than freedom from causality”
Why is that? There are many things that can keep your will from being done. Eliminating them makes your will more free. Furthermore, freedom from causality is pretty much THE most dangerous definition for free will, because it makes absolutely, positively no sense. Freedom from causality is RANDOMNESS.
“Therefore free will vs determinism is not a productive argument.”
We don’t have this argument here. We believe that free will requires determinism. You aren’t free if you have no idea what the hell is about to happen.
FYI: You can make quotes look extra cool by placing a ‘>’ at the start of the line. More information on comment formatting can be found in the help link below the comment box.
Tango Yankee.
Does that mean we should stop exonerating people who did bad things under duress? (IOW, your stipulation about FW would change the way the word is used in law).
Does that mean we should stop saying that classical chaos is deterministic? (IOW, your stipulation about “deterministic” would change the way the word is used by physicists).
I believe the “free will” thing is because without it, you could talk about whether or not a rock is moral. You could just say whether or not the universe is moral.
I consider morality to be an aspect of the universe (a universe with happier people is better, even if nobody’s responsible), so I don’t see any importance of free will.
I don’t understand, you cannot talk about whether a rock is moral?
Given that a rock appears to have no way to receive input from the universe, create a plan to satisfy its goals, and act, I would consider a rock morally neutral—in the same way that I consider someone to be morally neutral when they fail to prevent a car from being stolen while they are in a coma in another country.
I believe you are missing Kant’s point regarding free will. People have free will. Rocks don’t. And that is why it makes moral sense for you to want a universe with happy people, and not a universe with happy rocks!
People deserve happiness because they are morally responsible for causing happiness. Rocks take no responsibility, hence those of us who do take responsibility are under no obligation to worry about the happiness of rocks.
Utilitarians of the LessWrong variety tend to think that possession of consciousness is important in determining whether some entity deserves our moral respect. Kant tended to think that possession of free will is important.
As a contractarian regarding morals, I lean toward Kant’s position, though I would probably express the idea in different language.
Generally speaking, I’m uneasy about any reduction from a less-confused concept to a more-confused concept. Free will is a more confused concept than moral significance. Also, I can imagine things changing my perspective on free will that would not also change my perspective on moral significance. For example, if we interpret free will as unsolvability by rivals, then the birth of a superintelligence would cause everyone to lose their free will, but have no effect on anyone’s moral significance.
A cognitive agent with intentions sounds like it’s at least in the same conceptual neighborhood as free will. Perhaps free will has roughly the same role in their models of moral action as intentions do in your model.
If a tornado kills someone we don’t say that it acted immorally but if a man does we do (typically). What’s the difference between the man and the tornado? While the tornado was just a force of nature, it seems like there’s some sense in which the man was an active agent, some way in which the man (unlike the tornado) had control of his actions, chose to kill, or willed the consequences of his actions.
One approach, which many philosophers have taken, is to give the label “free will” to that meaning of agency/control/choice/will/whatever which allows the man to have moral responsibility while the tornado does not, and then work to define what exactly it consists of. That might not be the best move to make, given the many existing definitions and connotations of the term “free will” and all of the attachments and confusions they create, but it’s not an inexplicable one.
If punishing tornados changed their behaviour, then we would try to punish tornados. An event appears to be intentional (chosen) when it’s controlled by contingencies of reward and punishment.
There are exceptions to this characterisation of will. When there is a power imbalance between those delegating rewards and punishments and those being influenced by rewards and punishments, the decision is sometimes seen as less than free, and deemed exploitation. Parents and governments are generally given more leeway with regards to power imbalances.
When particular rewards have negative social consequences, they’re sometimes called addictive. When particular punishments have negative social consequences, their use is sometimes called coercive and/or unjust.
I don’t understand this sentence. Morality is a property of a system that can be explained in terms of its parts. A cognitive agent is also a system of parts, parts which on their own do not exhibit morality.
If something is judged to be beautiful then the pattern that identifies beauty is in the mind of the agent and exhibited by the object that is deemed beautiful. If the agent ceases to be then the beautiful object does still exhibit the same pattern. Likewise if a human loses its ability to proclaim that the object is beautiful, it is still beautiful. If you continued to remove certain brain areas, or one neuron at a time, at what point does beauty cease to exist?
I meant that we attribute morality to an agent. Suppose agent A1 makes a decision in environment E1 that I approve of morally, based on value set V. You can’t come up with another environment E2, such that if A1 were in environment E2, and made the same decision using the same mental steps and having exactly the same mental representations, I will say it was immoral for A1 in environment E2 according to value set V.
You can easily come up with an environment E2 where the outcome of A1’s actions is bad. If you change the environment enough, you can come up with an E2 where A1’s values consistently lead to bad outcomes, and so A1 “should” change its values (for some complicated and confusing value of “should”). But, if we’re judging the morality of A1’s behavior according to a constant set of values, then properties of the environment which are unknown to A1 will have no impact on our (or at least my) judgement of whether A1’s decision was moral.
A simpler way of saying all this is: Information unknown to agent A has no impact on our judgement of whether A’s actions are moral.
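A toy sketch of that claim (the value set and the medicine scenario are made up, not from the discussion above): the judgment function simply never takes the true environment as an argument, so facts unknown to the agent cannot change the verdict.

```python
# A made-up toy, not a proposal: the judgment depends only on the agent's
# beliefs, decision, and values; the true environment never enters.

def judge(agent_beliefs, decision, value_set):
    """Note the signature: no environment argument at all."""
    expected = agent_beliefs["expected_outcome"][decision]
    return "moral" if value_set(expected) else "immoral"

value_set = lambda outcome: outcome == "nobody harmed"
beliefs = {"expected_outcome": {"give medicine": "nobody harmed"}}

env_E1 = {"medicine": "genuine"}             # the medicine actually works
env_E2 = {"medicine": "secretly poisoned"}   # a fact the agent could not have known

# Same beliefs, same decision, same values: the verdict is identical whether
# the world is E1 or E2, because judge() never looks at the environment.
print(judge(beliefs, "give medicine", value_set))
```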
This is a tricky problem. Is morality, like beauty, something that exists in the mind of the beholder? Like aesthetic judgements, it exists relative to a set of values, so probably yes.
So you (or perhaps some extrapolated version of you) would say that a thermostat in a human’s house set to 65 degrees F is moral, because it does the right thing, while a thermostat set to 115 is immoral because it does the wrong thing. Meanwhile one of those free will people would say that a thermostat is neither moral nor immoral, it is just a thermostat.
The main difference seems to be the importance of “moral responsibility,” which, yes, is mixed up with god, but more importantly is a key part of human emotions, mostly emotions dealing with punishment and reward. It is entirely possible to imagine a picture of morality that only makes sense in light of some agents being morally responsible, and that picture seems to be all over the place in our culture. Free will, nebulous though it is, certainly is linked to moral responsibility because “only you are ultimately responsible for your actions” is one of the many partial definitions of free will.
Right—but, that’s where this post starts off. I described the view you just described in your second paragraph, and acknowledged that it’s the majority view, then argued against it.
I don’t think you said this. Your two options for people were “Most people conceive of morality in a way that assumes soul-body duality.” and “They worry about philosophical free will when they mean to worry about intention.”
You seem to be neglecting the possibility that “morality” exists not to refer to a clear set of things in the world, but instead to refer to an important thing that the human mind does.
If you want an alternative to the word ‘morality’ that means what you want ‘morality’ to mean, I have found good results using the phrase “right-and-wrong-ness”.
Do note that this often takes a turn through intuitionism, and it can be hard to drag less-clear thinkers out of that mire.
While morality seems closely related to (a) signaling to other people that you have the same values and are trustworthy and won’t defect or (b) being good to earn “points”, neither of these definitions feel right to me.
I hesitate to take (a) because morality feels more like a personal, internal institution that operates for the interests of the agent. Even if the outcome is for the interests of society, and that this is some explanation for why it evolved, that doesn’t seem to reflect how it works.
I feel that (b) seems to miss the point: we aren’t good in order to pragmatically “get points” for something. When we use a ‘morality’ term separate from pragmatism or cooperation, we’re acknowledging that points are given based on something more subtle and complex than pragmatism or cooperation (e.g., ‘God’s preferences’ is a handle). (I mean, we’re good because we want to be, and ‘getting points’ is just a way of describing this. We wouldn’t do anything unless it meant getting some kind of points, either real or abstract.)
I wrote down a hypothesis for morality a week ago and decided I would think about it later.
Nazgulnarsil wrote:
I’m considering that moral means not subverting one’s own utility function.
Humans seem to have a lot of plasticity in choosing what their values are and what their values are about. We can think about things a certain way and develop perspectives that lead to values and actions that are extremely different from our initial ideas of what moral is. (For example, people presumably just like myself have torn children from their parents and sent them to starve in death camps.) It stands to reason we would need a strong internal protection system—some system of checks and balances—to keep our values intact.
Suppose we consider that we should always do whatever is pragmatically correct (pragmatic behavior includes altruistic, cooperative behavior) except when an action is suspected to subvert our utility function. I imagine that our utility function could be subverted if an action makes us feel hypocritical, and thus forces us to devalue a value that we had.
For example, we all value other people (especially particular people). But if we would kill someone for pragmatic reasons (that is, we have some set of reasons for wanting to do so that outweigh reasons for not wanting to), we can still decide we wouldn’t kill them for this one other reason: we want to value not killing other people.
This is very subtle. Already, we do value not killing other people, but this has already been weighted in the decision and we still decide we would—pragmatically—commit the murder. But we realize that if we commit the murder for these pragmatic reasons, even though it seems for the best given our current utility function, we can no longer pretend that we value life so much, and we may see a slippery slope where it will be easier to kill someone in the future because now we know this value isn’t so strong.
If we do commit the murder anyway, because we are pragmatic rather than moral, then the role of guilt could be to realign and reset our values. “I killed him because I had to but I feel really bad about it; this means I really do value life.”
So finally, morality could be about protecting values we have that aren’t inherently stable.
I think I’m on the same page with you re Kant. Tell me if I’ve understood the other ideas you’re advancing in this post:
The problem of understanding morality just is the problem of understanding which actions are moral.
An action is moral only if (but not if and only if) it was intended to be moral.
Did I miss the point?
Can you spell out what you mean by this? Are intentions something a thermostat has intrinsically, or something that I can ascribe to it?
Asking whether a thermostat has intentions intrinsically, or whether we only ascribe intentions to it, is what I meant by asking about the phenomenological status of these intentions. If I ask whether Jim really has intentions, or whether Jim is a zombie whom I am merely ascribing intentions to, I’m really asking (I think) whether Jim has free will. If morality is just about doing the right thing, then we don’t need to ask that.
The free will question may still be interesting and important; but I’d like to separate it from the question of what actions are moral. I want there to be only one fundamental moral question: What is the right set of values? The “morality requires free will” viewpoint introduces an entirely different question, which I think should be its own thing.
I don’t think Kant thought about getting to the afterlife. My impression of Kant is that he was essentially agnostic about both God and the afterlife (although he considered them to be a very interrelated pair of questions) but thought it was healthier for individuals and society to believe in them.
I’ll strike that—I didn’t mean that he was obsessed with a particular story about heaven, the way Martin Luther was. I meant, more abstractly, that he saw the central question as when to give people credit for their actions.
You don’t think the two are related? I think that a pretty good case can be made that:
You should give people credit for their actions when they do the right thing.
If your own intuitions aren’t sufficiently convincing at instructing you regarding “What is the right thing to do?”, you can get a ‘second opinion’ by observing what kinds of things people receive credit for.
The first question is related to the second question in ethical systems in which you get credit for doing the right things. They should still be two separate questions.
In some types of Christianity, they aren’t related, because there is no “right thing to do”, there is only what God tells you to do. This is described as “the right thing to do”, but it’s what I called a macro rather than a primitive: There is no new ontological category of “right things”; you just need to learn what things God says to do.
“Moral” and “legal” mean different things anyway. It makes sense to say that someone did the legally wrong thing, but was not culpable. We regularly make such decisions where someone is exonerated on grounds of being a minor, insane, etc. There is a link between legality and morality if it is expressed as something like “illegal acts are those acts which are morally wrong when committed by a moral agent”.
I don’t see how that follows. If we believe that intentions and volitions exist, and have naturalistic roots in certain brain mechanisms, then their possessing a brain condition could affect our credit assignment. Naturalists can be libertarians too.
That is to beg the question against the idea that morality is in fact dependent on philosophical free will. The point remains that practical/legal ethics can and should be considered separately from philosophical free will (but not from practical FW of the kind removed by having a gun pointed at your head). However, practical/legal/social ethics already are considered largely separately from the philosophical questions. It might be the case that the two constellations of issues are merged in the thinking of the general public, but that is not greatly impactful since actual law and philosophy are written by different groups of differently trained specialists.
There are quite a lot of people who think there are no objective capital-m Morals, in the philosophical sense. Noticeably, they don’t go around eating babies, or behaving much differently to everyone else. Presumably they have settled for small-m practical morals. So, again, something very like the distinction you are calling for is already in place.
I don’t see any evidence for that.
Meaning philosophical free will? Surely having a gun pointed at one’s head is quite relevant to the issue of why one did not do as one ought.
ETA In summary: it is not FW versus morality. There is a link between FW and morality at the level of philosophical discourse, and another link between (another version of) FW and (another version of) morality at the pragmatic/legal level. Goetz’s requirements can be satisfied by sticking at the legal/practical level. However, that is not novel. The stuff about God and the soul is largely irrelevant.
The problem with Goetz’s Cognitive Razor is that humans are internally complex.
It seems like the right perspective to think about things goes something like this:
Facts about the world can be good or bad. It is good, for instance, when people are happy and healthy, and bad when they are not.
1. It is bad that Alice fell and hit her head.
2. It is bad that Bob, due to dizziness, stumbled and hit his head.
3. It is bad that Carol, due to a sudden bout of violent behavior, momentarily decided to punch Dan in the head.
4. It is bad that Erin carried out a plan over a period of weeks to punch Fred in the head.
These are all pretty much equally bad, but 4 and possibly 3 are also someone’s responsibility, and therefore morality is involved.
Some facts about the world are some people’s responsibility. These seem to be some fraction of the facts that are true about their brain—yes 4 but not 2. Good things that are people’s responsibility are moral in a different sense than good things that are not people’s responsibility.
But this responsibility is philosophically very fuzzy and mostly isn’t a useful concept.
Interesting breakdown.
My interpretation is that facts about the world are interpreted as good or bad by a brain capable of feeling pain, the usual indicator that a world-state is ‘bad’, and pleasure, the indicator that it is ‘good’. Outside of the subjective, there are facts but not values. In the subjective, there are values of good and bad.
If I understand correctly what you’re saying, it’s that a fact having positive or negative value assigned to it by a brain (i.e. Alice falling and hitting her head) does not necessarily imply that this fact has a moral flavour attached to it by the same brain. It’s not wrong that Alice fell, it’s just bad...but it is wrong that Carol hit Dan. Am I reading your argument correctly?
What you’re saying is true, but doesn’t touch on the distinction that the post is about. The post contrasts two positions, both of which would agree with everything you just said.
It’s a step on the way to dissolve or pseudo-dissolve the question.
Separating concepts is itself a moral action. Moral actions should relate to moral agents. Most of the moral agents who use these concepts aren’t here on lesswrong. They include the kind of people who hear “free will is an illusion” from a subjectively credible source and mope around for the rest of their lives.
“What happens then when agents’ self-efficacy is undermined? It is not that their basic desires and drives are defeated. It is rather, I suggest, that they become skeptical that they can control those desires; and in the face of that skepticism, they fail to apply the effort that is needed even to try. If they were tempted to behave badly, then coming to believe in fatalism makes them less likely to resist that temptation.”
—Richard Holton[210]
Baumeister and colleagues found that provoking disbelief in free will seems to cause various negative effects. The authors concluded, in their paper, that it is belief in determinism that causes those negative effects.[205] This may not be a very justified conclusion, however.[210] First of all, free will can at least refer to either libertarian (indeterministic) free will or compatibilistic (deterministic) free will. Having participants read articles that simply “disprove free will” is unlikely to increase their understanding of determinism, or the compatibilistic free will that it still permits.[210]
In other words, “provoking disbelief in free will” probably causes a belief in fatalism. As discussed earlier in this article, compatibilistic free will is illustrated by statements like “my choices have causes, and an effect – so I affect my future”, whereas fatalism is more like “my choices have causes, but no effect – I am powerless”. Fatalism, then, may be what threatens people’s sense of self-efficacy. Lay people should not confuse fatalism with determinism, and yet even professional philosophers occasionally confuse the two. It is thus likely that the negative consequences below can be accounted for by participants developing a belief in fatalism when experiments attack belief in “free will”.[210] To test the effects of belief in determinism, future studies would need to provide articles that do not simply “attack free will”, but instead focus on explaining determinism and compatibilism. Some studies have been conducted indicating that people react strongly to the way in which mental determinism is described, when reconciling it with moral responsibility. Eddy Nahmias has noted that when people’s actions are framed with respect to their beliefs and desires (rather than their neurological underpinnings) they are more likely to dissociate determinism from moral responsibility.[211]
Various social behavioural traits have been correlated with the belief in deterministic models of mind, some of which involved the experimental subjection of individuals to libertarian and deterministic perspectives.
After researchers provoked volunteers to disbelieve in free will, participants lied, cheated, and stole more. Kathleen Vohs has found that those whose belief in free will had been eroded were more likely to cheat.[212] In a study conducted by Roy Baumeister, after participants read an article arguing against free will, they were more likely to lie about their performance on a test where they would be rewarded with cash.[213] Provoking a rejection of free will has also been associated with increased aggression and less helpful behaviour[214][215] as well as mindless conformity.[216] Disbelief in free will can even cause people to feel less guilt about transgressions against others.[217]
Baumeister and colleagues also note that volunteers disbelieving in free will are less capable of counterfactual thinking.[205][218] This is worrying because counterfactual thinking (“If I had done something different…”) is an important part of learning from one’s choices, including those that harmed others.[219] Again, this cannot be taken to mean that belief in determinism is to blame; these are the results we would expect from increasing people’s belief in fatalism.[210]
Along similar lines, Tyler Stillman has found that belief in free will predicts better job performance.[220]
to me, morality means not disastrously/majorly subverting another’s utility function for a trivial increase in my own utility.
edit: wish the downvoters would give me some concrete objections.
Do you mean that “not disastrously/majorly subverting another’s utility function for a trivial increase in my own utility” is ethical, in the sense that this is a safety measure so that you don’t accidentally cause net negative utility with regard to your own utility function (as a result of limited computing power)?
Or do you mean that you assign negative utility to causing someone else negative utility according to their utility function?
causing negative utility is not the same as disastrously subverting their utility function.
It’s strange that you haven’t explained what you mean by ‘disastrously subverting’.
slipping the pill that makes you want to kill people into gandhi’s drink without his knowledge is the simplest example.
Now I just think it’s odd that you have “refraining from non-consensual modification of others’ wants/values” as the sole meaning of “morality”.
The “it is strange”, “I think it is odd” style of debate struck me as disingenuous.
Okay, “stupid” if you prefer :)
Better. :)
I was really just annoyed at the lack of clarity in that statement. I could have just said so, in fewer words (or said nothing).
Your critique was justified, and your less presumptuous “struck me as” made it easier for me to think rather than argue.
I can see why you would be. That is, after I clicked back through half a dozen comments to explore the context I could see why you would be annoyed. Until I got back here the only problem with nazgulnarsil’s comments was the inexcusably negligent punctuation.
Exploring the intuition behind my objection, purely for the sake of curiosity: your style of questioning is something that often works in debates completely independently of merit. To use your word, it presumes a state of judgment from which you can label nazgulnarsil’s position ‘strange’ without really needing to explain directly. The metaphorical equivalent of a contemptuous sneer. Because it is a tactic that is so effective independently of merit in this context, I instinctively cry ‘foul’.
The thing is, a bit of contempt is actually warranted in this case. Taken together with his earlier statements, the effective position nazgulnarsil was taking was either inconsistent or indicative of something utterly bizarre. But as I have learned the hard way more than once, you can’t afford to claim the intellectual high ground and be dismissive unless you first make sure that the whole story is clear to the casual observer within one leap.
I suspect if your comment had included a link to the two comments which taken together make nazgulnarsil’s position strange it would have met my vocal approval.
If we’re just talking about rhetoric here, I prefer “odd” to “stupid” but would prefer “wrong” or “unjustified” (depending on which one you actually mean) to either.
That strikes me as a low bar. Would you disastrously subvert someone else’s utility function to majorly increase yours?
“Subversion” seems unspecific. Does that mean, would I go back in time and use my amazing NLP powers or whatever to convince Hitler to try art school again instead of starting a world war and putting millions into death camps? Or is this “subversion” more active and violent?
it goes both ways. those who try to disastrously subvert others as part of their utility get less moral consideration.
depends. no hard and fast rule. http://www.youtube.com/watch?v=KiFKm6l5-vE
Suppose I program a robot to enumerate many possible courses of action, determine how each one will affect every person involved, take a weighted average and then carry out the course of action which produces the most overall happiness. But I deliberately select a formula which will decide to kill you. Suppose the robot is sophisticated enough to suffer. Is it right to make the robot suffer? Is it right to make me suffer? Does it make a difference whether the key to the formula is a weird weighting or an integer underflow?
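For concreteness, here is a sketch of the decision procedure described above (the actions, happiness estimates, and weights are all invented); whether the robot decides to kill you can hinge entirely on the weights the programmer chose.

```python
# A sketch of the robot described above, with invented actions, happiness
# estimates, and weights. The "kill you" outcome can come from the weights alone.

def choose_action(actions, people, weights):
    """Pick the action with the highest weighted-average predicted happiness."""
    def weighted_happiness(action):
        total = sum(weights[p] * action["happiness"][p] for p in people)
        return total / sum(weights[p] for p in people)
    return max(actions, key=weighted_happiness)

actions = [
    {"name": "leave you alone", "happiness": {"you": 10,   "everyone else": 5}},
    {"name": "kill you",        "happiness": {"you": -100, "everyone else": 6}},
]
people = ["you", "everyone else"]

fair_weights   = {"you": 1,     "everyone else": 1}
rigged_weights = {"you": 0.001, "everyone else": 1}   # the deliberately "weird weighting"

print(choose_action(actions, people, fair_weights)["name"])    # leave you alone
print(choose_action(actions, people, rigged_weights)["name"])  # kill you
```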