There is plenty of room for willpower in ethics-as-taste once you have a sufficiently complicated model of human psychology in mind. Humans are not monolithic decision makers (let alone do they have a coherent utility function, as others have mentioned).
Consider the “elephant and rider” model of consciousness (I thought Yvain wrote a post about this but I couldn’t find it; in any case I’m not referring to this post by lukeprog, which is talking about something else). In this model, we divide the mind into two parts—we’ll say my mind just for concreteness. The first part represents my conscious mind. The second part represents my unconscious mind.
In the elephant rider metaphor, my conscious mind is the rider and my unconscious mind is the elephant. The rider is in “control” of the elephant in the sense that he sits on top of the elephant and tells it where to go and the elephant listens, by and large. However, the rider doesn’t make the elephant do anything too objectionable. If the elephant really wanted to throw the rider off its back and escape, there’s nothing the rider could do to stop it. The relationship between the conscious mind and the unconscious mind is similar. My conscious mind acts as if it has full control of me, but it’s typically doing things that my unconscious mind doesn’t object to much or even heartily endorses. However, if my unconscious mind decides that I just need to punch this guy, consequences be damned, and floods my conscious mind with overwhelming emotions of anger and offence, there’s little my conscious mind can do to regain control.
Now, this is obviously a gross simplification of the actual psychology of humans. It may make more sense to think of the rider as the collection of agents/programs/modules that make up my conscious mind and the elephant as everything else—but then again, it’s not even clear that the relevant distinction is conscious vs. unconscious, or even if there’s a hard line at all.
In any case, willpower to do the right thing falls out of this simple model precisely because it doesn’t view humans as having monolithic minds. Assume the rider really wants the elephant to go somewhere and do something that the elephant objects strongly to. As long as the elephant doesn’t object too much, the rider can do some things to get the elephant to comply, though the rider can’t do them too often without completely losing control of the elephant. For example, the rider might smack the elephant on the head with a stick every time it goes the wrong direction. The elephant may comply at first, but enough smacking and that elephant just might revolt, and anyway it requires considerably more effort on the rider’s part just to keep the elephant moving in the right direction. Analogously, when my conscious mind wants my unconscious mind to comply with something it doesn’t want, it requires effort from my conscious mind to keep my unconscious mind from derailing the choice. For example, it took considerable effort from me last night to study for my microeconomics exam on Friday instead of watching the NLDS.
This effort on the conscious mind’s part is exactly what willpower is. Suppose my unconscious mind thinks I should take a large sum of money from known genocidal dictators in exchange for weapons but my conscious mind thinks this would be a downright evil thing to do. It may take considerable effort on the part of my conscious mind in order to keep my unconscious mind from taking full control of me and collecting the dividends.
Purchasing fuzzies and utilons separately also makes sense in this context. Fuzzies satisfy the elephant—my unconscious mind—so that the rider can maintain control. My unconscious mind wants to feel like I did the right thing, but my conscious mind wants to actually do the right thing. I can’t ignore my unconscious mind and purchase no fuzzies, because it will eventually assert full control and get what it wants—all of those fuzzies—at the expense of everything that my conscious mind wants. So in order to keep this from happening, Eliezer suggests that I consciously purchase fuzzies in a cost-effective manner to keep my unconscious mind in check (essentially, to purchase willpower), and then separately purchase utilons using cold calculation. I (i.e. my conscious mind) need to purchase fuzzies to stay in control, but it’s all in the service of getting utilons. The idea isn’t to maximize two utility functions, but to purchase willpower to continue maximizing one—keep that unconscious mind in check! (or rather those unconscious modules)
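To make the “one utility function plus a willpower constraint” framing concrete, here is a toy sketch in Python. Nothing in it comes from Eliezer’s post; the budget, the FUZZY_MINIMUM threshold, and the linear utilons function are all made-up assumptions, chosen only to show the shape of the idea: spend just enough on fuzzies to keep the elephant compliant, then pour everything else into utilons.

```python
# Toy model, purely illustrative: allocate a fixed budget of effort between
# fuzzies (which keep the unconscious mind cooperative) and utilons (the
# thing the conscious mind actually wants to maximize).

BUDGET = 10.0         # hypothetical units of time/money/effort
FUZZY_MINIMUM = 3.0   # hypothetical: least fuzzy-spending that preserves willpower

def utilons(spent):
    """Utilons bought by cold calculation (assumed linear for simplicity)."""
    return 2.0 * spent

def allocate(budget=BUDGET, fuzzy_minimum=FUZZY_MINIMUM):
    # Fuzzies never enter the objective; they are only a constraint that keeps
    # the rider in control. Everything left over goes toward utilons.
    fuzzy_spend = fuzzy_minimum
    utilon_spend = budget - fuzzy_spend
    return fuzzy_spend, utilon_spend, utilons(utilon_spend)

if __name__ == "__main__":
    fuzzies, utilon_budget, gained = allocate()
    print(f"fuzzies: {fuzzies}, utilon spending: {utilon_budget}, utilons: {gained}")
```

The contrast with maximizing two utility functions is that fuzzy-spending appears only in the constraint, never in the quantity being maximized.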
You could respond to all of this glibly by arguing that the preferences at stake here are the preferences of the entire me, not just my conscious mind or any other subset you come up with—and you’ve taken that tack before. I think you’re wrong, but I’m not entirely sure how to argue against that position—other than asserting that given my experience, “I” feel like only a subset of “Matt_Simpson.” In any case, that’s where the disagreement is at.
I’ll accept that willpower means something like the conscious mind trying to rein in the subconscious. But when you use that to defend the “ethics as willpower” view, you’re assuming that the subconscious usually wants to do immoral things, and the conscious mind is the source of morality.
On the contrary, my subconscious is at least as likely to propose moral actions as my conscious. My subconscious mind wants to be nice to people. If anything, it’s my conscious mind that comes up with evil plans; and my subconscious that kicks back.
I think there’s a connection with the mythology of the werewolf. Bear with me. Humans have a tradition at least 2000 years long of saying that humans are better than animals because they’re rational. We characterize beasts as bestial; and humans as humane. So we have the legend of the werewolf, in which a rational man is overcome by his animal (subconscious) nature and does horrible things.
Yet if you study wolves, you find they are often better parents and more devoted partners than humans are. Being more rational may let you be more effective at being moral; but it doesn’t appear to give you new moral values.
(I once wrote a story about a wolf that was cursed with becoming human under the full moon, and did horrible things to become the pack alpha that it never could have conceived of as a wolf. It wasn’t very good.)
In one of Terry Pratchett’s novels (I think it is The Fifth Elephant) he writes that werewolves face as much hostility among wolves as among humans, because the wolves are well aware which of us is actually the more brutal animal.
I agree. I’m not sure whether you’re accusing me of holding that position or not, so just to be clear: I wasn’t defending ethics as willpower—I was carving out a spot for willpower in ethics as taste. I’m not sure whether the conscious or unconscious is more likely to propose evil plans; only that both do sometimes (and thus the simple conscious/unconscious distinction is too simple).
Oh! Okay, I think we agree.
What do you call the part of your mind that judges whether proposed actions are good or evil?
I would need evidence that there is a part of my mind that specializes in judging whether proposed actions are good or evil.
You referred to some plans as good and some plans as evil; therefore, something in your mind must be making those judgements (I never said anything about specializing).
In that case, I call that part of my mind “my mind”.
The post could be summarized as arguing that the division of decisions into moral and amoral components, if it is even neurally real, is not notably more important than the division of decisions into near and far components, or sensory and abstract components, or visual and auditory components, etc.
Notice I said mind, not brain. So I’m not arguing that it necessarily always takes place in the same part of the brain.
Oh yes, I should probably state my position. I want to call the judgement about whether a particular action is good or evil the “moral” component, and everything else the “amoral” component. Thus ethics amounts to two things:
1) making the judgement about whether the action is good or evil as accurate as possible (this is the “wisdom” part)
2) acting in accordance with this judgement, i.e., performing good actions and not performing evil actions (this is the “willpower” part)
Why do you want to split things up that way? As opposed to splitting them up into the part requiring a quick answer and the part you can think about a long time (certainly practical), or the part related to short-term outcome versus the part related to long-term outcome, or other ways of categorizing decisions?