The consequentialist model of blame is very different from the deontological model. Because all actions are biologically determined, none are more or less metaphysically blameworthy than others, and none can mark anyone with the metaphysical status of “bad person” and make them “deserve” bad treatment. Consequentialists don’t on a primary level want anyone to be treated badly, full stop; thus is it written: “Saddam Hussein doesn’t deserve so much as a stubbed toe.” But if consequentialists don’t believe in punishment for its own sake, they do believe in punishment for the sake of, well, consequences.
Sort-of nitpick:
I would say “utilitarians” rather than “consequentialists” here; while both terms are vague, consequentialism is generally more about the structure of your values, and there’s no structural reason a consequentialist (/ determinist) couldn’t consider it desirable for blameworthy people to be punished. (Or, with regard to preventative imprisonment of innocents, undesirable for innocents to be punished, over and above the undesirability of the harm that the punishment constitutes.)
I installed a mental filter that does a find and replace from “utilitarian” to “consequentialist” every time I use it outside very technical discussion, simply because the sort of people who don’t read Less Wrong already have weird and negative associations with “utilitarian”; by saying “consequentialist” I can avoid those associations entirely and usually keep the meaning of whatever I’m saying intact.
Less Wrong does deserve better than me mindlessly applying that filter. But you’d need a pretty convoluted consequentialist system to promote blame (and if you were willing to go that far, you could call a deontologist someone who wants to promote states of the world in which rules are followed and bad people are punished, and therefore a consequentialist at heart). Likewise, you could imagine a preference utilitarian who wants people to be punished just because e or a sufficient number of other people prefer it. I’m not sufficiently convinced to edit the article, though I’ll try to be more careful about those terms in the future.
I installed a mental filter that does a find and replace from “utilitarian” to “consequentialist” every time I use it outside very technical discussion,
I, for what it’s worth, think this is a good heuristic.
you’d need a pretty convoluted consequentialist system to promote blame (and if you were willing to go that far, you could call a deontologist someone who wants to promote states of the world in which rules are followed and bad people are punished, and therefore a consequentialist at heart). Likewise, you could imagine a preference utilitarian who wants people to be punished just because e or a sufficient number of other people prefer it.
I’m not sure how complicated it would have to be. You might have some standard of benevolence (how disposed you are to do things that make people happy) and hold that other things being equal, it is better for benevolent people to be happy. True, you’d have to specify a number of parameters here, but it isn’t clear that you’d need enough to make it egregiously complex. (Or, on a variant, you could say how malevolent various past actions are and hold that outcomes are better when malevolent actions are punished to a certain extent.)
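For concreteness, here is a minimal sketch of the kind of value function described above; the functional form, the bonus parameter, and the numbers are illustrative choices only, not anything the argument depends on.

```python
# A minimal sketch (illustrative only) of a desert-adjusted value function:
# total happiness, plus a modest bonus for happiness that accrues to
# benevolent people, so that other things being equal it is better for
# benevolent people to be happy.

def desert_adjusted_value(people, bonus=0.25):
    # people: iterable of (happiness, benevolence) pairs, benevolence in [0, 1].
    return sum(h + bonus * h * b for h, b in people)

# Same total happiness in both worlds; only who enjoys it differs.
world_a = [(8, 1.0), (2, 0.0)]  # the benevolent person is the happier one
world_b = [(2, 1.0), (8, 0.0)]  # the non-benevolent person is the happier one

print(desert_adjusted_value(world_a) > desert_adjusted_value(world_b))  # True
```

Two parameters (the size of the bonus and the benevolence scale) are enough to get the effect, which is some evidence that a view like this needn’t be egregiously complex.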
Also, I don’t think you can do a great job representing deontological views as trying to minimize the extent to which rules are broken by people in general. The reason is that deontological duties are usually thought to be agent-relative (and probably time-relative as well). Deontologists think that I have a special duty to see to it that I don’t break promises, in a way that I don’t have a duty to see to it that you don’t break promises. They wouldn’t be happy, for instance, if I broke a promise in order to see to it that you kept two promises of roughly equal importance. Now, if you think of deontologists as trying to satisfy some agent-relative and time-relative goal, you might be able to think of them as just trying to maximize the satisfaction of that goal. (I think this is right.) If you find this issue interesting (I don’t think it is all that interesting personally), googling “Consequentializing Moral Theories” should get you in touch with some of the relevant philosophy.
Agreed. (Though I agree with the general structure of your post.)
A better name for your position might be “basic desert skepticism”. On this view, no one is intrinsically deserving of blame. One reason is that I don’t think the determinism/indeterminism business really settles whether it is OK to blame people for certain things. As I’m sure you’ve heard, and I’d imagine people have pointed out on this blog, the prospects of certain people intrinsically deserving blame, independently of benefits to anyone, are not much more cheering if everything they do is a function of the outcome of indeterministic dynamical laws.
Another reason is that you can have very similar opinions if you’re not a consequentialist. Someone might believe that it is quite appropriate, in itself, to be extra concerned about his own welfare, yet agree with you about when it is a good idea to blame folks.
Another reason is that you can have very similar opinions if you’re not a consequentialist. Someone might believe that it is quite appropriate, in itself, to be extra concerned about his own welfare, yet agree with you about when it is a good idea to blame folks.
Hmm? There’s no reason a consequentialist can’t be extra concerned about his own welfare. (Did I misunderstand this?)
Well, you clearly could be extra concerned about your own welfare because it is instrumentally more valuable than the welfare of others (if you’re happy you do more good than your neighbor, perhaps). Or, you could be a really great guy and hold the view that it’s good for great guys to be extra happy. But I was thinking that if you thought that your welfare was extra important just because it was yours you wouldn’t count as a consequentialist.
As I was mentioning in the last post, there’s some controversy about exactly how to spell out the consequentialist/non-consequentialist distinction. But probably the most popular way is to say that consequentialists favor promoting agent-neutral value. And thinking your welfare is special, as such, doesn’t fit that mold.
Still, there are some folks who say that anyone who thinks you should maximize the promotion of some value or other counts as a consequentialist. I think this doesn’t correspond as well to the way the term is used and what people naturally associate with it, but this is a terminological point, and not all that interesting.
Consequentialism doesn’t correct the hedonic/deontological error of focusing on the agent by refusing to consider the agent at all. A consequentialist cares about what happens to the whole world, the agent included. I’d say the only correct understanding is for human consequentialists to care especially about themselves, though of course not exclusively.
Consequentialism doesn’t correct the hedonic/deontological error of focusing on the agent by refusing to consider the agent at all. A consequentialist cares about what happens to the whole world, the agent included.
I hope I didn’t say anything to make you think I disagree with this.
I’d say the only correct understanding is for human consequentialists to care especially about themselves, though of course not exclusively.
I noted that there might be instrumental reasons to care about yourself extra if you’re a consequentialist. But how could one outcome be better than another, just because you, rather than someone else, received a greater benefit?
Example: you and another person are going to die very soon and in the same kind of way. There is only enough morphine for one of you. Apart from making one of your deaths less painful, nothing relevant hangs on who gets the morphine.
I take it that it isn’t open to the consequentialist to say, “I should get the morphine. It would be better if I got it, and the only reason it would be better is because it was me, rather than him, who received it.”
Your preference is not identical with the other person’s preference. You prefer to help yourself more than you prefer to help the other person, and the other person likewise. There is no universal moral standard here. (You might want to try the metaethics sequence.)
Our question is this: is there a consequentialist view according to which it is right for someone to care more about his own welfare, as such? I said there is no such view, because consequentialist theories are agent-neutral (i.e., a consequentialist value function is indifferent between outcomes that are permutations of each other with respect to individuals and nothing else; switching Todd and Steve can’t make an outcome better, if Steve ends up with all of the same properties as Todd and vice versa).
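To make the agent-neutrality condition concrete, here is a small sketch; the function names and the welfare numbers are illustrative only, not anything from the discussion above.

```python
# A small sketch of the permutation test for agent-neutrality: a value function
# is agent-neutral (on an outcome) if reshuffling which person occupies which
# welfare level never changes the value it assigns.

from itertools import permutations

def total_welfare(outcome):
    # Agent-neutral: only the multiset of welfare levels matters.
    return sum(outcome.values())

def self_weighted(outcome, me="Steve", weight=2.0):
    # Not agent-neutral: "my" welfare counts extra, so switching Todd and Steve
    # changes the value assigned to the outcome.
    return weight * outcome[me] + sum(w for name, w in outcome.items() if name != me)

def is_agent_neutral_on(value_fn, outcome):
    names = list(outcome)
    baseline = value_fn(outcome)
    return all(
        value_fn({new: outcome[old] for new, old in zip(names, perm)}) == baseline
        for perm in permutations(names)
    )

outcome = {"Todd": 10, "Steve": 3}
print(is_agent_neutral_on(total_welfare, outcome))  # True
print(is_agent_neutral_on(self_weighted, outcome))  # False
```

The first function only sees the welfare levels; the second also sees who has them, which is exactly the feature agent-neutrality rules out.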
I agree that a preference utilitarian could believe that, in a version of the example I described, it could be better to help yourself. But that is not the case I described, and it doesn’t show that consequentialists can care extra about themselves, as such. My “consequentialist” said:
“I should get the morphine. It would be better if I got it, and the only reason it would be better is because it was me, rather than him, who received it.”
Yours identifies a different reason. He says, “I should get the morphine. This is because there would be more total preference satisfaction if I did this.” This is a purely agent-neutral view.
My “consequentialist” is different from your consequentialist. Mine doesn’t think he should do what maximizes preference satisfaction. He maximizes weighted preference satisfaction, where his own preference satisfaction is weighted by a real number greater than 1. He also doesn’t think his preferences are more important in some agent-neutral sense. He thinks that other agents should use a similar procedure, weighing their own preferences more than the preferences of others.
You can bring out the difference between the two views by considering a case where all that matters to the agents is having a minimally painful death. My “consequentialist” holds that even in this case, he should save himself (and likewise for the other guy). I take it that on the view you’re describing, saving yourself and saving the other person are equally good options in this new case. Therefore, as I understand it, the view you described is not a consequentialist view according to which agents should always care more about themselves, as such. Perhaps we are engaged in a terminological dispute about what counts as caring about your own welfare more than the welfare of others, just because it is yours?
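To put numbers on that new case (the satisfaction values and the weight w are illustrative choices, not anything from the comment above): suppose each man assigns satisfaction 1 to getting the morphine and 0 otherwise. Then:

```latex
% Agent-neutral total satisfaction: the two options tie.
\[
V_{\text{neutral}}(\text{I get it}) = 1 + 0 = 1 = 0 + 1 = V_{\text{neutral}}(\text{he gets it})
\]
% Self-weighted satisfaction, with some weight $w > 1$ on my own preferences:
% each man's own value function strictly favors saving himself.
\[
V_{\text{me}}(\text{I get it}) = w \cdot 1 + 0 = w > 1 = w \cdot 0 + 1 = V_{\text{me}}(\text{he gets it})
\]
```

On the agent-neutral reading the two options are equally good; on the weighted reading each man should take the morphine himself, which is the divergence described above.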
I said there is no such view, because consequentialist theories are agent-neutral (i.e., a consequentialist value function is indifferent between outcomes that are permutations of each other with respect to individuals and nothing else; switching Todd and Steve can’t make an outcome better, if Steve ends up with all of the same properties as Todd and vice versa)
I don’t think this is a necessary property for a value system to be called consequentialist. Value systems can differ in which properties of agents they care about, and a lot of value systems single out the agent that implements them as a special case.
This is where things get murky. The traditional definition is this:
Consequentialism: an act is right if no other option has better consequences
You can say that it is consistent with consequentialism (in this definition) to favor yourself, as such, only if you think that situations in which you are better off are better than situations in which a relevantly similar other is better off. Unless you think you’re really special, you end up thinking that the relevant sense of “better” is relative to an agent. So some people defend a view like this:
Agent-relative consequentialism: For each agent S, there is a value function Vs such that it is right for S to A iff A-ing maximizes value relative to Vs.
When a view like this is on the table, consequentialism starts to look pretty empty. (Just take the value function that ranks outcomes solely based on how many lies you personally tell.) So some folks think, myself included, that we’d do better to stick with this definition:
Agent-neutral consequentialism: There is an agent-neutral value function v such that an act is right iff it maximizes value relative to v.
I don’t think there is a lot more to say about this, other than that paradigm historical consequentialists rejected all versions of agent-relative consequentialism that allowed the value function to vary from person to person. Given the confusion, it would probably be best to stick to the latter definition or always disambiguate.
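As an illustration of the emptiness worry, here is a small sketch of the lie-counting value function from the parenthetical above; the names and numbers are illustrative only.

```python
# A sketch of the degenerate agent-relative value function mentioned above:
# it ranks outcomes solely by how many lies the agent personally tells.

def my_lies_value(lies_told, me):
    # Higher is better; everyone else's lies are simply ignored.
    return -lies_told[me]

outcome_a = {"me": 0, "you": 5}  # I tell no lies; you tell five
outcome_b = {"me": 1, "you": 0}  # I tell one lie; nobody else lies

# "Maximizing value" here prefers A, even though B contains far fewer lies in
# total -- i.e., the deontological rule "don't lie" dressed up as maximization.
print(my_lies_value(outcome_a, "me") > my_lies_value(outcome_b, "me"))  # True
```

On the agent-relative schema this counts as consequentialism, which is part of why some of us prefer the agent-neutral definition.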
When a view like this is on the table, consequentialism starts to look pretty empty. (Just take the value function that ranks outcomes solely based on how many lies you personally tell.)
Consequentialist value systems are a huge class; of course not all consequentialist value systems are praiseworthy! But there are terrible agent-neutral value systems, too, including conventional value systems with an extra minus sign, Clippy values, and plenty of others.
Here’s a non-agent-neutral consequentialist value system that you might find more praiseworthy: prefer the well-being of friends and family over that of strangers.
Consequentialist value systems are a huge class; of course not all consequentialist value systems are praiseworthy! But there are terrible agent-neutral value systems, too, including conventional value systems with an extra minus sign, Clippy values, and plenty of others.
Yeah, the objection wasn’t supposed to be that the definition of “consequentialism” is bad because it admits some implausible consequentialist view. The objection was that pretty much any maximizing view could count as consequentialist, so the distinction isn’t really worth making.