Exactly what is repugnant about utilitarianism? (Moi, I find that it leads to favoring torture over 3^^^3 specks, which is beyond facepalming; I’d like to hear your view.)
I guess the moral assumptions on the basis of which you condemn utilitarianism are the same ones you would propose instead. What moral theory do you espouse?
It’s inhuman, totalitarian slavery.
Islam and Christianity are big on slavery, but it’s mainly a finite list of do’s and don’ts from a Celestial Psychopath. Obey those, and you can go to a movie. Take a nap. The subjugation is grotesque, but it has an end, at least in this life.
Not so with utilitarianism. The world is a big machine that produces utility, and your job is to be a cog in that machine. Your utility is one seven-billionth of the equation—which rounds to zero. It is your duty in life to chug and chug and chug like a good little cog, without any preferential treatment from you, for you or anyone else you actually care about, all through your days without let.
And that’s only if you don’t better serve the Great Utilonizer ground into a human paste to fuel the machine.
A cog, or fuel. Toil without relent, or harvest my organs? Which is less of a horror?
Of course, some others don’t get much better consideration. They, too, are potential inputs to the great utility machine. Chew up this guy here, spit out 3 utilons. A net increase in utilons! Fire up the woodchipper!
But at least one can argue that there is a net increase of utilons. Somebody benefited. And whatever your revulsion at torture to avoid dust specks, hey, the utilon calculator says it’s a net plus, summed over the people involved.
No, what I object to is having a believer who reduces himself to less than a slave, to raw materials for an industrial process, held up as a moral ideal. It strikes me as even more grotesque and more totalitarian than the slavery lauded by the monotheisms.
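For scale, and to make the “net plus” arithmetic concrete: 3^^^3 is Knuth’s up-arrow notation, and the summation the argument relies on is just addition of utilities over everyone affected. A minimal sketch, with every utility figure (SPECK_COST, TORTURE_COST) invented purely for illustration:

```python
# Knuth's up-arrow notation: a^^b is a power tower of b copies of a, and
# a^^^b iterates ^^ in the same way. 3^^^3 = 3^^(3^^3) = 3^^7625597484987,
# a tower of threes about 7.6 trillion levels tall.

def up_arrow(a: int, n: int, b: int) -> int:
    """Compute a (up-arrow^n) b; only feasible for tiny inputs."""
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return up_arrow(a, n - 1, up_arrow(a, n, b - 1))

assert up_arrow(3, 1, 3) == 27             # 3^3
assert up_arrow(3, 2, 3) == 7625597484987  # 3^^3 = 3^(3^3)
# up_arrow(3, 3, 3) is 3^^^3: utterly beyond any possible computation.

# The aggregation step: for ANY positive per-speck cost, enough specks
# outweigh one torture. Illustrative, made-up numbers:
SPECK_COST = 1e-12    # hypothetical utilons lost per dust speck
TORTURE_COST = 1e9    # hypothetical utilons lost to the torture
n_specks = 10 ** 100  # a mere googol of specks, never mind 3^^^3
print(n_specks * SPECK_COST > TORTURE_COST)  # True: torture is the "net plus"
```

The aggregation is the whole argument: any fixed positive per-speck cost, multiplied over enough people, exceeds any fixed cost of the torture.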
I disagree, but my reasons are a little intricate. I apologize, therefore, for the length of what follows.
There are at least three sorts of questions you might want to use a moral system to answer. (1) “Which possible world is better?”, (2) “Which possible action is better?”, (3) “Which kind of person is better?”. Many moral systems take one of these as fundamental (#1 for consequentialist systems, #2 for deontological systems, #3 for virtue ethics) but in practice you are going to be interested in answers to all of them, and the actual choices you need to make are between actions, not between possible worlds or characters.
Suppose you have a system for answering question 1, and on a given occasion you need to decide what to do. One way to do this is by choosing the action that produces the best possible world (making whatever assumptions about the future you need to), but it isn’t the only way. There is no inconsistency in saying “Doing X will lead to a better world, but I care about my own happiness as well as about optimizing the world so I’m going to do Y instead”; that just means that you care about other things besides morality. Which pretty much everyone does.
(The same actually applies to systems that handle question 2 more directly. There is no inconsistency in saying “The gods have commanded that we do X, but I am going to do Y instead because it’s easier”. Though there might be danger in it, if the gods are real.)
Many moral systems have the property that if you follow them and care about nothing but morality then your life ends up entirely governed by that system, and your own welfare ends up getting (by everyday standards) badly neglected. If this is a problem, it is a problem with caring about nothing but morality, not a problem with utilitarianism or (some sorts of) divine command theory or whatever.
A moral system can explicitly allow for this; e.g., a rule-based system that tells you what you may and must do can simply leave a lot of actions neither forbidden nor compulsory, or can command you to take some care of your own welfare. A consequentialist system can’t do this directly—what sort of world is better shouldn’t depend on who’s asking, so if you decide your actions solely by asking “what world is best?” you can’t make special allowances for your own interest—but so what? You can take utilitarianism as your source of answers to moral questions, and then explicitly trade off moral considerations against your own interests in whatever way you please. (And utilitarianism doesn’t tell you you mustn’t. It only tells you that if you do that you will end up with a less-than-optimal world, but you knew that already.)
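A minimal sketch of that explicit trade-off, assuming a toy utility scale; the actions, the numbers, and the blend weight w are all invented, and w is precisely the “in whatever way you please” part that utilitarianism itself does not supply:

```python
# Toy decision rule: score each action by a personal blend of impartial
# world-utility and the agent's own welfare. The moral system supplies
# only the world-utility column; the weight w is an extra-moral choice.

# (action, world_utility, personal_utility) -- all numbers invented
actions = [
    ("donate a kidney",   +50.0, -30.0),
    ("donate some money", +10.0,  -1.0),
    ("take a nap",          0.0,  +2.0),
]

w = 0.3  # how heavily this agent weights morality against self-interest

def blended_score(world_u: float, personal_u: float, weight: float) -> float:
    return weight * world_u + (1 - weight) * personal_u

best = max(actions, key=lambda a: blended_score(a[1], a[2], w))
print(best[0])  # with w = 0.3 this prints "donate some money"
```

With w = 1 you recover the singleminded saint; any w < 1 is the “caring about other things besides morality” described above.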
A utilitarian doesn’t have to see their job as being a cog in the Great Utility Machine of the world. They can see their job however they please. All that being a utilitarian means is that when they come to ask a moral question, looking at the consequences and comparing utility is how they do it. Whether they then go ahead and maximize utility is a separate matter.
So, how should a utilitarian look at someone who cares about nothing but (utilitarian) morality—as a “moral ideal” or a grotesquely subjugated slave or what? That’s up to them, and utilitarianism doesn’t answer the question. (In particular, I’m not aware of any reason to think that considering such a person a “moral ideal” is a necessary part of maximizing utility.) It might, I suppose, be nice to have a moral system with the property that a life that’s best-according-to-that-system is attractive and nice to think about; but it would also be nice to have a physical theory with the property that if it’s true then we all get to live happily for ever, and a metaphysics with the property that it confirms all our intuitions about the universe; and, in each case, wishing won’t make it so: adopting a theory because it would be nice probably won’t work out well. Likewise, I suggest, for morality.
As for your rhetoric about machines and industrial processes: I don’t think “large-scale” is at all the same thing as “industrial”. Imagine, if you will, someone who would be admired by the Buddhist or Christian moral traditions, who is filled with love and compassion for everyone s/he sees and works hard to make their lives better even at great personal cost. Now expand this person’s awareness and compassion to encompass everyone in the world. What you get is pretty close to the “grotesquely subjugated” utilitarian saint, but there’s nothing machine-like or industrial about them: they do what they do out of an intensely personal awareness of everyone’s welfare or suffering. Their life might still be subjugated or grotesque, but that has nothing to do with industrial machinery.
You might want to protest that I’m cheating: that it’s wrong to call someone a utilitarian if they consider anything other than utility when making decisions. I think this would be a bit like some theists’ insistence that no one can properly be called an “atheist” if they admit the slightest smidgeon of doubt about the existence of deities. And I respond in roughly the same way in this case as in the other: you may use the words however you please, but if you restrict the word “utilitarian” to those who are completely singleminded about morality, you end up with hardly anyone coming under that description; for consistency you should do the same for every other moral system out there, and then just about everyone goes into a single big bucket of the not-completely-singleminded. Isn’t it better to classify people in a way that better matches the actual distribution of beliefs and attitudes, and say that someone is a utilitarian if they answer “what’s morally better?” questions by some kind of consideration of overall utility?
Lots to comment on here. That last paragraph certainly merits some comment.
Yes, most people are almost entirely inconsistent about the morality they profess to believe. At least in the “civilized world”. I get the impression of more widespread fervent and sincere beliefs in the less civilized world.
Do Christians in the US really believe all their rather wacky professions of faith? Or even the most tame, basic professions of faith? Very very few, I think. There are Christians who really believe, and I tend to like them, despite the wackiness. Honest, consistent, earnest people appeal to me.
For the great mass, I increasingly think they just make talking noises appropriate to their tribe. It’s not that they lie, it’s more that correspondence to reality is so far down the list of motivations, or even evaluations, that it’s not relevant to the noises that come from their mouths.
It’s the great mass of people who seem to instinctively say whatever is socially advantageous in their tribe that give me the heebie-jeebies. They are completely alien—which, given the relative numbers, means I am totally alien. A stranger in a strange land.
Isn’t it better to classify people in a way that better matches the actual distribution of beliefs and attitudes
Yes.
and say that someone is a utilitarian if they answer “what’s morally better?” questions by some kind of consideration of overall utility?
That’s what the tribesmen do, for the purposes of tribesmen.
For the purposes of judging an ideology, which is what I had done, my judgment is based on what it would mean for people to actually adhere to the ideology, and not just make noises as if they believe it.
For a number of purposes, knowing who has allegiance to what tribe matters. I don’t find the utilitarian tribe here morally abominable, but I do think preaching the faith they do is harmful, and I wish they’d knock it off, as I wish people in general would stop preaching all the various obscenities that they preach.
Then again, what does a Martian know about what is harmful for Earthlings?
Other issues.
“Doing X will lead to a better world, but I care about my own happiness as well as about optimizing the world so I’m going to do Y instead”
Not utilitarianism. In utilitarianism, your happiness and welfare count for one seven-billionth—that’s not even a rounding error; it’s undetectable.
Imagine, if you will, someone who would be admired by the Buddhist or Christian moral traditions, who is filled with love and compassion for everyone s/he sees and works hard to make their lives better even at great personal cost.
I’ve always found statements like this tremendously contradictory.
If he’s really so filled with love for other people, why is helping them “a great personal cost”, and not a great personal benefit? Me, I enjoy being useful, particularly to people I care about. Helping them is an opportunity, not a cost.
There is no inconsistency in saying “The gods have commanded that we do X, but I am going to do Y instead because it’s easier”.
What is there, for a supposed believer, is disobedience and sin. You seem tremendously cavalier about violating your professed moral code. Which, given your code, is probably a good thing, though my preference is for people to profess a decent faith that they actually follow, rather than an abomination that they don’t.
I’m repeating myself here, but: I think you are mixing up two things: utilitarianism versus other systems, and singleminded caring about nothing but morality versus not. It is the latter that generates attitudes and behaviour and outcomes that you find so horrible, not the former.
You are of course at liberty to say that the term “utilitarian” should only be applied to a person who not only holds that the way to answer moral questions is by something like comparison of net utility, but also acts consistently and singlemindedly to maximize net utility as they conceive it. The consequence, of course, will be that in your view there are no utilitarians and that anyone who identifies as a utilitarian is a hypocrite. Personally, I find that just as unhelpful a use of language as some theists’ insistence that “atheist” can only mean someone who is absolutely 100% certain, without the tiniest room for doubt, that there is no god. It feels like a tactical definition whose main purpose is to put other people in the wrong even before any substantive discussion of their opinions and actions begins.
why is helping them “a great personal cost”, and not a great personal benefit?
It’s both. (Just as a literal purchase may be both at great cost and of great benefit.) Which is one reason why, if this person—or someone who feels and acts similarly on the basis of utilitarian rather than religious ethics—acts in this way because they genuinely think it’s the best thing to do, then I don’t think it’s appropriate to complain about how grotesquely subjugated they are.
What do you believe my code to be, and why?
Seconding the question “What moral theory do you espouse?”
That was beautiful.
Under utilitarianism, human farming for research purposes and organ harvesting would be justified if it benefited enough future persons.
Under utilitarianism the ideal life is one spent barely subsisting while giving away all material wealth to effective altruism/charity. (The reason being that unless you are barely subsisting, there is someone who would benefit from your wealth more than you do.)
Also there is no way to compare interpersonal utility. There is a sense in which I might prefer A to B, but there is no sense in which I can prefer A more than you prefer B. We could vote, or bid money but neither of these results in a satisfactory ethical theory.
Also there is no way to compare interpersonal utility. There is a sense in which I might prefer A to B, but there is no sense in which I can prefer A more than you prefer B. We could vote, or bid money but neither of these results in a satisfactory ethical theory.
Perhaps not with utility theory’s usual definition of “prefer”, but in practice there is a commonsense way in which I can prefer A more than you prefer B, since we’re both humans with almost identical brain architecture.
Interesting, so your utilitarianism depends on agents having similar minds; it doesn’t try to be a universal ethical theory for sapient beings.
What exactly is that way in which you can prefer something more than I can? It is not common sense to me, unless you are talking about hedonic utilitarianism. Are you using intensity of desire or intensity of satisfaction as a criterion? Neither one seems satisfactory. People’s preferences do not always (or even mostly) align with either. I suppose what I’m asking is for you to provide a systematic way of comparing interpersonal utility.
If I say “I prefer not to be tortured more than you prefer a popsicle”, any sane human would agree. This is the commonsense way in which utility can be compared between humans. Of course, it isn’t perfect, but we could easily imagine ways to make it better, say by running some regression algorithms on brain-scans of humans desiring popsicles and humans desiring not-to-be-tortured, and extrapolating to other human minds. (That would still be imperfect, but we can make it arbitrarily good.)
This isn’t just necessary if you’re a utilitarian, it’s necessary if your moral system in any way involves tradeoffs between humans’ preferences, i.e. it’s necessary for pretty much every human who’s ever lived.
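A toy of the brain-scan regression proposal above, assuming made-up “scan” features and an invented shared intensity scale; a real interpersonal comparison would be enormously harder, but the sketch shows the shape of the idea:

```python
import numpy as np

# Hypothetical training data: each row is a brain-scan feature vector and
# each target is a desire intensity on one shared scale (anchored, say, by
# near-universal judgments like "not-torture beats a popsicle").
rng = np.random.default_rng(0)
true_w = np.array([4.0, -2.0, 0.5])               # invented neural weights
X = rng.normal(size=(100, 3))                     # 100 scans, 3 features
y = X @ true_w + rng.normal(scale=0.1, size=100)  # noisy intensity labels

# Fit the shared scan -> intensity map by ordinary least squares.
w_hat, *_ = np.linalg.lstsq(X, y, rcond=None)

# Extrapolate to two fresh scans, as the comment proposes:
me_wanting_no_torture = np.array([2.0, -1.5, 0.3])  # invented scan
you_wanting_popsicle = np.array([0.1, 0.0, 0.2])    # invented scan
print(me_wanting_no_torture @ w_hat > you_wanting_popsicle @ w_hat)  # True
```

The sketch only shows that some common scale can be estimated; whether that scale should be desire intensity, satisfaction, or something else is exactly what the rest of this exchange disputes.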
So you are a hedonic utilitarian? You think that morality can be reduced to intensity of desire? I already pointed out that human preferences do not reduce to intensity of desire.
I’m not any sort of utilitarian, and that has nothing to do with my point, which is that there obviously is a sense in which I can prefer A more than you prefer B.
That’s more like it being conditional on our cooperating. If my enemy said that, I could find it offensive, and it wouldn’t compel me to change my actions. If you try to use utilitarian theory to (en)force cooperation, the argument doesn’t go through.
I am very interested in this.