If you say “all that matters is pain and pleasure”, and I say “no! I care about other things!”, and you’re like “nope, not listening. PAIN AND PLEASURE ARE THE ONLY THINGS”, and then proceed to enact policies which minimize pain and maximize pleasure, without regard for any of the other things that I care about, and all the while I’m telling you that no, I care about these other things! Stop ignoring them! Other things matter to me! but you’re not listening because you’ve decided that only pain and pleasure can possibly matter to anyone, despite my protestations otherwise...
… well, I hope you can see how that would bother me.
It’s not just a matter of us caring about different things. If it were only that, we could acknowledge the fact and proceed to some sort of compromise. Hedonistic utilitarians, however, do not acknowledge that it’s possible, or that it’s valid, to care about things that are not pain or pleasure. All these people who claim to care about all sorts of other things must be misguided! Clearly.
Hedonistic utilitarians, however, do not acknowledge that it’s possible, or that it’s valid, to care about things that are not pain or pleasure.
They may think it’s incorrect if they’re realists, or cognitivists of some other form. But this has nothing to do with their being HUs, only with their being cognitivists.
[Description of situation] … well, I hope you can see how that would bother me.
Here are 3 non-exhaustive ways in which the situation you described could be bothersome:
(i) If your first-order ethical theory (as opposed to your meta-ethics), perhaps combined with very plausible facts about human nature, requires otherwise. For instance, if it speaks in favour of toleration or liberty here.
(ii) If you’re a cognitivist of the sort who thinks she could be wrong, it could increase your credence that you’re wrong.
(iii) If you’d at least on reflection give weight to the evident distress SaidAchmiz feels in this scenario, as most HUs would.
Hedonistic utilitarians, however, do not acknowledge that it’s possible, or that it’s valid, to care about things that are not pain or pleasure.
They may think it’s incorrect if they’re realists, or cognitivists of some other form. But this has nothing to do with their being HUs, only with their being cognitivists.
No, I don’t think this is right. I think you (and Kaj_Sotala) are confusing these two questions:
1. Is it correct to hold an ethical view that is something other than hedonistic utilitarianism?
2. Does it make any sense to intrinsically value anything other than pleasure, or intrinsically disvalue things other than pain?
#1 is a meta-ethical question; moral realism or cognitivism may lead you to answer “no”, if you’re a hedonistic utilitarian. #2 is an ethical question; it’s about the content of hedonistic utilitarianism.
If I intrinsically care about, say, freedom, that’s not an ethical claim. It’s just a preference. “Humans may have preferences about things other than pain/pleasure, and those preferences are morally important” is an ethical claim which I might formulate, about that preference that I have.
Hedonistic utilitarianism tells me that my aforementioned preference is incoherent or mistaken, and that in fact I do not have any preferences (or any preferences that are morally important or worth caring about) other than preferences about pleasure/pain.
Moral realism (which, as blacktrance correctly notes, is implied by any utilitarianism) may lead a hedonistic utilitarian to say that my aforementioned ethical claim is incorrect.
As for your scenarios, I’m not sure what you meant by listing them. My point was that my scenario, which describes a situation involving a hypothetical me, Said Achmiz, would be bothersome to me, Said Achmiz. Is it really not clear why it would be?
If I intrinsically care about, say, freedom, that’s not an ethical claim. It’s just a preference. [...]
Hedonistic utilitarianism tells me that my aforementioned preference is incoherent or mistaken, and that in fact I do not have any preferences (or any preferences that are morally important or worth caring about) other than preferences about pleasure/pain.
Ethical subjectivism (which I subscribe to) would say that “ethical claims” are just a specific subset of our preferences; indeed, I’m rather skeptical of the notion of there being a distinction between ethical claims and preferences in the first place. But HU wouldn’t necessarily say that someone’s preference for something other than pleasure or pain would be mistaken—if it’s interpreted within a subjectivist framework, HU is just a description of preferences that are different. See my response to blacktrance.
But HU wouldn’t necessarily say that someone’s preference for something other than pleasure or pain would be mistaken—if it’s interpreted within a subjectivist framework, HU is just a description of preferences that are different.
I really don’t think that this is correct. If this were true, first of all, hedonistic utilitarianism would simply reduce to preference utilitarianism. In actual fact, neither view is merely about one’s own terminal values.
If someone, personally, cares only about pain and pleasure, but acknowledges that other people may have other things as terminal values, and thinks that The Good lies in satisfying everyone’s preferences maximally — which, for themselves, means maximizing pleasure and minimizing pain, and for other people may mean other things — then that person is not a hedonistic utilitarian. They are a preference utilitarian. Referring to them as an HU is simply not correct, because that’s not how the term is used in the philosophical literature.
On the other hand, if someone cares only about pain and pleasure — both their own and other people’s — and would prefer that everyone’s pleasure be maximized and everyone’s pain be minimized; but this person is not a moral realist, and has no opinion on what constitutes The Good or thinks there’s no fact of the matter about whether an act is right or wrong; well, then this person is not a utilitarian at all. Again, describing this person as a hedonistic or any other kind of utilitarian completely fails to match up with how the term is used in the philosophical literature.
As for ethical subjectivism — uh, I don’t think that’s an actual thing. I’d not heard of anything by that name until today. I don’t like going by wikipedia’s definitions of philosophical principles, so I tried tracking it down to a source, such as perhaps a major philosopher espousing the view or at least describing it coherently. No such luck. Take a look at that list of references on its wikipedia page; two are to a single book (written in 1959 by some guy I’ve never heard of — have you? — and the shortness of whose wikipedia page suggests that he wasn’t anyone interesting), and one is to a barely-related page that mentions the thing once, in passing, by a different name. I’m not convinced. As best I can tell, it’s a label that some modern-day historians of philosophy have used to describe… a not-quite-consistent family of views. (Divine command theory, for one.)
But let’s attempt to take it at face value. You say:
Someone could be an ethical subjectivist and say that utilitarianism is the theory that best describes their particular attitudes, or at least that subset of their attitudes that they endorse.
Very well. Are their attitudes correct, do they think? If they say there’s no fact of the matter about that, then they’re not a utilitarian. “Utilitarianism” is a quite established term in the literature. You can’t just apply it to any old thing.
Of course, this is Lesswrong; we don’t argue about definitions; we’re interested in what people actually think. However in this case I think getting our terms straight is important, for two reasons:
1. When most people say they’re utilitarians, they mean it in the usual sense, I think. So to understand what’s going on in these discussions, and in the heads of the people we’re talking to, we need to know what the usual sense is.
2. If you hold some view which is not one of the usual views with commonly-known terms, you shouldn’t call it by one of the commonly-known terms, because then I won’t have any idea what you’re talking about and we’ll keep getting into comment threads like this one.
On the other hand, if someone cares only about pain and pleasure — both their own and other people’s — and would prefer that everyone’s pleasure be maximized and everyone’s pain be minimized; but this person is not a moral realist, and has no opinion on what constitutes The Good or thinks there’s no fact of the matter about whether an act is right or wrong; well, then this person is not a utilitarian at all. Again, describing this person as a hedonistic or any other kind of utilitarian completely fails to match up with how the term is used in the philosophical literature.
You may be right to say that my use of “utilitarian” is different from how it’s conventionally used in the literature
… though, I just looked at the SEP entry on Consequentialism, and I note that aside from the title of one book in the bibliography, nowhere in the article is the word “realism” even mentioned. Nor does there seem to be an entry in the list of claims making up classic utilitarianism that would seem to require moral realism. I guess you could kind of interpret one of these three conditions as requiring moral realism:
Universal Consequentialism = moral rightness depends on the consequences for all people or sentient beings (as opposed to only the individual agent, members of the individual’s society, present people, or any other limited group).
Equal Consideration = in determining moral rightness, benefits to one person matter just as much as similar benefits to any other person (= all who count count equally).
Agent-neutrality = whether some consequences are better than others does not depend on whether the consequences are evaluated from the perspective of the agent (as opposed to an observer).
… but it doesn’t seem obvious to me why someone who was an ethical subjectivist couldn’t say: “I’m a classical utilitarian, in that (among other things) the best description of my ethical system is that I think that the goodness of an action should be determined based on how it affects all sentient beings, that benefits to one person matter just as much as similar benefits to others, and that the perspective of the people evaluating the consequences doesn’t matter. Though of course others could have ethical systems that were not well described by these items, and that wouldn’t make them wrong.”
Or maybe the important part in your comment was the part “...but this person is not a moral realist, and has no opinion on what constitutes The Good”? But a subjectivist doesn’t say that he has no opinion on what constitutes The Good: he definitely has an opinion, and there may clearly be a right and wrong answer with regard to the kinds of actions that are implied by his personal moral system; it’s just that the thing that constitutes The Good will be different for people with different moral systems.
Consequentialism supplies a realist ontology, since its goods are facts about the real world, and utilitarianism supplies an objective epistemology, since different utilitarians of the same stripe can converge. That adds up to some of the ingredients of realism, but not all of them. What is specifically lacking is a justification of consequentialist ends as being objectively good, and not just subjectively desirable.
Consequentialism supplies a realist ontology, since its goods are facts about the real world,
For this to make it realist, it would also have to be a mind-independent fact that those facts have value. Even subjectivists typically value facts about the external world (e.g. their pleasure).
Ethical subjectivism is also discussed in the Stanford Encyclopedia of Philosophy.
(I like this quote from that article, btw: “So many debates in philosophy revolve around the issue of objectivity versus subjectivity that one may be forgiven for assuming that someone somewhere understands this distinction.”)
You may be right to say that my use of “utilitarian” is different from how it’s conventionally used in the literature; I’m pretty unfamiliar with the actual ethical literature. But if we have people who have the attitude of “I want to take the kinds of actions that maximally increase pleasure and maximally reduce suffering and I’m a moral realist” and people who have the attitude of “I want to take the kinds of actions that maximally increase pleasure and maximally reduce suffering and I’m a moral non-realist”, then it feels a little odd to have different terms for them, given that they probably have more in common with each other (with regard to the actions that they take and the views that they hold) than e.g. two people who are both moral realists but differ on consequentialism vs. deontology.
At least in a context where we are trying to categorize people into different camps based on what they think we should actually do, it would seem to make sense if we just called both the moral realist and moral non-realist “utilitarians”, if they both fit the description of a utilitarian otherwise.
Hedonistic utilitarians, however, do not acknowledge that it’s possible, or that it’s valid, to care about things that are not pain or pleasure. All these people who claim to care about all sorts of other things must be misguided!
I don’t think that hedonistic utilitarianism necessarily implies moral realism. Some HUs will certainly tell you that the people who morally disagree with them are misguided, but I don’t see why the proportion of HUs who think so (vs. the proportion of HUs who think that you are simply caring about different things) would need to be any different than it would be among the adherents of any other ethical position.
Maybe you meant your comment to refer specifically to the kinds of HUs who would impose their position on you, but even then the moral realism doesn’t follow. You can want to impose your values on others despite thinking that values are just questions of opinion. For instance, there are things that I consider basic human rights and I want to impose the requirement to respect them on every member of every society, even though there are people who would disagree with that requirement. I don’t think that the people who disagree are misguided in any sense, I just think that they value different things.
I agree with blacktrance’s reply to you, and also see my reply to tog in a different subthread for some commentary. However, I’m sufficiently unsure of what you’re saying that I can’t be certain your comment is fully answered by either of those things. For example:
HUs who think that you are simply caring about different things
If you [the hypothetical you] think that it’s possible to care (intrinsically, i.e. terminally) about things other than pain and pleasure, then I’m not quite sure how you can remain a hedonistic utilitarian. You’d have to say something like: “Yes, many people intrinsically value all sorts of things, but those preferences are morally irrelevant, and it is ok to frustrate those preferences as much as necessary, in order to minimize pain and maximize pleasure.” You would, in other words, have to endorse a world where all the things that people value are mercilessly destroyed, and the things they most abhor and despise come to pass, if only this world had the most pleasure and least pain.
Now, granted, people sometimes endorse the strangest things, and I wouldn’t even be surprised to find someone on Lesswrong who held such a view, but then again I never claimed otherwise. What I said was that I should hope those people do not impose such a worldview on me.
If I’ve misinterpreted your comment and thereby failed to address your points, apologies; please clarify.
If you [the hypothetical you] think that it’s possible to care (intrinsically, i.e. terminally) about things other than pain and pleasure, then I’m not quite sure how you can remain a hedonistic utilitarian.
Well, if you’re really curious about how one could be a hedonistic utilitarian while also thinking that it’s possible to care intrinsically about things other than pain and pleasure, one could think something like:
“So there’s this confusing concept called ‘preferences’ that seems to be a general term for all kinds of things that affect our behavior, or mental states, or both. Probably not all the things that affect our behavior are morally important: for instance, a reflex action is a thing in a person’s nervous system that causes them to act in a certain way in certain situations, so you could kind of call that a preference to act in such a way in such a situation, but it still doesn’t seem like a morally important one.
“So what does make a preference morally important? If we define a preference as ‘an internal disposition that affects the choices that you make’, it seems like there would exist two kinds of preferences. First there are the ones that just cause a person to do things, but which don’t necessarily cause any feelings of pleasure or pain. Reflexes and automated habits, for instance. These don’t feel like they’d be worth moral consideration any more than the automatic decisions made by a computer program would.
“But then there’s the second category of preferences, ones that cause pleasure when they are satisfied, suffering when they are frustrated, or both. It feels like pleasure is a good thing and suffering is a bad thing, so that makes it good to satisfy the kinds of preferences that produce pleasure when satisfied, as well as bad to frustrate the kinds of preferences that cause suffering when frustrated. Aha! Now I seem to have found a reasonable guideline for the kinds of preferences that I should care about. And of course this goes for higher-order preferences as well: if someone cares about X, then trying to change that preference would be a bad thing if they had a preference to continue caring about X, such that they would feel bad if someone tried to change their caring about X.
“And of course people can have various intrinsic preferences for things, which can mean that they do things even though that doesn’t produce them any suffering or pleasure. Or it can mean that doing something gives them pleasure or lets them avoid suffering by itself, even when doing that something doesn’t lead to any other consequence. The first kind of intrinsic preference I already concluded was morally irrelevant; the second kind is worth respecting, again because violating it would cause suffering, or reduce pleasure, or both. And I get tired of saying something clumsy like ‘increasing pleasure and decreasing suffering’ all the time, so let’s just call that ‘increasing well-being’ for short.
“Now unfortunately people have lots of different intrinsic preferences, and they often conflict. We can’t satisfy them all, as nice as it would be, so I have to choose my side. Since I chose my favored preferences on the basis that pleasure is good and suffering is bad, it would make sense to side with the preferences that, in the long term, produce the greatest amount of well-being in the world. For instance, some people may want the freedom to lie and cheat and murder, whereas other people want to have a peaceful and well-organized society. I think the preferences for living in peace will lead to greater well-being in the long term, so I will side with them, even if that means that the preferences of the sociopaths and murderers will be frustrated.
“Now there’s also this kind of inconvenient issue: if we rewire people’s brains so that they’ll always experience the maximal amount of pleasure, then that will produce more well-being in the long run, even if those people don’t currently want to have their brains rewired. I previously concluded that I should side with the kinds of preferences that produce the greatest amount of well-being in the world, and the preference of ‘let’s rewire everyone’s brains’ does seem to produce by far the greatest amount of well-being in the world. So I should side with that preference, even though it goes against the intrinsic preferences of a lot of other people; but then, so did the decision to impose a lawful and peaceful society on the sociopaths and murderers, so that’s okay by me.
“Of course, other people may disagree, since they care about different things than pain and pleasure. And they’re not any more or less right—they just have different criteria for what counts as a moral action. But if it’s either them imposing their worldview on me, or me imposing my worldview on them, well, I’d rather have it be me imposing mine on them.”
I wouldn’t even be surprised to find someone on Lesswrong who held such a view, but then again I never claimed otherwise. What I said was that I should hope those people do not impose such a worldview on me.
Right, I wasn’t objecting to your statement of not wanting to have such a worldview imposed on you. I was only objecting to the statement that hedonistic utilitarians would necessarily have to think that others were misguided in some sense.
Any form of utilitarianism implies moral realism, as utilitarianism is a normative ethical theory and normative ethical theories presuppose moral realism.
I feel that this discussion is rapidly descending into a debate over definitions, but as a counter-example, take ethical subjectivism, which is a form of moral non-realism and which Wikipedia defines as claiming that:
1. Ethical sentences express propositions.
2. Some such propositions are true.
3. Those propositions are about the attitudes of people.
Someone could be an ethical subjectivist and say that utilitarianism is the theory that best describes their particular attitudes, or at least that subset of their attitudes that they endorse.
Someone could be an ethical subjectivist and want to maximize world utility, but such a person would not be a utilitarian, because utilitarianism holds that other people should maximize world utility. If you merely say “I want to maximize world utility and others to do the same”, that is not utilitarianism—a utilitarian would say that you ought to maximize world utility, even if you don’t want to, and it’s not a matter of attitudes. Yes, this is arguing over definitions to some extent, but it’s important because I often see this kind of confusion about utilitarianism on LW.
Could you provide a reference for that? At least the SEP entry on the topic doesn’t clearly state this. I’m also unsure of what difference this makes in practice—I guess we could come up with a new word for all the people who are both moral antirealist and utilitarian-aside-for-being-moral-antirealists, but I’m not sure if the difference in their behavior and beliefs is large enough for that to be worth it.
The SEP entry for consequentialism says it “is the view that normative properties depend only on consequences”, implying a belief in normative properties, which means moral realism.
If you want to describe people’s actions, a utilitarian and a world-utility-maximizing non-realist would act similarly, but there would be differences in attitude: a utilitarian would say and feel like he is doing the morally right thing and those who disagree with him are in error, whereas the non-realist would merely feel like he is doing what he wants and that there is nothing special about wanting to maximize world utility—to him, it’s just another preference, like collecting stamps or eating ice cream.
A non-consequentialist could be a moral realist as well, such as if they were a deontologist, so it’s not a good measure.
Also, consequentialism and moral realism aren’t always well-defined terms.
Edit: That survey’s results are strange. Twenty people answered that they’re moral realists but non-cognitivists, though moral realism is necessarily cognitivist.
If you say “all that matters is pain and pleasure”, and I say “no! I care about other things!”, and you’re like “nope, not listening. PAIN AND PLEASURE ARE THE ONLY THINGS”, and then proceed to enact policies which minimize pain and maximize pleasure, without regard for any of the other things that I care about, and all the while I’m telling you that no, I care about these other things! Stop ignoring them! Other things matter to me! but you’re not listening because you’ve decided that only pain and pleasure can possibly matter to anyone, despite my protestations otherwise...
… well, I hope you can see how that would bother me.
It’s not just a matter of us caring about different things. If it were only that, we could acknowledge the fact, and proceed to some sort of compromise. Hedonistic utilitiarians, however, do not acknowledge that it’s possible, or that it’s valid, to care about things that are not pain or pleasure. All these people who claim to care about all sorts of other things must be misguided! Clearly.
They may think it’s incorrect if they’re realists, or cognitivists of some other form. But this has nothing to do with their being HUs, only with their being cognitivists.
Here are 3 non-exhaustive ways in which the situation you described could be bothersome:
(i) If your first order ethical theory (as opposed to your meta-ethics), perhaps combined with very plausible facts about human nature, requires otherwise. For instance if it speaks in favour of toleration or liberty here.
(ii) If you’re a cognitivist of the sort who thinks she could be wrong, it could increase your credence that you’re wrong.
(iii) If you’d at least on reflection give weight to the evident distress SaidAchmiz feels in this scenario, as most HUs would.
No, I don’t think this is right. I think you (and Kaj_Sotala) are confusing these two questions:
Is it correct to hold an ethical view that is something other than hedonistic utilitarianism?
Does it make any sense to intrinsically value anything other than pleasure, or intrinsically disvalue things other than pain?
#1 is a meta-ethical question; moral realism or cognitivism may lead you to answer “no”, if you’re a hedonistic utilitarian. #2 is an ethical question; it’s about the content of hedonistic utilitarianism.
If I intrinsically care about, say, freedom, that’s not an ethical claim. It’s just a preference. “Humans may have preferences about things other than pain/pleasure, and those preferences are morally important” is an ethical claim which I might formulate, about that preference that I have.
Hedonistic utilitarianism tells me that my aforementioned preference is incoherent or mistaken, and that in fact I do not have any preferences (or any preferences that are morally important or worth caring about) other than preferences about pleasure/pain.
Moral realism (which, as blacktrance correctly notes, is implied by any utilitarianism) may lead a hedonistic utilitarian to say that my aforementioned ethical claim is incorrect.
As for your scenarios, I’m not sure what you meant by listing them. My point was that my scenario, which describes a situation involving a hypothetical me, Said Achmiz, would be bothersome to me, Said Achmiz. Is it really not clear why it would be?
Ethical subjectivism (which I subscribe to) would say that “ethical claims” are just a specific subset of our preferences; indeed, I’m rather skeptical of the notion of there being a distinction between ethical claims and preferences in the first place. But HU wouldn’t necessarily say that someone’s preference for something else than pleasure or pain would be mistaken—if it’s interpreted within a subjectivist framework, HU is just a description of preferences that are different. See my response to blacktrance.
I really don’t think that this is correct. If this were true, first of all, hedonistic utilitarianism would simply reduce to preference utilitarianism. In actual fact, neither view is merely about one’s own terminal values.
If someone, personally, cares only about pain and pleasure, but acknowledges that other people may have other things as terminal values, and thinks that The Good lies in satisfying everyone’s preferences maximally — which, for themselves, means maximizing pleasure and minimizing pain, and for other people may mean other things — then that person is not a hedonistic utilitarian. They are a preference utilitarian. Referring to them as an HU is simply not correct, because that’s not how the term is used in the philosophical literature.
On the other hand, if someone cares only about pain and pleasure — both theirs and other peoples’ — and would prefer that everyone’s pleasure be maximized and everyone’s pain be minimized; but this person is not a moral realist, and has no opinion on what constitutes The Good or thinks there’s no fact of the matter about whether an act is right or wrong; well, then this person is not a utilitarian at all. Again, describing this person as a hedonistic or any other kind of utilitarian completely fails to match up with how the term is used in the philosophical literature.
As for ethical subjectivism — uh, I don’t think that’s an actual thing. I’d not heard of anything by that name until today. I don’t like going by wikipedia’s definitions of philosophical principles, so I tried tracking it down to a source, such as perhaps a major philosopher espousing the view or at least describing it coherently. No such luck. Take a look at that list of references on its wikipedia page; two are to a single book (written in 1959 by some guy I’ve never heard of — have you? — and the shortness whose wikipedia page suggests that he wasn’t anyone interesting), and one is to a barely-related page that mentions the thing once, in passing, by a different name. I’m not convinced. As best I can tell, it’s a label that some modern-day historians of philosophy have used to describe… a not-quite-consistent family of views. (Divine command theory, for one.)
But let’s attempt to take it at face value. You say:
Very well. Are their attitudes correct, do they think? If they say there’s no fact of the matter about that, then they’re not a utilitiarian. “Utilitiarianism” is a quite established term in the literature. You can’t just apply it to any old thing.
Of course, this is Lesswrong; we don’t argue about definitions; we’re interested in what people actually think. However in this case I think getting our terms straight is important, for two reasons:
When most people say they’re utilitarians, they mean it in the usual sense, I think. So to understand what’s going on in these discussions, and in the heads of the people we’re talking to, we need to know what is the usual sense.
If you hold some view which is not one of the usual views with commonly-known terms, you shouldn’t call it by one of the commonly-known terms, because then I won’t have any idea what you’re talking about and we’ll keep getting into comment threads like this one.
… though, I just looked at the SEP entry on Consequentialism, and I note that aside for the title of one book in the bibliography, nowhere in the article is the word “realism” even mentioned. Nor does there seem to be an entry in the list of claims making up classic utilitarianism that would seem to require moral realism. I guess you could kind of interpret one of these three conditions as requiring moral realism:
… but it doesn’t seem obvious to me why someone who was both an ethical subjectivist couldn’t say that “I’m a classical utiliarian, in that (among other things) the best description of my ethical system is that I think that the goodness of an action should be determined based on how it affects all sentient beings, that benefits to one person matter just as much as similar benefits to others, and that the perspective of the people evaluating the consequences doesn’t matter. Though of course others could have ethical systems that were not well described by these items, and that wouldn’t make them wrong”.
Or maybe the important part in your comment was the part ”...but this person is not a moral realist, and has no opinion on what constitutes The Good”? But a subjectivist doesn’t say that he has no opinion on what constitutes The Good: he definitely has an opinion, and there may clearly be a right and wrong answer with regard to the kind of actions that are implied by his personal moral system; it’s just that the thing that constitutes The Good will be different for people with different moral systems.
Consequentialism supplies a realistic ontology, since its goods are facts about the real world, and utilitarianism supplies an objective epistemology, since different utilitarians of the same stripe can converge. That adds up to some of the ingredients of realism, but not all of them. What is specifically lacking is a justification of consequentialist ends as being objectively good, and not just subjectively desirable.
For this to make it realist, the fact that the truth of those facts has value would also have to be mind-independent. Even subjectivists typically value facts about the external world (e.g. their pleasure).
Ethical subjectivism is also discussed in the Stanford Encyclopedia of Philosophy.
(I like this quote from that article, btw: “So many debates in philosophy revolve around the issue of objectivity versus subjectivity that one may be forgiven for assuming that someone somewhere understands this distinction.”)
You may be right to say that my use of “utilitarian” is different from how it’s conventionally used in the literature; I’m pretty unfamiliar with the actual ethical literature. But if we have people who have the attitude of “I want to take the kinds of actions that maximally increase pleasure and maximally reduce suffering and I’m a moral realist” and people who have the attitude of “I want to take the kinds of actions that maximally increase pleasure and maximally reduce suffering and I’m a moral non-realist”, then it feels a little odd to have different terms for them, given that they probably have more in common with each other (with regard to the actions that they take and the views that they hold) than e.g. two people who are both moral realists but differ on consequentialism vs. deontology.
At least in a context where we are trying to categorize people into different camps based on what they think we should actually do, it would seem to make sense if we just called both the moral realist and moral non-realist “utilitarians”, if they both fit the description of a utilitarian otherwise.
I don’t think that hedonistic utilitarianism necessarily implies moral realism. Some HUs will certainly tell you that the people who morally disagree with them are misguided, but I don’t see why the proportion of HUs who think so (vs. the proportion of HUs who think that you are simply caring about different things) would need to be any different than it would be among the adherents of any other ethical position.
Maybe you meant your comment to refer specifically to the kinds of HUs who would impose their position on you, but even then the moral realism doesn’t follow. You can want to impose your values on others despite thinking that values are just questions of opinion. For instance, there are things that I consider basic human rights and I want to impose the requirement to respect them on every member of every society, even though there are people who would disagree with that requirement. I don’t think that the people who disagree are misguided in any sense, I just think that they value different things.
I agree with blacktrance’s reply to you, and also see my reply to tog in a different subthread for some commentary. However, I’m sufficiently unsure of what you’re saying that I can’t be certain your comment is fully answered by either of those things. For example:
If you [the hypothetical you] think that it’s possible to care (intrinsically, i.e. terminally) about things other than pain and pleasure, then I’m not quite sure how you can remain a hedonistic utilitarian. You’d have to say something like: “Yes, many people intrinsically value all sorts of things, but those preferences are morally irrelevant, and it is ok to frustrate those preferences as much as necessary, in order to minimize pain and maximize pleasure.” You would, in other words, have to endorse a world where all the things that people value are mercilessly destroyed, and the things they most abhor and despise come to pass, if only this world had the most pleasure and least pain.
Now, granted, people sometimes endorse the strangest things, and I wouldn’t even be surprised to find someone on LessWrong who held such a view, but then again I never claimed otherwise. What I said was that I should hope those people do not impose such a worldview on me.
If I’ve misinterpreted your comment and thereby failed to address your points, apologies; please clarify.
Well, if you’re really curious about how one could be a hedonistic utilitarian while also thinking that it’s possible to care intrinsically about things other than pain and pleasure, one could think something like:
“So there’s this confusing concept called ‘preferences’ that seems to be a general term for all kinds of things that affect our behavior, or mental states, or both. Probably not all the things that affect our behavior are morally important: for instance, a reflex action is a thing in a person’s nervous system that causes them to act in a certain way in certain situations, so you could kind of call that a preference to act in such a way in such a situation, but it still doesn’t seem like a morally important one.
“So what does make a preference morally important? If we define a preference as ‘an internal disposition that affects the choices that you make’, it seems like there would exist two kinds of preferences. First there are the ones that just cause a person to do things, but which don’t necessarily cause any feelings of pleasure or pain. Reflexes and automated habits, for instance. These don’t feel like they’d be worth moral consideration any more than the automatic decisions made by a computer program would.
“But then there’s the second category of preferences, ones that cause pleasure when they are satisfied, suffering when they are frustrated, or both. It feels like pleasure is a good thing and suffering is a bad thing, so that makes it good to satisfy the kinds of preferences that produce pleasure when satisfied, as well as bad to frustrate the kinds of preferences that cause suffering when frustrated. Aha! Now I seem to have found a reasonable guideline for the kinds of preferences that I should care about. And of course this goes for higher-order preferences as well: if someone cares about X, then trying to change that preference would be a bad thing if they had a preference to continue caring about X, such that they would feel bad if someone tried to change their caring about X.
“And of course people can have various intrinsic preferences for things, which can mean that they do things even though that doesn’t produce them any suffering or pleasure. Or it can mean that doing something gives them pleasure or lets them avoid suffering by itself, even when doing that something doesn’t lead to any other consequence. The first kind of intrinsic preference I already concluded was morally irrelevant; the second kind is worth respecting, again because violating it would cause suffering, or reduce pleasure, or both. And I get tired of saying something clumsy like ‘increasing pleasure and decreasing suffering’ all the time, so let’s just call that ‘increasing well-being’ for short.
“Now unfortunately people have lots of different intrinsic preferences, and they often conflict. We can’t satisfy them all, as nice as it would be, so I have to choose my side. Since I chose my favored preferences on the basis that pleasure is good and suffering is bad, it would make sense to side with the preferences that, in the long term, produce the greatest amount of well-being in the world. For instance, some people may want the freedom to lie and cheat and murder, whereas other people want to have a peaceful and well-organized society. I think the preferences for living in peace will lead to greater well-being in the long term, so I will side with them, even if that means that the preferences of the sociopaths and murderers will be frustrated.
“Now there’s also this kind of inconvenient issue that if we rewire people’s brains so that they’ll always experience the maximal amount of pleasure, then that will produce more well-being in the long run, even if those people don’t currently want to have their brains rewired. I previously concluded that I should side with the kinds of preferences that produce the greatest amount of well-being in the world, and the preference of ‘let’s rewire everyone’s brains’ does seem to produce by far the greatest amount of well-being in the world. So I should side with that preference, even though it goes against the intrinsic preferences of a lot of other people. But so did the decision to impose a lawful and peaceful society on the sociopaths and murderers, so that’s okay by me.
“Of course, other people may disagree, since they care about different things than pain and pleasure. And they’re not any more or less right—they just have different criteria for what counts as a moral action. But if it’s either them imposing their worldview on me, or me imposing my worldview on them, well, I’d rather have it be me imposing mine on them.”
Right, I wasn’t objecting to your statement of not wanting to have such a worldview imposed on you. I was only objecting to the statement that hedonistic utilitarians would necessarily have to think that others were misguided in some sense.
Any form of utilitarianism implies moral realism, as utilitarianism is a normative ethical theory and normative ethical theories presuppose moral realism.
I feel that this discussion is rapidly descending into a debate over definitions, but as a counter-example, take ethical subjectivism, which is a form of moral non-realism and which Wikipedia defines as claiming that:
Someone could be an ethical subjectivist and say that utilitarianism is the theory that best describes their particular attitudes, or at least that subset of their attitudes that they endorse.
Someone could be an ethical subjectivist and want to maximize world utility, but such a person would not be a utilitarian, because utilitarianism holds that other people should maximize world utility. If you merely say “I want to maximize world utility and others to do the same”, that is not utilitarianism—a utilitarian would say that you ought to maximize world utility, even if you don’t want to, and it’s not a matter of attitudes. Yes, this is arguing over definitions to some extent, but it’s important because I often see this kind of confusion about utilitarianism on LW.
Could you provide a reference for that? At least the SEP entry on the topic doesn’t clearly state this. I’m also unsure of what difference this makes in practice—I guess we could come up with a new word for all the people who are both moral antirealists and utilitarians in every other respect, but I’m not sure if the difference in their behavior and beliefs is large enough for that to be worth it.
Non-egoistic subjectivists?
The SEP entry for consequentialism says it “is the view that normative properties depend only on consequences”, implying a belief in normative properties, which means moral realism.
If you want to describe people’s actions, a utilitarian and a world-utility-maximizing non-realist would act similarly, but there would be differences in attitude: a utilitarian would say and feel like he is doing the morally right thing and those who disagree with him are in error, whereas the non-realist would merely feel like he is doing what he wants and that there is nothing special about wanting to maximize world utility—to him, it’s just another preference, like collecting stamps or eating ice cream.
This is getting way too much into a debate over definitions so I’ll stop after this comment, but I’ll just point out that, among professional philosophers, there is no correlation between endorsing consequentialism and endorsing moral realism.
A non-consequentialist could be a moral realist as well, such as if they were a deontologist, so it’s not a good measurement.
Also, consequentialism and moral realism aren’t always well-defined terms.
Edit: That survey’s results are strange. Twenty people answered that they’re moral realists but non-cognitivists, though moral realism is necessarily cognitivist.
That doesn’t mean utilitarianism is subjective. Rather, it means any subjective idea could correspond to objective truth.