Hedonistic utilitarians, however, do not acknowledge that it’s possible, or that it’s valid, to care about things that are not pain or pleasure. All these people who claim to care about all sorts of other things must be misguided!
I don’t think that hedonistic utilitarianism necessarily implies moral realism. Some HUs will certainly tell you that the people who morally disagree with them are misguided, but I don’t see why the proportion of HUs who think so (vs. the proportion of HUs who think that you are simply caring about different things) would need to be any different than it would be among the adherents of any other ethical position.
Maybe you meant your comment to refer specifically to the kinds of HUs who would impose their position on you, but even then the moral realism doesn’t follow. You can want to impose your values on others despite thinking that values are just questions of opinion. For instance, there are things that I consider basic human rights and I want to impose the requirement to respect them on every member of every society, even though there are people who would disagree with that requirement. I don’t think that the people who disagree are misguided in any sense, I just think that they value different things.
I agree with blacktrance’s reply to you; also see my reply to tog in a different subthread for some commentary. However, I’m sufficiently unsure of what you’re saying that I can’t be certain your comment is fully answered by either of those things. For example:
HUs who think that you are simply caring about different things
If you [the hypothetical you] think that it’s possible to care (intrinsically, i.e. terminally) about things other than pain and pleasure, then I’m not quite sure how you can remain a hedonistic utilitarian. You’d have to say something like: “Yes, many people intrinsically value all sorts of things, but those preferences are morally irrelevant, and it is ok to frustrate those preferences as much as necessary, in order to minimize pain and maximize pleasure.” You would, in other words, have to endorse a world where all the things that people value are mercilessly destroyed, and the things they most abhor and despise come to pass, if only this world had the most pleasure and least pain.
Now, granted, people sometimes endorse the strangest things, and I wouldn’t even be surprised to find someone on LessWrong who held such a view, but then again I never claimed otherwise. What I said was that I should hope those people do not impose such a worldview on me.
If I’ve misinterpreted your comment and thereby failed to address your points, apologies; please clarify.
If you [the hypothetical you] think that it’s possible to care (intrinsically, i.e. terminally) about things other than pain and pleasure, then I’m not quite sure how you can remain a hedonistic utilitarian.
Well, if you’re really curious about how one could be a hedonistic utilitarian while also thinking that it’s possible to care intrinsically about things other than pain and pleasure, one could think something like:
“So there’s this confusing concept called ‘preferences’ that seems to be a general term for all kinds of things that affect our behavior, or mental states, or both. Probably not all the things that affect our behavior are morally important: for instance, a reflex action is a thing in a person’s nervous system that causes them to act in a certain way in certain situations, so you could kind of call that a preference to act in such a way in such a situation, but it still doesn’t seem like a morally important one.
“So what does make a preference morally important? If we define a preference as ‘an internal disposition that affects the choices that you make’, it seems like there would exist two kinds of preferences. First there are the ones that just cause a person to do things, but which don’t necessarily cause any feelings of pleasure or pain. Reflexes and automated habits, for instance. These don’t feel like they’d be worth moral consideration any more than the automatic decisions made by a computer program would.
“But then there’s the second category of preferences, ones that cause pleasure when they are satisfied, suffering when they are frustrated, or both. It feels like pleasure is a good thing and suffering is a bad thing, so that makes it good to satisfy the kinds of preferences that produce pleasure when satisfied, as well as bad to frustrate the kinds of preferences that cause suffering when frustrated. Aha! Now I seem to have found a reasonable guideline for the kinds of preferences that I should care about. And of course this goes for higher-order preferences as well: if someone cares about X, then trying to change that preference would be a bad thing if they had a preference to continue caring about X, such that they would feel bad if someone tried to change their caring about X.
“And of course people can have various intrinsic preferences for things, which can mean that they do things even though doing so doesn’t produce any suffering or pleasure for them. Or it can mean that doing something gives them pleasure or lets them avoid suffering in itself, even when doing that something doesn’t lead to any other consequence. The first kind of intrinsic preference I already concluded was morally irrelevant; the second kind is worth respecting, again because violating it would cause suffering, or reduce pleasure, or both. And I get tired of saying something clumsy like ‘increasing pleasure and decreasing suffering’ all the time, so let’s just call that ‘increasing well-being’ for short.
“Now unfortunately people have lots of different intrinsic preferences, and they often conflict. We can’t satisfy them all, as nice as it would be, so I have to choose my side. Since I chose my favored preferences on the basis that pleasure is good and suffering is bad, it would make sense to side with the preferences that, in the long term, produce the greatest amount of well-being in the world. For instance, some people may want the freedom to lie and cheat and murder, whereas other people want to have a peaceful and well-organized society. I think the preferences for living in peace will lead to greater well-being in the long term, so I will side with them, even if that means that the preferences of the sociopaths and murderers will be frustrated.
“Now there’s also this kind of inconvenient issue that if we rewire people’s brains so that they’ll always experience the maximal amount of pleasure, then that will produce more well-being in the long run, even if those people don’t currently want to have their brains rewired. I previously concluded that I should side with the kinds of preferences that produce the greatest amount of well-being in the world, and the preference of ‘let’s rewire everyone’s brains’ does seem to produce by far the greatest amount of well-being in the world. So I should side with that preference, even though it goes against the intrinsic preferences of a lot of other people; but then, so did the decision to impose a lawful and peaceful society on the sociopaths and murderers, so that’s okay by me.
“Of course, other people may disagree, since they care about different things than pain and pleasure. And they’re not any more or less right—they just have different criteria for what counts as a moral action. But if it’s either them imposing their worldview on me, or me imposing my worldview on them, well, I’d rather have it be me imposing mine on them.”
I wouldn’t even be surprised to find someone on LessWrong who held such a view, but then again I never claimed otherwise. What I said was that I should hope those people do not impose such a worldview on me.
Right, I wasn’t objecting to your statement of not wanting to have such a worldview imposed on you. I was only objecting to the statement that hedonistic utilitarians would necessarily have to think that others were misguided in some sense.
Any form of utilitarianism implies moral realism, as utilitarianism is a normative ethical theory and normative ethical theories presuppose moral realism.
I feel that this discussion is rapidly descending into a debate over definitions, but as a counter-example, take ethical subjectivism, which is a form of moral non-realism and which Wikipedia defines as claiming that:
Ethical sentences express propositions.
Some such propositions are true.
Those propositions are about the attitudes of people.
Someone could be an ethical subjectivist and say that utilitarianism is the theory that best describes their particular attitudes, or at least that subset of their attitudes that they endorse.
Someone could be an ethical subjectivist and want to maximize world utility, but such a person would not be a utilitarian, because utilitarianism holds that other people should maximize world utility. If you merely say “I want to maximize world utility and others to do the same”, that is not utilitarianism—a utilitarian would say that you ought to maximize world utility, even if you don’t want to, and it’s not a matter of attitudes. Yes, this is arguing over definitions to some extent, but it’s important because I often see this kind of confusion about utilitarianism on LW.
Could you provide a reference for that? At least the SEP entry on the topic doesn’t clearly state this. I’m also unsure of what difference this makes in practice—I guess we could come up with a new word for all the people who are both moral antirealists and utilitarians-aside-from-being-moral-antirealists, but I’m not sure if the difference in their behavior and beliefs is large enough for that to be worth it.
Non-egoistic subjectivists?
The SEP entry for consequentialism says it “is the view that normative properties depend only on consequences”, implying a belief in normative properties, which means moral realism.
If you want to describe people’s actions, a utilitarian and a world-utility-maximizing non-realist would act similarly, but there would be differences in attitude: a utilitarian would say and feel like he is doing the morally right thing and those who disagree with him are in error, whereas the non-realist would merely feel like he is doing what he wants and that there is nothing special about wanting to maximize world utility—to him, it’s just another preference, like collecting stamps or eating ice cream.
This is getting way too much into a debate over definitions so I’ll stop after this comment, but I’ll just point out that, among professional philosophers, there is no correlation between endorsing consequentialism and endorsing moral realism.
A non-consequentialist could be a moral realist as well, such as if they were a deontologist, so it’s not a good measurement.
Also, consequentialism and moral realism aren’t always well-defined terms.
Edit: That survey’s results are strange. Twenty people answered that they’re moral realists but non-cognitivists, though moral realism is necessarily cognitivist.
That doesn’t mean utilitarianism is subjective. Rather, it means any subjective idea could correspond to objective truth.