If you [the hypothetical you] think that it’s possible to care (intrinsically, i.e. terminally) about things other than pain and pleasure, then I’m not quite sure how you can remain a hedonistic utilitarian.
Well, if you’re really curious about how one could be a hedonistic utilitarian while also thinking that it’s possible to care intrinsically about things other than pain and pleasure, one could think something like:
“So there’s this confusing concept called ‘preferences’ that seems to be a general term for all kinds of things that affect our behavior, or mental states, or both. Probably not all the things that affect our behavior are morally important: for instance, a reflex action is a thing in a person’s nervous system that causes them to act in a certain way in certain situations, so you could kind of call that a preference to act in such a way in such a situation, but it still doesn’t seem like a morally important one.
“So what does make a preference morally important? If we define a preference as ‘an internal disposition that affects the choices that you make’, it seems like there would exist two kinds of preferences. First there are the ones that just cause a person to do things, but which don’t necessarily cause any feelings of pleasure or pain. Reflexes and automated habits, for instance. These don’t feel like they’d be worth moral consideration any more than the automatic decisions made by a computer program would.
“But then there’s the second category of preferences, ones that cause pleasure when they are satisfied, suffering when they are frustrated, or both. It feels like pleasure is a good thing and suffering is a bad thing, so that makes it good to satisfy the kinds of preferences that produce pleasure when satisfied, as well as bad to frustrate the kinds of preferences that cause suffering when frustrated. Aha! Now I seem to have found a reasonable guideline for the kinds of preferences that I should care about. And of course this goes for higher-order preferences as well: if someone cares about X, then trying to change that preference would be a bad thing if they had a preference to continue caring about X, such that they would feel bad if someone tried to change their caring about X.
“And of course people can have various intrinsic preferences for things, which can mean that they do things even though doing so doesn’t produce them any pleasure or spare them any suffering. Or it can mean that doing something gives them pleasure, or lets them avoid suffering, in itself, even when it doesn’t lead to any other consequence. The first kind of intrinsic preference I already concluded was morally irrelevant; the second kind is worth respecting, again because violating it would cause suffering, or reduce pleasure, or both. And I get tired of saying something clumsy like ‘increasing pleasure and decreasing suffering’ all the time, so let’s just call that ‘increasing well-being’ for short.
“Now unfortunately people have lots of different intrinsic preferences, and they often conflict. We can’t satisfy them all, as nice as that would be, so I have to pick a side. Since I chose my favored preferences on the basis that pleasure is good and suffering is bad, it would make sense to side with the preferences that, in the long term, produce the greatest amount of well-being in the world. For instance, some people may want the freedom to lie and cheat and murder, whereas other people want to have a peaceful and well-organized society. I think the preferences for living in peace will lead to greater well-being in the long term, so I will side with them, even if that means that the preferences of the sociopaths and murderers will be frustrated.
“Now there’s also the somewhat inconvenient issue that if we rewire people’s brains so that they’ll always experience the maximal amount of pleasure, then that will produce more well-being in the long run, even if those people don’t currently want to have their brains rewired. I previously concluded that I should side with the kinds of preferences that produce the greatest amount of well-being in the world, and the preference of ‘let’s rewire everyone’s brains’ does seem to produce by far the greatest amount of well-being in the world. So I should side with that preference, even though it goes against the intrinsic preferences of a lot of other people. But so did the decision to impose a lawful and peaceful society on the sociopaths and murderers, so that’s okay by me.
“Of course, other people may disagree, since they care about different things than pain and pleasure. And they’re not any more or less right—they just have different criteria for what counts as a moral action. But if it’s either them imposing their worldview on me, or me imposing my worldview on them, well, I’d rather have it be me imposing mine on them.”
I wouldn’t even be surprised to find someone on Lesswrong who held such a view, but then again I never claimed otherwise. What I said was that I should hope those people do not impose such a worldview on me.
Right, I wasn’t objecting to your statement of not wanting to have such a worldview imposed on you. I was only objecting to the statement that hedonistic utilitarians would necessarily have to think that others were misguided in some sense.