There isn’t a standard utilitarian position on such dilemmas, because there is no such thing as standard utilitarianism. Utilitarianism is a meta-ethical system, not an ethical system. It specifies the general framework by which you think about morality, but not the details.
There are plenty of variations of utilitarianism—negative or positive utilitarianism, average or total utilitarianism, and so on. And there is nothing to prevent you from specifying that, in your utility function, your family members are treated preferentially to everybody else.
Utilitarianism is an incompletely specified ethical (not meta-ethical) system, but part of what it does specify is that everyone gets equal weight. If you’re treating your family members preferentially, you may be maximizing your utility, but you’re not following “utilitarianism” in that word’s standard meaning.
The SEP agrees with you:
[...] classic utilitarianism is actually a complex combination of many distinct claims, including the following claims about the moral rightness of acts:
[...] Equal Consideration = in determining moral rightness, benefits to one person matter just as much as similar benefits to any other person (= all who count count equally).
I’d put a slight gloss on this.
The problem is that “utilitarianism”, as used in much of the literature, does seem to have more than one standard meaning. In the narrow (classical) utilitarian sense, steven0461 and the SEP are absolutely right to insist that it imposes equal weights. However, there’s definitely a literature that uses the term in a more general sense, which includes weighted utilitarianism as a possibility. Contra Kaj, however, even this sense does seem to exclude agent-relative weights.
As much of this literature is in economics, perhaps it’s non-standard in philosophy. It does, however, have a fairly long pedigree.
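To make that distinction concrete, here is a minimal sketch in Python (the people, utility numbers, and weights are all invented for illustration and aren’t drawn from any of the literature under discussion): an equal-weight sum, a fixed-weight sum in the broad “weighted utilitarianism” sense, and an agent-relative sum that gives the evaluator’s own family extra weight simply because it is theirs.

```python
# Illustrative sketch only: names, utilities, and weights are made up to
# contrast the three aggregation rules discussed above; nothing here is a
# canonical formalization of utilitarianism.

# Hypothetical utilities each person would receive under two candidate actions.
UTILITIES = {
    "help_family":    {"me": 0, "sibling": 10, "stranger_a": 0, "stranger_b": 0},
    "help_strangers": {"me": 0, "sibling": 0,  "stranger_a": 6, "stranger_b": 6},
}
PEOPLE = ["me", "sibling", "stranger_a", "stranger_b"]

def equal_weight(action):
    """Classical 'equal consideration': everyone's benefit counts the same."""
    return sum(UTILITIES[action][p] for p in PEOPLE)

def fixed_weights(action, weights):
    """Broad-sense 'weighted utilitarianism': people may get different weights,
    but the weights are fixed by the theory and are the same no matter who is
    doing the evaluating."""
    return sum(weights[p] * UTILITIES[action][p] for p in PEOPLE)

def agent_relative(action, my_family):
    """Agent-relative weighting: the evaluator's *own* family gets extra weight
    because it is theirs -- the variant the thread says falls outside even the
    broad definition."""
    return sum((2.0 if p in my_family else 1.0) * UTILITIES[action][p]
               for p in PEOPLE)

if __name__ == "__main__":
    # Arbitrary but agent-neutral weights: every evaluator would use these same numbers.
    arbitrary_fixed = {"me": 0.5, "sibling": 0.5, "stranger_a": 1.5, "stranger_b": 1.5}
    for action in UTILITIES:
        print(action,
              equal_weight(action),                       # 10 vs 12: favors strangers
              fixed_weights(action, arbitrary_fixed),     # 5 vs 18: still agent-neutral
              agent_relative(action, {"me", "sibling"}))  # 20 vs 12: favors family
```

Under equal weights the strangers are helped (12 > 10); with the agent-relative rule the same evaluator’s family wins (20 > 12), even though the underlying utilities are unchanged; all that changed is whose family they are.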
I was actually uneasy about making the comment because I had a vague recollection that that might be true, but I’m not sure a definition that counts “maximize Kim Jong-Il’s welfare” as a form of utilitarianism is a good definition.
Utilitarianism that includes animals vs. utilitarianism that doesn’t include animals. If some people can give more / less weight to a somewhat arbitrarily defined group of subjects (animals), it doesn’t seem much of a stretch to also allow some people to weight another arbitrarily chosen group (family members) more (or less).
Classical utilitarianism is more strictly defined, but as you point out, we’re not talking about just classical utilitarianism here.
I don’t think that’s a very good example of agent-relativity. Those who would argue that only humans matter seldom (if ever) do so on the basis of agent-relative concerns: it’s not that I am supposed to have a special obligation to humans because I’m human; it’s that only humans are supposed to matter at all.
In any event, the point wasn’t that agent-relative weights don’t make sense, it’s that they’re not part of a standard definition of utilitarianism, even in a broad sense. I still think that’s an accurate characterization of professional usage, but if you have specific examples to the contrary, I’d be open to changing my mind.
Gratuitous nitpick: humans are animals too.
You may be right. But we’re inching pretty close towards arguing by definition now. So to avoid that, let me rephrase my original response to mattnewport’s question:
You’re right, by most interpretations utilitarianism does weigh everybody equally. However, if that’s the only thing in utilitarianism that you disagree with, and like the ethical system otherwise, then go ahead and adopt as your moral system a utilitarianism-derived one that differs from normal utilitarianism only in that you weight your family more than others. It may not be utilitarianism, but why should you care about what your moral system is called?
I completely agree with your reframing.
I (mistakenly) thought your original point was a definitional one, and that we had been discussing definitions the entire time. Apologies.
No problem. It happens.
For just a moment I was thinking “How is the Somebody Else’s Problem field involved?”
In utilitarianism, sometimes some animals can be more equal than others. It’s just that their lives must be of greater utility for some reason. I think sentimental distinctions between people would be rejected by most utilitarians as a reason to consider them more important.