I’d put a slight gloss on this.
The problem is that “utilitarianism”, as used in much of the literature, does seem to have more than one standard meaning. In the narrow (classical) utilitarian sense, steven0461 and the SEP are absolutely right to insist that it imposes equal weights. However, there’s definitely a literature that uses the term in a more general sense, which includes weighted utilitarianism as a possibility. Contra Kaj, however, even this sense does seem to exclude agent-relative weights.
As much of this literature is in economics, perhaps it’s non-standard in philosophy. It does, however, have a fairly long pedigree.
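To make the contrast concrete (the notation here is mine, not drawn from the SEP or any particular source): the classical form ranks outcomes by an unweighted sum of individual utilities, while the broader usage common in economics allows a fixed vector of non-negative weights.

$$W_{\text{classical}}(x) = \sum_{i=1}^{n} u_i(x), \qquad W_{\text{weighted}}(x) = \sum_{i=1}^{n} w_i \, u_i(x), \quad w_i \ge 0.$$

Classical utilitarianism is the special case $w_i = 1$ for all $i$; putting all the weight on a single individual gives the degenerate “maximize one person’s welfare” case discussed below, which is part of what makes the broader definition feel too permissive.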
I was actually uneasy about making the comment because I had a vague recollection that this might be true, but I’m not sure a definition under which “maximize Kim Jong-Il’s welfare” counts as a form of utilitarianism is a good definition.
Utilitarianism that includes animals vs. utilitarianism that doesn’t include animals. If some people can give more or less weight to a somewhat arbitrarily defined group of subjects (animals), it doesn’t seem much of a stretch to also allow some people to weight another arbitrarily chosen group (family members) more or less.
Classical utilitarianism is more strictly defined, but as you point out, we’re not talking about just classical utilitarianism here.
I don’t think that’s a very good example of agent-relativity. Those who would argue that only humans matter seldom (if ever) do so on the basis of agent-relative concerns: it’s not that I am supposed to have a special obligation to humans because I’m human; it’s that only humans are supposed to matter at all.
In any event, the point wasn’t that agent-relative weights don’t make sense; it’s that they’re not part of a standard definition of utilitarianism, even in a broad sense. I still think that’s an accurate characterization of professional usage, but if you have specific examples to the contrary, I’d be open to changing my mind.
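A sketch of the distinction being drawn here, again in my own notation: agent-neutral weights form a single vector that every evaluator uses, whereas agent-relative weights depend on who is doing the evaluating.

$$W(x) = \sum_i w_i \, u_i(x) \quad \text{(agent-neutral)}, \qquad W_a(x) = \sum_i w_{a,i} \, u_i(x) \quad \text{(agent-relative)},$$

where $w_{a,i}$ might, say, exceed 1 when $i$ is in agent $a$’s family. The claim is that even the broad, economics-style usage of “utilitarianism” fixes a single $w$ rather than letting the weights vary with $a$.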
Gratuitous nitpick: humans are animals too.
You may be right. But we’re inching pretty close towards arguing by definition now. So to avoid that, let me rephrase my original response to mattnewport’s question:
You’re right, by most interpretations utilitarianism does weigh everybody equally. However, if that’s the only thing in utilitarianism that you disagree with, and like the ethical system otherwise, then go ahead and adopt as your moral system a utilitarianism-derived one that differs from normal utilitarianism only in that you weight your family more than others. It may not be utilitarianism, but why should you care about what your moral system is called?
I completely agree with your reframing.
I (mistakenly) thought your original point was a definitional one, and that we had been discussing definitions the entire time. Apologies.
No problem. It happens.