I would guess that you’re not a utilitarian and a lot of LWers are. The standard utilitarian position is that all that matters is the interests of beings, and beings’ utility is weighed equally regardless of what those beings are. One “unit” of suffering (or utility) generated by an animal is equal to the same unit generated by a human.
There’s a continuum of... mental complexity, to name something random, between modern dolphins and rocks. Homo sapiens also fits on that curve somewhere.
You might argue that mental complexity is not the right parameter to use, but unless you’re going to argue that rocks are deserving of utility, you’ll have to accept either an arbitrary cut-off point or some mapping between $parameter and utility-deservingness, and practically every candidate parameter has a similarly continuous curve.
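To make the fork concrete, here is a minimal sketch of the two options, with invented complexity scores and a hypothetical 0.5 cutoff standing in for $parameter and the cut-off point:

```python
# Two ways to turn a continuous parameter (a made-up "mental
# complexity" score in [0, 1]) into a moral weight.

def step_weight(complexity: float, cutoff: float = 0.5) -> float:
    # Arbitrary cut-off: full moral weight above the threshold, none below.
    return 1.0 if complexity >= cutoff else 0.0

def continuous_weight(complexity: float) -> float:
    # Continuous mapping: weight varies smoothly with the parameter,
    # so a rock gets ~0 weight without any sharp threshold.
    return max(0.0, min(1.0, complexity))

# Scores are invented purely for illustration.
for entity, score in [("rock", 0.0), ("dog", 0.35), ("dolphin", 0.6), ("human", 0.9)]:
    print(f"{entity}: step={step_weight(score)}, continuous={continuous_weight(score):.2f}")
```

The step function is what makes the Sorites worry raised later in the thread bite: nudging a score across the cutoff flips the weight from 0 to 1, while the continuous mapping has no such jump.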
As I understand it, a util is equal regardless of what generates it, but the ability to generate utils out of states of the world varies from species to species. A rock doesn’t experience utility, but dogs and humans do. If a rock could experience utility, it would be equally deserving of it.
> I would guess that you’re not a utilitarian and a lot of LWers are.
I’m almost certain this is false for the definition of “utilitarianism” you give in the next sentence.
There is unfortunately a lot of confusion between two different senses of the word “utilitarianism”: the definition you give, and the more general sense of any morality system that uses a utility function.
I generally consider myself to be a utilitarian, but I only apply that utilitarianism to things that have the property of personhood. But I’m beginning to see that things aren’t so simple.
I’ve seen “utilitarianism” used to denote both “my utility is the average/[normalized sum] of the utility of each person, plus my exclusive preferences” and “my utility is a weighted sum/average of the utility of a bunch of entities, plus my exclusive preferences”. I’m almost sure that few LWers would claim to be utilitarians in the former sense, especially since most people round here believe minds are made of atoms and thus not very discrete.
I mean, we can add/remove small bits from minds, and unless personhood is continuous (which would imply the second sense of utilitarianism), one tiny change in the mind would have to suddenly shift us from fully caring about a mind to not caring about it at all, which doesn’t seem to be what humans do. This is an instance of the Sorites “paradox”.
(One might argue that utilities are only defined up to affine transformation, but when I say “utility” I mean the thing that’s like utility except it’s comparable between agents. Now that I think about it, you might mean that we’ve defined persons’ utility such that every util is equal in the second sense of the previous sentence, but I don’t think you meant that.)
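A minimal sketch of the two senses, with invented names, utilities, and weights, utils assumed to be comparable between agents (as the parenthetical stipulates), and the “exclusive preferences” term omitted for brevity:

```python
# Hypothetical sketch of the two aggregation rules described above.
# All values are invented; utils are assumed to be measured on one
# common scale so that summing across agents is meaningful.

utilities = {"alice": 10.0, "bob": 4.0, "dolphin": 2.0}

# First sense: unweighted average over persons only, with a binary
# personhood test that simply excludes the dolphin.
persons = ["alice", "bob"]
first_sense = sum(utilities[p] for p in persons) / len(persons)

# Second sense: weighted average over all entities, with continuous
# weights standing in for degrees of personhood.
weights = {"alice": 1.0, "bob": 1.0, "dolphin": 0.3}
second_sense = (sum(weights[e] * utilities[e] for e in utilities)
                / sum(weights.values()))

print(first_sense)   # (10 + 4) / 2 = 7.0
print(second_sense)  # (10 + 4 + 0.6) / 2.3 ≈ 6.35

# The affine-transformation caveat: rescaling one agent's utility
# (u -> a*u + b, with a > 0) leaves that agent's own choices unchanged
# but alters both aggregates above, which is why a common scale must be
# fixed before "every util is equal" means anything.
```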
Utilitarianism is normative, so it means that your utility should be the average of the utility of all beings capable of experiencing it, regardless of whether it currently is. If it becomes a weighted average, it ceases to be utilitarianism, because it involves considerations other than the maximization of utility.
> one tiny change in the mind would have to suddenly shift us from fully caring about a mind to not caring about it at all, which doesn’t seem to be what humans do
Consider how much people care about the living compared to the dead. I think that’s a counterexample to your claim.
> I would guess that you’re not a utilitarian and a lot of LWers are. The standard utilitarian position is that all that matters is the interests of beings, and beings’ utility is weighed equally regardless of what those beings are. One “unit” of suffering (or utility) generated by an animal is equal to the same unit generated by a human.
If “a lot” means “a minority”.
Well, no, that can’t be right.
> There’s a continuum of... mental complexity, to name something random, between modern dolphins and rocks. Homo sapiens also fits on that curve somewhere.
> You might argue that mental complexity is not the right parameter to use, but unless you’re going to argue that rocks are deserving of utility, you’ll have to accept either an arbitrary cut-off point or some mapping between $parameter and utility-deservingness, and practically every candidate parameter has a similarly continuous curve.
> As I understand it, a util is equal regardless of what generates it, but the ability to generate utils out of states of the world varies from species to species. A rock doesn’t experience utility, but dogs and humans do. If a rock could experience utility, it would be equally deserving of it.
Fair enough.
~~~
I’m still not sure I agree, but I’ll need to think about it.
> I’m almost certain this is false for the definition of “utilitarianism” you give in the next sentence.
> There is unfortunately a lot of confusion between two different senses of the word “utilitarianism”: the definition you give, and the more general sense of any morality system that uses a utility function.
I thought the latter was just called “consequentialism”.
In practice I’ve seen “utilitarianism” used to refer to both positions, as well as a lot of positions in between.
> I generally consider myself to be a utilitarian, but I only apply that utilitarianism to things that have the property of personhood. But I’m beginning to see that things aren’t so simple.
Do corporations that are legally persons count?