I wrote that “if one cares about suffering, one should also care about nonhuman animals, since (1) they are capable of suffering, (2) they do suffer quite a lot, and (3) we can prevent their suffering.”
Presumably you either disagree with one of my three empirical claims (which means we can have a good discussion) or you don’t care about suffering generally (perhaps you care only about human or sapient suffering), in which case there’s not much we can discuss. I, or someone else, could attempt to throw some thought experiments at you, I suppose, but I don’t expect they’d do much.
This assumes that if I care about suffering, my utility function places a negative weight on it comparable to the positive weight it places on my eating food I like, but this need not be the case. If I care about suffering, it means I want less of it, but it doesn’t mean I’m willing to give up much to reduce the amount. Ceteris paribus, I want less suffering in the world, but that doesn’t mean I care enough about it to give up delicious hamburgers, or even to pay more for a burger. I care about not getting dust specks in my eye too, but if I got one dust speck in my eye per month, and I could get rid of it by never eating burgers, I’d keep eating burgers. That doesn’t mean I don’t care, though.
That’s technically true, yeah. It means you don’t care very much (or care very, very much about eating burgers)...
Or it means that the formalism of a utility function does not fully describe your preferences.
That is, asking “how much do you care about X?” and getting a real number as the answer for each X will not fully describe the preferences and choices of the agent in question. (This is one way to interpret my previously offered “chickens vs. grandmother” conundrum.)
A more apt formalism might be some sort of multi-tier system. I haven’t settled on an answer myself.
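One way to make the multi-tier idea concrete is lexicographic preferences: options are compared on the highest-priority tier first, and lower tiers only break ties. If the tiers are genuinely non-tradeable (any higher-tier gain beats any lower-tier loss, however large), no single real-valued “how much do you care” number reproduces the ordering. A minimal sketch, with tier names and scores as purely illustrative assumptions, not anything from the thread:

```python
def prefers(a, b, tiers):
    """Return True if option `a` is strictly preferred to option `b`.

    `a` and `b` are dicts mapping tier name -> score (higher is better);
    `tiers` lists tier names from highest to lowest priority.
    Lower tiers are consulted only to break ties on higher ones.
    """
    for tier in tiers:
        if a[tier] != b[tier]:
            return a[tier] > b[tier]
    return False  # equal on every tier: indifferent

# Illustrative tiers (hypothetical, chosen to echo the discussion):
tiers = ["human_welfare", "animal_welfare", "tasty_food"]

# Any gain on a higher tier outweighs any loss on lower tiers:
keep_grandmother = {"human_welfare": 1, "animal_welfare": -1000, "tasty_food": 0}
save_chickens    = {"human_welfare": 0, "animal_welfare":  1000, "tasty_food": 0}

print(prefers(keep_grandmother, save_chickens, tiers))  # True
```

Under such an ordering, the lower-tier scores still matter (they decide ties), so the agent does “care” about them, yet no finite weight on the top tier can be traded away against them. That is one reading of the chickens-vs.-grandmother conundrum.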