I.e., if you care about the suffering of others, but only other humans or some other group, explain why you draw the distinction.
I care less about the suffering of some groups, but I can’t really explain what criterion I use (and I’m in general wary of coming up with simple rules). I can explain why, from an evolutionary point of view, it makes sense for me to care less about those who are only distantly related and are unlikely to punish me if I’m not nice to them. But I agree that this “why” is probably not the one you were asking about.
Truth be told, in my day-to-day life I instinctively care less as well. It’s fairly easy to get myself to care about mammals whose facial expressions I can recognize, and harder for things like reptiles.
At some point I made a conscious decision to choose “universal preferences” over “what personally makes me feel squicky or warm and fuzzy.” That decision could, at least in part, be considered cooperating in a massive-scale prisoner’s dilemma. If I let my personal squick factors sway me on moral issues, I’m giving approval for other people to do the same. Humans used to consider the other tribe over the hill unworthy of moral consideration because they were “other.” You can use “other” as a criterion, but you’re increasing, in some small way, the chance that others will use that criterion to avoid giving consideration to you or people you care about.
If you care about animal suffering but assign some coefficient of otherness to it, I think you should at least figure out what that coefficient IS, and then shut up and multiply.
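To make “shut up and multiply” concrete, here is a sketch with purely made-up numbers (not from the original discussion): if your otherness coefficient for chickens is 0.01, then the suffering of 300 chickens weighs as much as the suffering of three humans (0.01 × 300 = 3), and a policy sparing a million chickens some amount of suffering is worth as much as one sparing 10,000 humans the same amount. Once the coefficient is explicit, the arithmetic, not the squick factor, settles the comparison.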
Humans used to consider the other tribe over the hill unworthy of moral consideration because they were “other.” You can use “other” as a criterion, but you’re increasing, in some small way, the chance that others will use that criterion to avoid giving consideration to you or people you care about.
I don’t think “other” is the main criterion either. If we visit another planet and find it inhabited by aliens with approximately 19th-century European technology, and consider them unlikely to harm us (their planet has no uranium, they’re two feet tall and not particularly warlike, and we have nanoweapons and orbital lasers), I would still consider it very immoral to kill one of them, even though they are very “other”, even less related to us than broccoli is, and being nice to them isn’t particularly in our interest.