Truth be told, in my day-to-day life I instinctively care less as well. It’s fairly easy to get myself to care about mammals with facial expressions I can recognize, harder for things like reptiles.
At some point I made a conscious decision to choose “universal preferences” over “what personally makes me feel squicky or warm and fuzzy.” That decision could, at least in part, be considered cooperating in a massive-scale prisoner’s dilemma. If I let my personal squick-factors persuade me on moral issues, I’m giving approval for other people to do the same. Humans used to consider the other tribe over the hill unworthy of moral consideration because they were “other.” You can use “other” as a criterion, but you’re increasing, in some small way, the chance that others will use that criterion to avoid giving consideration to you or people you care about.
If you care about animal suffering but assign some coefficient of otherness to it, I think you should at least figure out what that coefficient IS, and then shut up and multiply.
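In expected-value terms, “figure out the coefficient and multiply” just means discounting each being’s suffering by its otherness coefficient before summing. A minimal sketch of that bookkeeping, where every species name and number is a made-up illustration, not a claim about what the coefficients should be:

```python
# Hypothetical otherness coefficients: how much a being's suffering
# counts relative to a human's (1.0 = full moral weight).
OTHERNESS = {"human": 1.0, "dog": 0.8, "lizard": 0.3}

def weighted_suffering(cases):
    """Sum suffering across beings, each discounted by its coefficient.

    `cases` is a list of (kind, amount-of-suffering) pairs.
    """
    return sum(amount * OTHERNESS[kind] for kind, amount in cases)

# With these (invented) numbers, two units of lizard suffering
# outweigh half a unit of human suffering: 2 * 0.3 > 0.5 * 1.0.
total = weighted_suffering([("lizard", 2.0), ("human", 0.5)])
```

The point of making the coefficient explicit is that once it’s a number, you can’t quietly treat it as 0.3 in one argument and 0.0001 in the next.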
> Humans used to consider the other tribe over the hill unworthy of moral consideration because they were “other.” You can use “other” as a criterion, but you’re increasing, in some small way, the chance that others will use that criterion to avoid giving consideration to you or people you care about.
I don’t think “other” is the main criterion either. If we visit another planet and find it inhabited by aliens with approximately 19th-century European technology, and consider them unlikely to harm us (their planet has no uranium, they’re two feet tall and not particularly warlike, and we have nanoweapons and orbital lasers), I would still consider it very immoral to kill one of them, even though they are very “other,” even less related to us than broccoli is, and being nice to them isn’t particularly in our interest.