The same moral arguments keep cropping up in multiple threads, and responding to them all separately seems inefficient. Here’s an attempt to summarize my views and head off a lot of identical conversations:
As I said before, I’m operating off of Preference Utilitarianism. I am still in the process of working through that (it’s only recently that I tacked on the word “preference,” and I’m not 100% sure it solves the problems I wanted it to solve). But I strongly believe that the happiness and suffering of others ARE important.
That is not a fact I will try to convince someone I am right about, because I don’t think it’s something that can be proven. But it’s not a leap of faith either; it’s simply how I feel. Most humans seem to feel the same way. If you don’t, that’s fine. But saying “there’s no reason to think that” is irrelevant. It doesn’t matter whether there’s a reason to care. All that matters is that we do. If you honestly do not feel compelled to worry about the suffering of others for their own sake, I won’t argue with you. But if you DO feel compelled to worry about it, and are avoiding it because “it’s just an emotion,” then I think you are missing the point. At some point you need to define your goals, and figure out where they conflict with each other, and it’s perfectly okay if “not causing unnecessary suffering” ends up being one of the things you arbitrarily care about.
I don’t really want this to turn into a discussion about morality in general. If the subject comes up in the various subthreads, please try to limit the discussion to “I do/do-not care about suffering of others” and then argue about the consequences of THAT (i.e. if you care about suffering of others, but only other humans, or some other group, explain why you draw the distinction). If you don’t care about it, then the animal rights side of the issue is largely irrelevant to you and I won’t press you on it.
If you want to argue about the merits of caring about others’ suffering in the first place (or in particular with the statements I just made) then try to keep it under this individual reply, so it stays fairly focused and doesn’t gum up the rest of the conversation.
i.e. if you care about suffering of others, but only other humans, or some other group, explain why you draw the distinction.
I care less about the suffering of some groups, but can’t really explain what criterion I use (and am in general wary of coming up with simple rules). I can explain why, from an evolutionary point of view, it makes sense for me to care less about those who are only distantly related and are unlikely to punish me if I’m not nice to them. I agree that this “why” is probably not the one you were asking about.
Truth be told, in my day-to-day life I instinctively care less as well. It’s fairly easy to get myself to care about mammals with facial expressions I can recognize, harder for things like reptiles.
At some point I made a conscious decision to choose “universal preferences” over “what personally makes me feel squicky or warm and fuzzy.” That decision could, at least in part, be considered cooperating in a massive-scale prisoner’s dilemma. If I let my personal squick-factors persuade me on moral issues, I’m giving approval for other people to do the same. Humans used to consider the other tribe over the hill unworthy of moral consideration because they were “other.” You can use “other” as a criterion, but you’re increasing, in some small way, the chance that others will use that criterion to avoid giving consideration to you or people you care about.
If you care about animal suffering but assign some coefficient of otherness to it, I think you should at least figure out what that coefficient IS, and then shut up and multiply.
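To make the “figure out the coefficient, then multiply” step concrete, here is a minimal sketch in Python. The function name and every number in it are made up for illustration; the point is only that once you state your otherness coefficients explicitly, the comparison becomes simple arithmetic instead of gut feeling:

```python
# Purely illustrative: weigh each group's suffering by an explicit
# "otherness" coefficient and sum, rather than relying on squick.

def weighted_suffering(groups):
    """Sum suffering across groups, each discounted by its coefficient."""
    return sum(coefficient * suffering for coefficient, suffering in groups)

# (coefficient, units of suffering) -- invented example values
groups = [
    (1.0, 10),   # humans: full weight
    (0.5, 40),   # mammals with recognizable expressions
    (0.1, 200),  # reptiles, fish, etc.
]

print(weighted_suffering(groups))  # 1.0*10 + 0.5*40 + 0.1*200 = 50.0
```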
Humans used to consider the other tribe over the hill unworthy of moral consideration because they were “other.” You can use “other” as a criterion, but you’re increasing, in some small way, the chance that others will use that criterion to avoid giving consideration to you or people you care about.
I don’t think “other” is the main criterion either. If we visited another planet and found it inhabited by aliens with approximately 19th-century European technology, and considered them unlikely to harm us (their planet has no uranium, they’re two feet tall and not particularly warlike, and we have nanoweapons and orbital lasers), I would still consider it very immoral to kill one of them, even though they are very “other,” even less related to us than broccoli is, and being nice to them isn’t particularly in our interest.