I would agree that starting to find X immoral in and of itself would be overdoing it, especially as there could later be a conflict with people who object to ~X. I suppose I am probing a middling position: even if you don’t find X intrinsically immoral, you associate it with the suffering it would cause those people, and thus it acquires the immorality of being associated with that amount of suffering.
Back to the vegetarian example—which I continue to find politically ‘safe’—the unhappiness I may be causing animal activists hasn’t led me as far as considering eating meat immoral, but I’m beginning to pause whenever I eat meat in deference to those minds, and I wonder whether I should continue to develop this second-person moral sensitivity. Arguably, the world could be better if my fellows didn’t worry about the slaughtering of animals. But then again, why should I continue to eat meat if the world would be better without it?
If this were merely about a concern for the affective state of supporters of animal rights, you could just eat meat and then lie about it.
What I got out of your post was a game-theoretic strategy, a sort of special case of the Golden Rule, by which you might decide not to eat meat in deference to supporters of animal rights — even when no one is looking — because there are certain behaviors you would like others to adopt reciprocally. Maybe you’re a supporter of, um, turnips’ rights, and you want others to refrain from eating turnips, at least where doing so would not be an inconvenience. So we have a Prisoner’s Dilemma where you can eat animals or not, the other player can eat turnips or not, and the best outcome is if everyone abstains from both animals and turnips.
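To make the Prisoner’s Dilemma shape concrete, here is a minimal payoff sketch. The specific numbers and the payoff function are my own illustrative assumptions, not anything from the post: each player gets a small benefit from eating the food they like, but takes a larger loss when the other player eats the thing they care about.

```python
# Illustrative Prisoner's Dilemma payoffs (numbers are made up for this sketch).
# Player A chooses whether to eat animals but cares about turnips;
# Player B chooses whether to eat turnips but cares about animals.

EAT, ABSTAIN = "eat", "abstain"

def payoff(a_choice, b_choice):
    """Return (A's utility, B's utility): +1 for eating what you like,
    -3 when the other player eats the thing you care about."""
    a = (1 if a_choice == EAT else 0) - (3 if b_choice == EAT else 0)
    b = (1 if b_choice == EAT else 0) - (3 if a_choice == EAT else 0)
    return a, b

for a in (EAT, ABSTAIN):
    for b in (EAT, ABSTAIN):
        print(f"A {a} / B {b} -> {payoff(a, b)}")

# Eating is each player's dominant strategy (it adds 1 regardless of the
# other's choice), yet mutual abstention (0, 0) beats mutual eating (-2, -2):
# the standard Prisoner's Dilemma structure the comment points to.
```

With these assumed numbers, abstaining unilaterally only makes you worse off; the case for abstaining rests on the reciprocity the post describes, where your restraint is matched by the other player’s.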
There’s the game-theoretic consideration, certainly, but I also directly prefer a world in which people’s preferences are satisfied. Though this preference isn’t strong, I’m wondering whether it could be strengthened through reflection, and what the effects of that would be.