Consider dealings with agents whose morals conflict with your own. Obviously, major value conflicts preclude co-existence. Let's assume the conflict is minor: Bob believes consuming cow's milk and beef at the same meal is immoral.
It is possible to develop instrumental or terminal values that settle how much you tolerate Bob's differing value, without reference to any meta-ethical theory. But I think that meta-ethical considerations play a large role in how tolerance of value conflict is resolved, for some people at least.
Not obvious. (How does this “preclusion” work? Is it the best decision available to both agents?)
Well, if I don't include that sentence, someone nitpicks by asking whether I am committed to tolerating an agent like McH.
I was trying to preempt that by making it clear that McH gets imprisoned or killed, even by moral anti-realists (unless they are exceptionally stupid).