This doesn’t answer the question. Why is doing things Joe doesn’t like, or making his friends sad, bad? Consequentialism isn’t a moral system by itself; you need axioms or goals.
Why is doing things Joe doesn’t like, or making his friends sad, bad?
Because ceteris paribus, I prefer not to make Joe or his friends sad (which is an instance of the more general rule, “don’t violate people’s preferences, ceteris paribus”). And before you say that makes morality “arbitrary” or something along those lines, note that the overwhelming majority of society (in most Western First World countries, anyway; I don’t know how it is in, say, the Middle East) agrees with me.
So yes, technically you could have a preference for violating other people’s preferences, and those preferences would technically be just as valid as mine, but in practice, if you act upon that preference, you are violating one of society’s rules, and game theory says that defectors get punished. So unless you want to get locked up for a long time, don’t kill people.
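A toy sketch of that game-theory claim, using the standard iterated prisoner’s dilemma payoffs (the strategy names, round count, and numbers below are my own illustration, not anything from the thread): against a punishing strategy like tit-for-tat, a persistent defector ends up far behind a cooperator.

```python
# Iterated prisoner's dilemma with the standard payoffs:
# T=5 (temptation), R=3 (reward), P=1 (punishment), S=0 (sucker).
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(opp_moves):
    # Cooperate first, then copy the opponent's previous move.
    return opp_moves[-1] if opp_moves else 'C'

def always_defect(opp_moves):
    return 'D'

def always_cooperate(opp_moves):
    return 'C'

def play(strat_a, strat_b, rounds=100):
    """Total scores for two strategies over repeated play."""
    score_a = score_b = 0
    moves_a, moves_b = [], []
    for _ in range(rounds):
        a, b = strat_a(moves_b), strat_b(moves_a)  # each sees the other's history
        pa, pb = PAYOFF[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
        moves_a.append(a)
        moves_b.append(b)
    return score_a, score_b

print(play(always_defect, tit_for_tat))     # (104, 99): one exploitative win, then punishment
print(play(always_cooperate, tit_for_tat))  # (300, 300): cooperation sustained throughout
```

The defector’s one-round gain is swamped by the retaliation that follows, which is the “defectors get punished” point in miniature.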
Of course, you might find this unsatisfactory for several reasons. For example, you might demand that morality hold anywhere and everywhere, whether or not a society exists to enforce it. However, the behavior of other animals in the wild contradicts that idea, and humans, for all their intelligence, are still animals at their core, and therefore likely to behave the same way if deprived of societal norms. (Mind you, given enough time, they could probably rebuild a society from scratch; after all, we did it once.) Unless you’re a moral realist or something, which is indefensible for other reasons, I don’t see how you could argue your way out of this point.
In morals, as in logic, you can’t explain something by appealing to something else unless the chain terminates in an axiom.
The question “why is it bad to rape and murder?” can be rephrased as, “how can we determine if a thing is bad, in the case of rape and murder?”
The answer “rape and murder are bad by definition” may be unsatisfying, but at least it’s a workable criterion: everything on the list is bad, everything else is not. But the answer “because they make others sad” assumes you can already determine that making others sad is bad. It substitutes one question for another, and unless we keep asking “why” until the chain reaches an axiom, we haven’t answered the original question.
Doesn’t that also imply you should feed utility monsters?
Sure. After all, I value humans much more highly than pigs. Doesn’t that imply that humans are utility monsters, at least compared to other animals?
EDIT: Vegans, on the other hand, should have a much harder time with the idea of utility monsters (at least from what little I know about veganism).
And that’s pretty much the difference between the two kinds of “moral realism”.
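To make the utility-monster worry concrete, here is a minimal sketch with invented numbers (the weights below are purely illustrative, not a claim about how much anyone actually values humans over pigs): if aggregate welfare is a weighted sum and one agent’s weight dominates, the sum-maximizing allocation gives that agent everything.

```python
# Aggregate welfare as a weighted sum of resources received.
# The weights are made up for illustration.
weights = {"human": 100, "pig": 1}

def total_utility(allocation):
    """Weighted-sum welfare for a {name: units_of_resource} allocation."""
    return sum(weights[name] * units for name, units in allocation.items())

print(total_utility({"human": 10, "pig": 0}))  # 1000: everything to the "monster"
print(total_utility({"human": 5, "pig": 5}))   # 505: any sharing lowers the sum
```

Under those weights the sum-maximizer always starves the pig, which is exactly the structure of the utility-monster objection.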
You can always keep asking why. That’s not particularly interesting.
Okay, then interpret my answer as “rape and murder are bad because they make others sad, and making others sad is bad by definition”.
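To spell out the structure of that move (the predicate names and example acts here are mine, purely illustrative): the chain of “why”s now terminates, because the last step is a definition rather than another reduction.

```python
# The final position, sketched as predicates. "Makes others sad" is
# treated as an empirical stub here; the moral content is the axiom.

def makes_others_sad(act):
    # Stub for an empirical question; contents are illustrative only.
    return act in {"rape", "murder"}

# Axiom (the "by definition" step): making others sad is bad.
def is_bad(act):
    return makes_others_sad(act)

# Contrast with the pure-list answer from upthread, which is also an
# axiom, just a less general one:
BAD_LIST = {"rape", "murder"}

def is_bad_by_list(act):
    return act in BAD_LIST

print(is_bad("murder"), is_bad_by_list("murder"))        # True True
print(is_bad("gardening"), is_bad_by_list("gardening"))  # False False
```

Both versions bottom out in an axiom; the difference is only how general the axiom is, which is the point of terminating the chain.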