I’ve encountered a few near-Comtean altruists who will readily admit their morality makes them miserable; the idea that other people are worse off than them fills them with a deep guilt which they cannot resolve.
Interesting morality, that makes those who follow it miserable. Why is it that they want to have a morality, when the one they have makes them miserable?
Most of what we strive for has survival value for the species. Yet striving is not fun. The human moral instinct is filled with features which facilitate humans living together in large numbers and working together in a unified fashion. We don’t kill each other when we get mad at each other (mostly; and even when we fail, that is the direction morality pushes us in): this makes it way easier to live together in large groups. For the most part we are motivated to speak honestly to each other and to keep our promises and commitments. This allows groups of humans to function together to get things done that individual humans could not.
So you ask:
Why is it that they want to have a morality, when the one they have makes them miserable?
Pardon my answering your question with a rhetorical question, but why would you think that what we have and what we get would be influenced at all by what we want, or by what does or does not make us miserable? But to answer more straightforwardly: if the result of having a morality is that we can function effectively together in large numbers, then evolution will select, if it can, for such moralities; and if being miserable does not fully negate the benefit of human cooperation, evolution will not abandon morality just because it also makes some people miserable.
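To make that selection argument concrete, here is a toy replicator-dynamics sketch (my own illustrative model, not anything established; every parameter value in it is invented): a type whose morality imposes a direct fitness cost can still spread, so long as the cooperation it enables pays more than the misery costs.

```python
# Toy replicator-dynamics sketch of the claim above: a "moral" type pays a
# misery cost, but moral individuals gain a cooperation benefit from each
# other. All parameter values are invented purely for illustration.

def step(p, benefit=0.5, misery_cost=0.1, baseline=1.0):
    """Advance the population one generation.

    p           -- current fraction of the moral (cooperative) type
    benefit     -- extra fitness a moral individual gains per moral partner
    misery_cost -- fitness drag from the misery the morality imposes
    baseline    -- fitness everyone receives regardless of type
    """
    fitness_moral = baseline + benefit * p - misery_cost
    fitness_amoral = baseline
    mean_fitness = p * fitness_moral + (1 - p) * fitness_amoral
    # Standard discrete replicator update: a type grows in proportion
    # to its fitness relative to the population mean.
    return p * fitness_moral / mean_fitness

p = 0.3  # start with 30% moral types
for _ in range(200):
    p = step(p)
print(f"moral fraction after 200 generations: {p:.3f}")  # approaches 1.0
```

With these numbers the moral type takes over the population despite its misery cost, because benefit * p exceeds misery_cost once moral types are common enough. Flip the inequality (say, misery_cost=0.6) and the type dies out instead: exactly the "does not fully negate the benefit" condition above.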
Interesting morality, that makes those who follow it miserable. Why is it that they want to have a morality, when the one they have makes them miserable?
This implies that you get a choice in the matter. Ultimately, preferences simply exist—they aren’t chosen.
I might say that, but I doubt that those miserable Comtean altruists would see it that way. For them, I suspect morality is a truth, not a preference.
Well, it is a truth about one’s preferences, isn’t it?
They’d say that Altruism is good regardless of whether they prefer it or not.
No, I mean even regardless of that.
Regardless of the debate over whether Morality and “Good” are somehow embedded in how the universe works, you can’t change whether you prefer to behave that way.
For example, I can’t help but care about human suffering. You can’t ask me why I would want to care about it—it’s a terminal value. I care because I was “programmed” to care...and it wouldn’t matter whether or not it was “good” to care.
I can’t help it in the same way I can’t help preferring sweet to bitter. Asking why I would want to have those preferences is like asking someone why they find junk food tasty.