Well, I find that my metamorality meets those criteria, with one exception.
To recap briefly: I think the foundations of morality as we understand it are certain evolved impulses of the kind we can find in other primates (maternal love, the desire to punish a cheater, etc.). These are like other emotions, with one key difference, a social component: we expect and rely on others having the same reaction, and accordingly we experience other emotions as more subjective and our moral impulses as more objective.
Note that when I’m afraid of something, and you’re not, this may surprise me but doesn’t anger me; but if I feel moral outrage at something, and you don’t, then I’m liable to get angry with you.
But of course our moralities aren’t just these few basic impulses. Given our capacity for complex thought and for passing down complex cultures, we’ve built up many systems of morality that try to integrate all these impulses. It’s a testament to the power of conscious thought to reshape our very perceptions of the world that we can get away with this: we foment one moral impulse to restrain another when our system tells us to, and we can work up a moral sentiment in extended contexts when our system calls for one. (When we fail to correctly extrapolate and apply our moral system, we later regard this as a moral error.)
Of course, some moral systems are more logically coherent than others (which matters if we want to think of them as objective), some have better observable consequences, and some require less strenuous effort at reinterpreting experience. Moving from one moral system to another that improves on some of these counts is generally what we call “moral progress”.
This account has no problem with #2 and #3, and I don’t see an “impossible question” suggesting itself (though I’m open to suggestions); the only divergence from your desired properties is that it claims merely that we can hardly help believing that some things are objectively right, whether we want them to be or not. It’s not impossible for an alien species to evolve conscious thought without any such concept of objective morality, or with one that differs from ours on the most crucial of points (say, our immediate moral pain at seeing something like us suffer); and there’d be nothing in the universe to say which of us is “right”.
In essence, I think Subhan is partly on the right track, but he doesn’t realize that some human impulses are stronger than anything we’d call a “preference”, or that what’s at stake is a mix of moral impulse, reasoning, and reinterpretation of experience, far more complex than the interactions he supposes. Since we humans have in common both the first-order moral impulses and the perception that these are objective and thus ought to be logically coherent, we don’t in fact have all that many degrees of freedom in constructing our moral systems.
Sorry for the overlong comment. I’m eager to see what tomorrow’s post will bring...