I know moral relativism is not universally popular, but can reductionism/rationalism lead to anything else?
That depends a lot on what you mean by “moral relativism”. Certainly rationality and reductionism need not imply taking morality less seriously. I liked what Eliezer’s Harry Potter had to say on the subject:
“No,” Professor Quirrell said. His fingers rubbed the bridge of his nose. “I don’t think that’s quite what I was trying to say. Mr. Potter, in the end people all do what they want to do. Sometimes people give names like ‘right’ to things they want to do, but how could we possibly act on anything but our own desires?”
“Well, obviously,” Harry said. “I couldn’t act on moral considerations if they lacked the power to move me. But that doesn’t mean my wanting to hurt those Slytherins has the power to move me more than moral considerations!”
If you haven’t looked at it already, you might like Eliezer’s sequence on metaethics, which talks about how one can notice that our concerns are generated by our brains, and that one could design brains with different concerns, while still taking morality seriously.
I’m glad someone noticed that. It was NOT EASY to compress that entire metaethical debate down into two paragraphs of text that wouldn’t distract from the main story.
It sounds a lot as if you are suggesting that there is some essence to morality which transcends “concerns are generated by our brains”.
“still taking morality seriously” modifies “one [who can...]”, not “brains with different concerns”.
I’m not sure why you came to think there was some confusion on this point, so I will not presume to suggest where you went wrong in your reading.
One attractor in the space of moral systems that doesn’t have much to do with what could be engineered into brains is the class of moral systems that are favoured by natural selection.
There’s no such essence in a classical philosophical sense in naturalistic metaethics. Rather, the transcendence comes from morality-as-computation’s ability to be abstracted away from brains, like arithmetic can be abstracted away from calculators. “Concerns generated by our brains”, for example, breaks down when we try to instill morality into a FAI that might be profoundly non-human and non-person.
I read some of it, and after you mentioned it, I read some more. E.g. The Bedrock of Fairness touches on the issue of whether or not there is this moral ‘essence’. Also, I liked Paul Graham’s What You Can’t Say, which discusses the way morals change.
Overall, I think the closest thing to a ‘moral essence’ is that the set of moral intuitions (however vaguely defined) is the best thing that evolutionary processes have been able to come up with. Hume’s is-ought problem does not really apply because there is no real ought.
The set of morals we ended up with is probably best summarized by the Golden Rule, which is a useful illusion in the same way that free will is, and similarly, for all practical purposes we can treat it as if it were real.
[ It’s an interesting thought experiment to consider whether there could be other, radically different sets of morals that would lead to the same or better evolutionary fitness, while still being ‘evolutionarily feasible’. ]