What about ethics? It seems that many people think there is some ‘moral bedrock’ somewhere—but is there really such a thing?
To me it seems that ethical questions are really about the tension between our knee-jerk moral intuitions and ethical frameworks (utilitarianism, deontology etc.). Increasingly elaborate theories are built out of the urge to somehow make our ‘moral compass’ seem logical, until someone comes up with some clever example where the theory somehow conflicts with our intuitions...
I know moral relativism is not universally popular, but can reductionism/rationalism lead to anything else?
That depends a lot on what you mean by “moral relativism”. Certainly rationality and reductionism need not imply taking morality less seriously. I liked what Eliezer’s Harry Potter had to say on the subject:
“No,” Professor Quirrell said. His fingers rubbed the bridge of his nose. “I don’t think that’s quite what I was trying to say. Mr. Potter, in the end people all do what they want to do. Sometimes people give names like ‘right’ to things they want to do, but how could we possibly act on anything but our own desires?”
“Well, obviously,” Harry said. “I couldn’t act on moral considerations if they lacked the power to move me. But that doesn’t mean my wanting to hurt those Slytherins has the power to move me more than moral considerations!”
If you haven’t looked at it already, you might like Eliezer’s sequence on metaethics, which talks about how one can notice that our concerns are generated by our brains, and that one could design brains with different concerns, while still taking morality seriously.
I’m glad someone noticed that. It was NOT EASY to compress that entire metaethical debate down into two paragraphs of text that wouldn’t distract from the main story.
It sounds a lot as if you are suggesting that there is some essence to morality which transcends “concerns are generated by our brains”.
“still taking morality seriously” modifies “one [who can...]”, not “brains with different concerns”.
I’m not sure why you came to think there was some confusion on this point, so I will not presume to suggest where you went wrong in your reading.
One attractor in the space of moral systems that doesn’t have much to do with what could be engineered into brains is the class of moral systems that are favoured by natural selection.
There’s no such essence in a classical philosophical sense in naturalistic metaethics. Rather, the transcendence comes from morality-as-computation’s ability to be abstracted away from brains, like arithmetic can be abstracted away from calculators. “Concerns generated by our brains”, for example, breaks down when we try to instill morality into a FAI that might be profoundly non-human and non-person.
I read some of it, and after you mentioned it, I read some more. E.g. The Bedrock of Fairness touches on the issue of whether or not there is this moral ‘essence’. Also, I liked Paul Graham’s What You Can’t Say, which discusses the way morals change.
Overall, I think the closest thing to a ‘moral essence’ is that our set of moral intuitions (however vaguely defined) is the best that evolutionary processes have been able to come up with. Hume’s is-ought problem does not really apply, because there is no real ought.
The set of morals we ended up with is probably best summarized by the Golden Rule, which is a useful illusion in the same way that free will is, and similarly, for all practical purposes we can treat it as if it were real.
[ It’s an interesting thought experiment to consider whether there could be other, radically different sets of morals that would lead to the same or better evolutionary fitness, while still being ‘evolutionarily feasible’. ]
What about ethics? It seems that many people think there is some ‘moral bedrock’ somewhere—but is there really such a thing?
No. Next? :P
(Ok, to be fair ethics is actually a really good example. Quite possibly the best example, given that most of the other critical things are approximately reduced already. Just not that particular ethical question.)
Apply reductionism to itself.
Or to “rationalism”.
Or, to pick one currently popular example, to “blackmail”.
It seems to me that people often struggle to come up with a technical definition of some word which captures the “essence” of a concept. One particular example which I have some experience with is the definition of “life”. This activity can generate considerable emotion, and I don’t think that the reductionist explanation of “natural kinds” quite applies to this kind of dispute.
Maybe not reductionism vs essentialism in quite the way that Anna intends. But close enough to create confusion. In fact, I might advise Anna to attempt a taxonomy of different kinds of “essence” and different kinds of “reduction” so as to dispel some of the confusion.
You’re right—ethics should be on the list.
I’d had it there originally, and had then removed it on the theory that people’s persistent tendency to postulate an essential and irreducible ethics had more to do with folks having strong and non-truth-seeking motives on the subject than with empirical regularities of a sort that essences could help predict.
But on reflection, one’s goals are confusingly different from other sorts of phenomena, so maybe even without strong emotions folks would expect magical essences here.
Which list? The list of things successfully reduced, or the list of candidates for reduction that you are asking us to help you build?
You wondered why people seem to be confused by this posting. I think it is because there are two lists being discussed here, and you have been extremely unclear in your transitions between them.
Or maybe it is just me.