Meta-ethics: moral realism or moral anti-realism?
[pollid:84]
[EDIT: The way I had initially described the distinction was misleading, as pointed out by thomblake. I apologize for potentially skewing the results of the poll, although I don’t think my revised version is that far off from the earlier version. Still, I should have been more careful.]
Moral realism: There are objective moral facts, i.e. there are facts about what is right and wrong (or good and bad) that are not constituted by a subject’s beliefs and desires.
Moral anti-realism: The denial of moral realism.
Is that right? I’ve understood that you can be a realist about subject-sensitive objective moral facts. Is that different from saying that the facts are “agent-relative”?
You’re right, my potted descriptions here are misleading. Certain forms of relativism are appropriately classified as realist. I’ll edit my descriptions.
Thanks! I was concerned I had it wrong.
Other: Depends on the level of the desires (object-level, meta-level, etc.)
My ‘Other’ answer is “Depends what you mean.”
From that article:
But in another sense, pluralistic moral reductionism is ‘anti-realist’. It suggests that there is no One True Theory of Morality.
But this is orthogonal to the question of moral realism: you can have realism just as well with or without moral universalism. So I think you’re just a moral realist.
I’ve never seen a completely satisfying reduction of moral facts, and I don’t know whether a successful reduction would be a vindicating or an eliminative one. Am I a moral realist or an anti-realist?
That sounds like “undecided” to me.
Other: While there are objective “moral facts,” this is only because we pack various subjective human values into the word ‘moral.’ Given the word ‘moral,’ there are certain facts about what such behavior is like, but they are not “out there in the world” and are highly contextual.
Lean toward: moral realism. My leanings are semantic. There is not one unified object or semantic value that all members of our linguistic community intend by ‘moral.’ So we must either choose one of the semantic values in advance, or leave all moral statements underdetermined. I think choosing a semantic value that naturalizes morality (e.g., ‘morality is behaving in accord with a decision procedure that optimizes for the overall preference satisfaction of all preference-bearers’) is much more useful and conducive to our goals than choosing one that forfeits all moral convictions to the Dark Side. If we want to win, we have to convince people in general to move toward a LessWrongier perspective; and convincing people without making any appeal to words like ‘good,’ ‘bad,’ ‘right,’ ‘wrong,’ ‘moral,’ or ‘immoral’ is a hell of a handicap.
Other: There is only one right moral criterion, but it is not written into the fabric of the universe or anything. It is the idealized criterion that humans refer to as “right”. It is not relative to belief or individual preferences.
What I care about (my utility function) includes this abstract morality, but also very relative things like “I like human females” and “I’m more important than you”.
I’d say accept moral anti-realism, except that that is too easily construed as relativism or some other philosophy.
Other: I think I’m a realist, but it really depends on the definition of “desires.”
For the record, I misread pragmatist’s definition. My answer, which was Lean toward moral realism, should have been Lean toward moral anti-realism. (I missed the “that are not agent-relative” part.)
Yeah, I’m not sure that part is correct, or it needs clarification.
I tend to view game strategies that lead to the best stable equilibrium as moral injunctions (tit for tat, cooperate first). These are provable (under certain assumptions), so I lean toward saying they are “real”.
Yeah, I have similar ideas. On the other hand, rules that are Nash equilibria in the current “environment” are in some ways determined by the preferences of the (by now, long dead) agents (and even their initial bargaining positions). I’m having a hard time deciding how to categorize this kind of “morality” (if it can, in truth, be called such a thing). I ended up going with “Lean toward: moral anti-realism”.