I was surprised to see the high number of moral realists on Less Wrong.
Just a guess, but this may be related to the high number of consequentialists. For any given function U used to evaluate consequences (e.g. a utility function), there are facts about which actions maximize that function. Since what a consequentialist thinks of as a “right” action is whatever maximizes some corresponding U, there are (in the consequentialist’s eyes) moral facts about which actions are “right”.
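As a toy sketch of that point (the actions, outcomes, and weights below are entirely made up for illustration): once a particular U is fixed, which action maximizes it is a plain matter of fact.

```python
# Toy illustration (all actions, outcomes, and weights are invented):
# once a particular U is fixed, which action maximizes it is a matter of fact.

actions = {
    "donate":     {"lives_saved": 2, "money_kept": 0},
    "invest":     {"lives_saved": 0, "money_kept": 100},
    "do_nothing": {"lives_saved": 0, "money_kept": 50},
}

def U(outcome):
    # One possible consequentialist evaluation; the weights are arbitrary.
    return 1000 * outcome["lives_saved"] + outcome["money_kept"]

# Relative to this U, the "right" action is fully determined.
right_action = max(actions, key=lambda a: U(actions[a]))
print(right_action)  # -> donate
```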
Similar logic applies to rule consequentialism by the way (there may well be facts of the matter about which moral rules would maximize the utility function if generally adopted).
That may be true, but I don’t think that accounts for what is meant by “moral realism”. Yes, it’s a confused term with multiple definitions, but it usually means that there is a certain utility function that is normative for everyone—as in you are morally wrong if you have a different utility function.
I think this is more the distinction between “objectivism” and “subjectivism”, rather than between “realism” and “anti-realism”.
Let’s suppose that two moral agents find they are using different U-functions to evaluate consequences. Each agent describes their U as just “good” (simpliciter) rather than as “good for me” or “good from my point of view”. Each agent is utterly sincere in this ascription. Neither agent has any inconsistency in their function, or any reflective inconsistency (i.e. neither discovers that, under their existing U, it would be better for them to adopt some other function U′ instead). Neither can be persuaded to change their mind, no matter how much additional information is discovered.
In that case, we have a form of moral “subjectivism”—basically each agent has a different concept of good, and their concepts are not reconcilable. Yet for each agent there are genuine facts of the matter about what would maximize their U, so we have a form of moral “realism” as well.
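A minimal sketch of this “subjectivism plus realism” combination (the agents, actions, and weights are again invented for illustration): the two agents disagree about which action is “right”, yet for each of them there is a determinate fact about what maximizes their own U.

```python
# Two agents with different U's: they disagree about the "right" action, but for
# each agent there is a determinate fact about what maximizes *their* U.
# (Agents, actions, and weights are made up for illustration.)

actions = {
    "donate": {"lives_saved": 2, "money_kept": 0},
    "invest": {"lives_saved": 0, "money_kept": 100},
}

def U_alice(outcome):  # what Alice sincerely calls "good" (simpliciter)
    return 1000 * outcome["lives_saved"] + outcome["money_kept"]

def U_bob(outcome):    # what Bob sincerely calls "good" (simpliciter)
    return 10 * outcome["lives_saved"] + outcome["money_kept"]

for name, U in [("Alice", U_alice), ("Bob", U_bob)]:
    best = max(actions, key=lambda a: U(actions[a]))
    print(f"{name}: {best}")  # Alice: donate, Bob: invest
```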
Agree though that the definitions aren’t precise, and many people equate “objectivism” with “realism”.