That may be true, but I don’t think that accounts for what is meant by “moral realism”. Yes, it’s a confused term with multiple definitions, but it usually means that there is a certain utility function that is normative for everyone—as in you are morally wrong if you have a different utility function.
I think this is more the distinction between “objectivism” and “subjectivism”, rather than between “realism” and “anti-realism”.
Let’s suppose that two moral agents find they are using different U-functions to evaluate consequences. Each agent describes their U as just “good” (simpliciter) rather than as “good for me” or as “good from my point of view”. Each agent is utterly sincere in their ascription. Neither agent has any inconsistency in their function, or any reflective inconsistency (i.e. neither discovers that, under their existing U, it would be better to adopt some other function U’ instead). Neither can be persuaded to change their mind, no matter how much additional information is discovered.
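One way to make the “no reflective inconsistency” condition precise (a minimal sketch; the expectation notation and the “keep/switch” framing are my own gloss, not anything from the original): an agent acting on utility function $U$ is reflectively consistent when, for every alternative function $U'$,

$$\mathbb{E}\big[\,U \mid \text{keep acting on } U\,\big] \;\ge\; \mathbb{E}\big[\,U \mid \text{switch to acting on } U'\,\big],$$

i.e. by its own current lights, the agent never expects to do better in $U$-terms by self-modifying to some other function.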
In that case, we have a form of moral “subjectivism”—basically each agent has a different concept of good, and their concepts are not reconcilable. Yet for each agent there are genuine facts of the matter about what would maximize their U, so we have a form of moral “realism” as well.
Agree though that the definitions aren’t precise, and many people equate “objectivism” with “realism”.