I don’t have much in the way of any general opposition to Carroll’s remarks in the quote you provide, but I do think Carroll characterizes relativism in a way that may be inaccurate, or at least incomplete. According to Carroll:
“If a community decides that something is moral, then it’s moral. That’s what it means to be moral. And people who are outside the community have nothing to say about that, have no rights, no leverage, to critique it.”
This may be true of some forms of moral relativism, but not of all forms, and not of the most defensible ones. Nothing about moral relativism prohibits the relativist from judging the moral actions of other people, or the cultural standards of other cultures, nor does relativism entail that they have no right or leverage to criticize those cultures. After all, the latter appear to be moral or at least normative claims themselves, and if you’re a relativist, you could reasonably ask: no right or leverage relative to what moral standard? The standards of the people or cultures I am judging, or relative to my own standards? A relativist does not have to think they can only judge people according to those people’s standards; they can endorse appraiser relativism, and think that they can judge others relative to their own standards.
One shortcoming in descriptions of moral relativism is that they frequently fail to distinguish between agent and appraiser relativism. Agent relativism holds that moral standards are true or false relative to the agent performing the action (or that agent’s culture). Appraiser relativism holds that moral standards are true or false relative to the moral framework of the agent (or the culture of the agent) judging the action in question. Here’s how the SEP distinguishes them:
“Appraiser relativism suggests that we do or should make moral judgments on the basis of our own standards, while agent relativism implies that the relevant standards are those of the persons we are judging.”
Many common depictions of relativism focus on agent relativism. And this seems consistent with Carroll’s description. Yet I suspect this emphasis stems from a tendency to characterize relativism in ways that seem to have more straightforward normative implications: people often reject relativism because it purportedly encourages or mandates indifference towards people with different moral standards. But this would be true of, at best, some forms of moral relativism. Incidentally, Gowans, the author of the SEP article on moral relativism, says:
“Appraiser relativism is the more common position, and it will usually be assumed in the discussion that follows.”
I don’t know if this is true. But if it is, there’s something odd about depictions of relativism that seem closer to agent relativism than appraiser relativism. Appraiser relativism can get you something pretty close to the kind of constructivism Carroll describes, so I don’t think the relativism/constructivism distinction was necessary here. Relativism itself has the resources to do what Carroll proposes.
Again, that isn’t the objection. The objection is that such judgements achieve nothing.

My goal with the remark is to accurately characterize relativism, not to defend it. If someone wants to object to relativism on the grounds that it doesn’t achieve anything, that’s orthogonal to the point I was making. I’m not really sure I understand the objection, though. When you say the judgments achieve nothing, can you clarify what you mean? If I judge others as doing something wrong, I’m not sure why it would be an objection to tell me that this doesn’t achieve anything. Would it avoid the objection by achieving something in particular? If so, what?
Ethics is supposed to do things, not be an ivory tower approach.
The symmetry problem, the fact that every relativist can equally criticise every other, is a bug not a feature.
Alice: Stop that, it’s wrong-for-me!
Bob: It’s OK by me, so I’m going to carry on.
Etc., ad infinitum.
If there is no reasoned way to resolve a dispute, force will take the place of reason. In fact, it’s a straw man to say that the realist objection to relativism is that relativists can’t criticise… The actual point is that it is in vain… No relativist has a motivation to change their mind.
This is a common feature of moral disputes even when no relativism is involved. Compare:
1. “You shouldn’t do that.” “It’s fine according to my values, and that’s all that matters.”
2. “You shouldn’t do that.” “Yes I should; you’re wrong about morality.”
If there’s an important difference between these that makes 1 problematic and 2 not, I’m failing to see it. In practice, the way you convince someone to change their behaviour is some combination of (a) appealing to moral ideas they do agree with you about and (b) influencing them not-explicitly-rationally to change their values (e.g., by exposing them to people they currently condemn so that they can see for themselves that they’re decent human beings). And both of these work equally well (or badly) whether either or both of the parties are moral realists.
1. “You shouldn’t do that.” “It’s fine according to my values, and that’s all that matters.”
2. “You shouldn’t do that.” “Yes I should; you’re wrong about morality.”
If there’s an important difference between these that makes 1 problematic and 2 not, I’m failing to see it.
1 is necessarily subjective, and 2 isn’t.
In practice, the way you convince someone to change their behaviour is some combination of (a) appealing to moral ideas they do agree with you about and (b) influencing them not-explicitly-rationally to change their values
Maybe in normie-land, but in philosophy you can go up meta levels.
Yes, 1 is necessarily subjective and 2 isn’t. But since what you were trying to do is to show that subjectivism is bad, it’s not really on to take “it’s subjective!” as a criticism.
Philosophers and other intellectual sorts may indeed be more open than normies to rational persuasion in matters of ethics. (So probably more of (a) and less of (b).) They’re also not much given to resolving their disagreements by brute force, realist or not, relativist or not, so your concern that “force will take the place of reason” doesn’t seem very applicable to them. Is there any evidence that philosophers who are moral realists are more readily persuaded to change their ethical positions than philosophers who are moral nonrealists? For what it’s worth, my intuition expects not.
I’ve already given the argument against subjectivism.

Your argument was that for subjectivists “such judgements achieve nothing” on the grounds that “every relativist can equally criticise every other”, because when criticized someone can say “It’s OK by me, so I’m going to carry on”, so that “force will take the place of reason” since “no relativist has a motivation to change their mind”.
I objected that this argument actually applies just as much to moral realists, the only difference being that the response changes from “It’s OK by me” to “It’s OK objectively”. No one is going to be convinced just by being told “X is wrong”; you have to offer some sort of argument starting from premises they share, and that’s exactly as true whether the people involved are realists or not, subjectivists or not, relativists or not. (Or, in either case, you can try to persuade by not-explicitly-rational means like just showing them the consequences of their alleged principles, or making them personally acquainted with people they are inclined to condemn, or whatever; this, too, works or fails just the same whether anyone involved is objectivist or subjectivist.)
When I made this objection, your reply was that “It’s OK by me” is “necessarily subjective” and “It’s OK objectively” isn’t. But if your argument against subjectivism depends on it being bad for something to be subjective then it is a circular argument.
Maybe that’s not what you meant. Maybe you were just doubling down on the claim that being “necessarily subjective” means there’s no hope of convincing anyone to change their moral judgements. But that’s exactly the thing I’m disagreeing with, and you’re not offering any counterargument by merely reiterating the claim I’m disagreeing with.
No one is going to be convinced just by being told “X is wrong”;
Obviously they are not, and that was not my argument.
you have to offer some sort of argument starting from premises they share, and that’s exactly as true whether the people involved are realists or not, subjectivists or not, relativists or not
I know.
But if your argument against subjectivism depends on it being bad for something to be subjective then it is a circular argument.
My argument was:
that for subjectivists “such judgements achieve nothing” on the grounds that “every relativist can equally criticise every other”
Yeah, that was your argument originally. But when I explained why I didn’t buy it you switched to “1 is necessarily subjective, and 2 isn’t” as if being subjective is known to be a fatal problem—but the question at issue is precisely whether being subjective is a problem or not!
Anyway: Anyone can equally criticize anyone, relativist or not, subjectivist or not, realist or not. Can you give some actual, reasonably concrete examples of moral disagreements in which moral nonrealism makes useful discussion impossible or pointless or something, and where in an equivalent scenario involving moral realists progress would be possible?
If I try to imagine such an example, the sort of thing I come up with goes like this. X and Y are moral nonrealists. X is torturing kittens. Y says “Stop that! It’s wrong!” X says “Not according to my values.” And then, if I understand you aright, Y is supposed to give up in despair because “every relativist can equally criticise every other” or something. But in practice, (1) Y need not give up, because maybe there are things in X’s values that Y thinks actually lead to the conclusion that one shouldn’t torture kittens, and (2) in a parallel scenario involving moral realists, the only difference is that X just says “No it isn’t”, and if Y wants not to give up here then they have to do the same as in the nonrealist scenario: find things X agrees with from which one can get to “don’t torture kittens”. And all the arguments are just the same in the two cases, except that in one Y has to be explicit about where they’re appealing to some potentially controversial matter of values. This is, it seems to me, not a disadvantage. (Those controversial matters are just as controversial for moral realists.)
Perhaps this isn’t the kind of scenario you have in mind. Or perhaps there’s some specific kind of argument you think realist-Y can make that might actually convince realist-X, that doesn’t have a counterpart in the nonrealist version of the scenario. If so, I’m all ears: show me the details!
I can think of one kind of scenario where progress is easier for realists. Kinda. Suppose X and Y are “the same kind” of moral realist: e.g., they are both divine command theorists and they belong to the same religion, or they are both hedonistic act-utilitarians, or something. In this case, they should be able to reduce their argument about torturing kittens to a more straightforwardly factual argument about what their scriptures say or what gives whom how much pleasure. But this isn’t really about realism versus nonrealism. If we imagine the nearest nonrealist equivalents of these guys, then we find e.g. that X and Y both say “What I choose to value is maximizing the net pleasure minus pain in the world”—and then, just as if they were realists, X and Y can in principle resolve their moral disagreement by arguing about matters of nonmoral fact. And if we let X and Y remain realists, but have them be “of different kinds”—maybe X is a divine command theorist and Y is a utilitarian—then they can be as utterly stuck as any nonrealists could be. Y says: but look, torturing kittens produces all this suffering! X says: so what? suffering has nothing to do with value; the gods have commanded that I torture kittens. And the difficulty they have in making progress from there is exactly the same sort of difficulty as their nonrealist equivalents would have.
(I remark that “It would be awful if X were true, therefore X is false” is not a valid form of argument, so even if you are correct about moral nonrealism making it impossible or futile to argue about morality that wouldn’t be any reason to disbelieve moral realism. But I don’t think you are in fact correct about it.)
Anyway: Anyone can equally criticize anyone, relativist or not, subjectivist or not, realist or not
Only in the ultimate clown universe where there are no facts or rules.
need not give up, because maybe there are things in X’s values that Y thinks actually lead to the conclusion that one shouldn’t torture kittens
But if those things are subjective, the same problem re-applies.
Perhaps this isn’t the kind of scenario you have in mind. Or perhaps there’s some specific kind of argument you think realist-Y can make that might actually convince realist-X, that doesn’t have a counterpart in the nonrealist version of the scenario. If so, I’m all ears: show me the details!
Any realist argument that could do that, so long as there is such a thing. I think your real objection is that there are no good realist arguments. But you can’t be completely sure of that. If there is a 1% chance of a successful realist argument, then rational debaters who want to converge on the truth should take that chance, rather than blocking it off by assuming subjectivism.
If you assume subjectivism, you are guaranteed not to get onto a realist argument. If you assume realism, there is a possibility, but not a guarantee, of getting onto a realist solution.
I remark that “It would be awful if X were true, therefore X is false” is not a valid form of argument
It’s entirely valid if you are constructing something. Bridges that fall down are awful, so don’t construct them that way.
I think that when you say “if those things are subjective, the same problem re-applies” you are either arguing in a circle, or claiming something that’s just false.
Suppose X is a moral nonrealist (but not a nihilist: he does have moral values, he just doesn’t think they’re built into the structure of the universe somehow), and he’s doing something that actually isn’t compatible with his moral values but he hasn’t noticed. Crudely simple toy example for clarity: he’s torturing kittens because he’s a utilitarian and enjoys torturing kittens, but he somehow hasn’t considered the kittens’ suffering at all in his moral reckoning. Y (who, let’s suppose, is also a moral nonrealist, though it doesn’t particularly matter) points out that the kittens are suffering terribly. X thinks about it for a while and agrees that indeed his values say he shouldn’t torture kittens, and reluctantly stops doing it.
This seems to me a perfectly satisfactory way for things to go, and in particular it is no less satisfactory than if X is a moral realist who believes that hedonistic utilitarianism is an objective truth and stops torturing kittens because Y convinces him that the objective truth of hedonistic utilitarianism implies the objective truth that one shouldn’t torture kittens, rather than “merely” that his own acceptance of hedonistic utilitarianism implies that he shouldn’t torture kittens.
“Oh, but instead of being convinced X could just say: meh, maybe you’re right but who cares? And then Y will have no good arguments.” Sure. But that’s an argument not against moral nonrealism but against moral nihilism: against not actually having any moral values of any sort at all.
“Oh, sure, X may be convinced, but that doesn’t count because it wasn’t a realist argument. Only realist arguments count.” Well, then your argument is perfectly circular: nonrealism is bad because nonrealists can’t make realist arguments. And, sure, I will gladly concede that if you take it as axiomatic that nonrealism is bad then you can conclude that nonrealism is bad, but so what?
No, my real objection is not that there are no good realist arguments. I’m not sure quite what you mean by that phrase, though.
If you mean arguments that start from only nonmoral premises and deduce moral truths then as it happens I don’t believe there are any; if there are then indeed moral realism is correct; but, also, if there are then they should have as much force for an intelligent and open-minded nonrealist (who will, on understanding the arguments, stop being a nonrealist) as for a realist.
If you mean arguments that assume realism but not anything more specific then I rather doubt that that assumption buys you anything, though I’m willing to be shown the error of my ways. At any rate, I can’t see how that assumption is ever going to be any use in, say, arguing that X shouldn’t be torturing kittens.
If you mean arguments that assume some specific sort of realism (e.g., that every moral claim in the New Testament is true, or that the best thing to do is whatever gives the greatest expected excess of pleasure over pain) then (1) these will have no more force for a realist who doesn’t accept that particular kind of realism than for a nonrealist and (2) they will have as much force for a nonrealist who embraces the same moral system (not very common for divine-command theories, I guess, but there are definitely nonrealist utilitarians).
Again: I would like to see a concrete example of how this is supposed to work. You say “any realist argument” but it seems to me that that’s obviously wrong for the reason I’ve already given above: “you shouldn’t torture kittens because hedonistic utilitarianism is objectively right and torturing kittens produces net excess suffering” is a realist argument, but it is exactly paralleled by “you shouldn’t torture kittens because you are a hedonistic utilitarian, and torturing kittens produces net excess suffering” which is a perfectly respectable argument to make to a nonrealist hedonistic utilitarian.
Of course I agree that I can’t be completely sure that there are no good realist arguments (whatever exactly you mean by that), or indeed of anything else. If a genuinely strong argument for moral realism comes along, I hope I’ll see its merits and be convinced. I’m not sure what I’ve said to make you think otherwise.
It seems to me that your last paragraph amounts to a wholehearted embrace of moral nonrealism. If moral realism versus nonrealism is something we are constructing, something we could choose to be one way or the other according to what gives the better outcomes—why, then, in fact moral realism is false. (Because if it is true, then we don’t have the freedom to choose to believe something else in pursuit of better outcomes, at least not if we first and foremost want our beliefs to be true rather than false.)