Talking about the truth-value of the assertion “murder is right” seems unjustified at this point, much like the truth-value of “rubbers help prevent pregnancy.” Is it true? Yes, on one reading of “rubbers”. Is it false? Yes, on another. When a word means different things within a conversation, it introduces ambiguity into many of the sentences containing that word. At that point it helps to set aside the ambiguous label and introduce more precise ones, which is why I introduced X1 and X2 in the first place.
I agree that the fact that X1 rejects murder doesn’t necessarily change just because X2 endorses it.
But I don’t agree that what X1 endorses is necessarily independent of what X2 endorses.
For example, if I don’t value the existence of Gorgonzola in the world in itself, but I do value your preferences being satisfied, then I value the existence of Gorgonzola IFF you prefer that Gorgonzola exist in the world.
To the extent that what I should do is a function of what I value, and to the extent that X2 relates to your preferences, X2 (what you call “right”) has a lot to do with what I should do.
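To put that in utility-function terms, here is a toy sketch of the dependence. The weights, the “eudaimonia” term, and the placeholder preferences are all invented for illustration; this is not a claim about what either of us actually values.

```python
# Toy sketch only: invented weights and placeholder preferences.
# "My" utility has no intrinsic Gorgonzola term, but it does have a term
# for "your" preferences being satisfied.

def your_utility(world: dict) -> float:
    # Suppose you intrinsically prefer that Gorgonzola exist.
    return 1.0 if world.get("gorgonzola_exists") else 0.0

def my_utility(world: dict) -> float:
    intrinsic = 1.0 if world.get("eudaimonia") else 0.0  # things I value in themselves
    via_you = 0.5 * your_utility(world)                  # my term for your satisfaction
    return intrinsic + via_you

with_gorgonzola = {"eudaimonia": True, "gorgonzola_exists": True}
without_gorgonzola = {"eudaimonia": True, "gorgonzola_exists": False}

# I end up preferring the Gorgonzola world, but only via the term for you;
# if you stopped preferring it, so would I.
print(my_utility(with_gorgonzola) > my_utility(without_gorgonzola))  # True
```

The IFF above is just this structure: since I have no intrinsic Gorgonzola term, the only route by which Gorgonzola enters my evaluation is through your preference.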
The assertion “murder is right”—by your definition of “right”, which is the only definition you should care about, being the person who formulates the question “what is right for me to do?”—has a value of TRUE precisely if X1 endorses murder. There’s nothing unjustified about saying that, since X1 was brought in specifically to name the thing your definition of “right” refers to.
I’ll grant that it’s perfectly possible that X1 might have a term in it (to borrow terminology from the utility function world) for other people’s terminal values. But if so, that’s a question of object-level ethics, not meta-ethics.
your definition of “right”, which is the only definition you should care about...
It is not clear to me that X1 is the only definition of “right” I should care about, even if it is mine… any more than thing-to-erase-pencil-marks-with is the only definition of “rubber” I should care about.
Regardless, whether I should care about other people’s definitions of these words or not, the fact remains that I do seem to care about them.
And I also seem to care about other people’s preferences being satisfied, especially the preferences that they associate with the emotional responses that lead them to talk about that preference being “right” (rather than just “my preference”).
Again, maybe I oughtn’t… though if so, it’s not clear to me why… but nevertheless I do.
...being the person who formulates the question “what is right for me to do?”
It may be relevant that this is not the only moral question I formulate. Other moral questions include “what is right for others to do?” and “what is right to occur?” Indeed, that last one is far more important to me than the others, which is one reason I consider myself mostly a consequentialist.
Maybe so. What follows from that?
Any question you could possibly want answered relating in any sense to “rightness” is not a question at all unless you have a definition of “right” in mind (or at least a fuzzy intuitive definition that you don’t have full access to). You want to know “what is right to occur”. You won’t get anywhere unless you have an inkling of what you mean by “right”. It’s built into the question that you are looking for the answer to your question. It’s your question!
Maybe you decide that X1 (which is what your definition of “right” refers to) includes, alongside things such as “eudaimonia” and “no murder”, “other humans getting what they value”. Then the answer to your question is that it’s right for people to experience eudaimonia, to not be murdered, and to get what they value. And the answer to “what should I do?” is that you should try to bring those things about.
Yes, that’s true.
Or maybe I decide that X1 doesn’t include other humans getting what they value, and that I’m only under the impression it does because X1 happens to include some of the things other humans value, or because X1 includes something similar to, but not quite identical with, other humans getting what they value, or for some other reason.
Either way, whichever of those things turns out to be the case, that’s what I should do… agreed (1).
Of course, in some of those cases (though not others), working out what that means in practice also requires knowing what other humans’ equivalents of X1 are. That is, if it turns out X1 includes you getting what you value as long as you’re alive, and what you value is given by X2, then as long as you’re alive I should bring about X2 as well as X1; once you are no longer alive, I should no longer bring about X2.
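A minimal sketch of that conditional structure, with stand-in contents for X1 and X2 (everything here is invented for illustration, not a description of anyone’s actual values):

```python
# Toy sketch: X1 and X2 are stand-ins with invented contents.

def x2_satisfied(world: dict) -> bool:
    # Whatever it is you value; a placeholder.
    return world.get("gorgonzola_exists", False)

def x1_satisfied(world: dict) -> bool:
    no_murder = not world.get("murder", False)
    # X1's term for you getting what you value applies only while you're alive.
    you_get_what_you_value = (not world.get("you_alive", False)) or x2_satisfied(world)
    return no_murder and you_get_what_you_value

# While you're alive, satisfying X1 requires satisfying X2 as well:
print(x1_satisfied({"you_alive": True, "murder": False, "gorgonzola_exists": False}))   # False
print(x1_satisfied({"you_alive": True, "murder": False, "gorgonzola_exists": True}))    # True
# Once you're no longer alive, the X2 term no longer binds:
print(x1_satisfied({"you_alive": False, "murder": False, "gorgonzola_exists": False}))  # True
```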
====
(1) Or, well, colloquially true, anyway. I should certainly prefer those things occurring, but whether I should do anything in particular, let alone try to do anything in particular, is less clear. For example, suppose there exists a particularly perverse agent A who is much more powerful than I am, and A is such that A will bring about the things I value IFF I make no efforts whatsoever towards bringing them about myself. In that case, what I ought to do is make no efforts whatsoever towards bringing them about. It’s not clear that I’m capable of that, but whether I’m capable of it or not, it seems clear that it’s what I ought to do. Put another way, in that situation I should prefer to be capable of doing so, if it turns out that I’m not.
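For what it’s worth, here is the footnote’s scenario as a toy decision table. The payoffs are invented; the only structural assumption is the stated one, that A delivers the things I value IFF I make no effort towards them myself.

```python
# Toy sketch of the footnote's scenario. Payoffs are invented; the structure is
# as stated: A (far more capable than I am) brings about the things I value
# IFF I make no effort towards them myself.

def value_achieved(i_make_effort: bool) -> float:
    a_delivers = not i_make_effort                    # A acts exactly when I don't
    my_contribution = 0.1 if i_make_effort else 0.0   # my own efforts achieve little
    return (1.0 if a_delivers else 0.0) + my_contribution

for effort in (True, False):
    print(f"effort={effort}: value={value_achieved(effort)}")
# effort=True: value=0.1
# effort=False: value=1.0  -> by my own lights, "make no effort" is what I ought to do
```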