Eliezer is a realist; he’s just also an indexicalist. According to his theory, when you use the word “morality”, you refer to “Human!morality”, and there are objective facts about that. His theory just also says that when Clippy uses the word “morality”, it refers to “Clippy!morality” (about which there are also objective facts, which are logically independent of the facts about “Human!morality”). Just like when you say “water”, it refers to water, but when twin-you says “water”, it refers to XYZ.
I thought that when humans and Clippy speak about morality, they speak about the same thing (assuming that they are not lying and not making mistakes).
The difference is in connotations. For humans, morality has a connotation “the thing that should be done”. For Clippy, morality has a connotation “this weird stuff humans care about”.
So, you could explain the concept of morality to Clippy, and then also explain that X is obviously moral. And Clippy would agree with you. It just wouldn’t make Clippy any more likely to do X; the “should” emotion would not get across. The only result would be Clippy remembering that humans feel a desire to do X; and that information could be later used to create more paperclips.
Clippy’s equivalent of “should” is connected to maximizing the number of paperclips. The fact that X is moral is about as important to it as the existence of a specific paperclip is to us. “Sure, X is moral. I see. I have no use for this fact. Now stop bothering me, because I want to make another paperclip.”
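The picture above can be sketched as a toy model (all names invented for illustration): “Human!morality” and “Clippy!morality” are two logically independent evaluation functions, and each agent’s “should” hooks only into its own.

```python
# Toy sketch with invented names: two logically independent evaluation
# functions. Both agents can agree on the facts about either function;
# each is only *moved* by its own.

HUMAN_MORALITY = {"help the needy", "keep promises"}

def human_endorses(action):
    """Human!morality: the 'should' humans act on."""
    return action in HUMAN_MORALITY

def clippy_endorses(action):
    """Clippy!morality: endorse whatever makes paperclips."""
    return "paperclip" in action

# Clippy can agree that "help the needy" is Human!moral...
assert human_endorses("help the needy")
# ...without that fact engaging its own 'should':
assert not clippy_endorses("help the needy")
assert clippy_endorses("make another paperclip")
```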
According to his theory, when you use the word “morality”, you refer to “Human!morality”, and there are objective facts about that.
If this is a theory about what people mean when they say “morality”, then he is wrong about a significant percentage of people, as a matter of simple fact.
And what kinds of things are the things that people mean? Semantic entities, or entities in the world? If semantic, intensions or Kaplanian characters or something else?
This is not a rhetorical question. I have absolutely no clue what “mean” means when applied to people. (Actually, I don’t even know what it means when applied to words, but that case feels intuitively much clearer than people meaning something.)
By “mean” I mean (no pun intended) that when people say a word, they use it to refer to a concept they have. This can be a semantic entity, or a physical entity, or a linguistic entity elsewhere in the same sentence, or anything else the speaker has a mental concept of that they can attach the word to, and which they expect the listeners to infer by hearing the word.
To put it another way: people use words to cause the listener to think thoughts which correspond in a certain way to the ones the speaker thinks. The thoughts of the speaker, which they intend to convey to the listener, are what they mean by the words.
Please be patient, I’m out of my depth somewhat.
If I say to you “invisible pink unicorn” or “spherical cube”, I would characterise myself as not having successfully meant anything, even though, if I’m not paying attention, it feels like I did. Am I wrong? Am I confusing meaning with reference, or some such? It certainly seems to me that I am in some way failing.
If I say to you “invisible pink unicorn” or “spherical cube”, I would characterise myself as not having successfully meant anything, even though, if I’m not paying attention, it feels like I did.
In both examples I understand you to mean two (non-existent in the real world) items with a set of seemingly contradictory characteristics. So you did mean something. Not an object in the real world, but you meant the concept of an object containing contradictory characteristics, and gave examples of what “contradictory characteristics” are.
Indeed that meaning of contradiction is the reason “Invisible Pink Unicorn” is used to parody religion, etc.
Now if someone used the words without understanding that they are contradictory, or even believing the things in question are real—they’d still have meant something: An item in their model of the world. They’d be wrong that such an item really existed in the outside world, but their words would still have meaning in pinpointing said item in their mental model.
Hm, thoughts are tricky things, and identity conditions of thoughts are trickier yet. I was just trying to see if you had a better idea of what “mean” might mean than me. But it seems we have to get by with what little we have.
Because I share your intuition that there is something fishy about the referential intention in Eliezer’s picture. With terms like water, it’s plausible that people intend to refer to “this stuff here” or “this stuff that [complicated description of their experiences with water]”. With morality, it seems dubious that they should be intending to refer to “this thing that humans would all want if we were absolutely coherent etc.”
Group-level moral relativism just is the belief that moral truths are indexed to groups. Since relativism is uncontroversially opposed to realism, “indexical realist” is a bit of a contradiction.
“Indexicality” in the philosopher’s sense means that the reference of a word depends on who utters it in which circumstances. Putnam argues that “water” (and all other natural kind terms) has an indexical component because its reference depends on whether you or twin-you utters it.
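A minimal sketch of that Putnam/Kaplan point, with made-up names: a word’s “character” is a function from the context of utterance to its referent, so the very same word picks out different stuff on Earth and Twin Earth.

```python
# Hypothetical sketch of indexicality in the philosopher's sense:
# the character of a word maps a context of utterance to a referent.

def water_character(context):
    """The character of 'water': it refers to whatever watery stuff
    the speaker's environment contains."""
    return context["local_watery_stuff"]

earth = {"speaker": "you", "local_watery_stuff": "H2O"}
twin_earth = {"speaker": "twin-you", "local_watery_stuff": "XYZ"}

assert water_character(earth) == "H2O"       # your 'water' refers to H2O
assert water_character(twin_earth) == "XYZ"  # twin-you's refers to XYZ
```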
Which is about equivalent to claiming that anything might be relative, because it might be indexical along some unknown axis, in this case unobserved possible worlds. I’m afraid I don’t think that is very interesting.
What’s that concept of “relativity” you’re talking about, anyway? The proposition expressed by the sentence “clippy shouldn’t convert humans into paperclips”, uttered by a speaker of English in the actual world, is simply true. That the proposition expressed by the sentence varies depending on who utters it in which world is a completely different thing. There is no relativism about whether I am sitting at my desk just because I can report this fact by saying “I’m sitting at my desk” (which you can’t do, because if you said that sentence, you would be expressing a different proposition, one that’s about you, not me).
“clippy shouldn’t convert humans into paperclips”, uttered by a speaker of English in the actual world, is simply true.
Only if moral realism is also true. If the above sentence is false when uttered by Clippy, it has a truth value which is indexical to who is uttering it, meaning that moral realism is false.
There is no relativism about whether I am sitting at my desk just because I can report this fact by saying “I’m sitting at my desk”
It’s not relative, and it is indexical, because “I” is indexical. The point you are making is again, not interesting.
Yes, of course. I was illustrating how the theory works.
If the above sentence is false when uttered by Clippy, it has a truth value which is indexical to who is uttering it, meaning that moral realism is false.
No, it doesn’t. The thing is that on the view I’m talking about here, sentences don’t have truth-conditions, but propositions do. (Some) sentences express a proposition dependent on the context of utterance. Moral realism thus has to be the position that moral statements express propositions, because it wouldn’t make any sense otherwise—sentences don’t have truth-conditions anyway. When clippy says “One shouldn’t convert humans into paperclips”, he is simply not expressing the same proposition that I am expressing when I utter that sentence.
The point you are making is again, not interesting.
Then why exactly are you having a discussion that seems to be based on you not understanding concepts that you find “uninteresting”? I find your sense of “relative”, which seems to be “in any conceivable way dependent on anything”, pretty uninteresting, actually...
When clippy says “One shouldn’t convert humans into paperclips”, he is simply not expressing the same proposition that I am expressing when I utter that sentence.
Why shouldn’t the truth-value attach to a (proposition, context) tuple? Why, for that matter, shouldn’t it attach to a (sentence, language, context) tuple?
A (sentence, language, context) tuple uniquely determines a proposition, so I don’t mind if you attach a truth-value to that (relative to a world of evaluation, of course). But propositions don’t change their truth-value relative to a context by definition. A proposition is that thing which has a truth-value relative to a situation of evaluation.
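That two-step picture can be sketched as code (a hypothetical toy, not anyone’s actual semantics): the tuple fixes a proposition by resolving indexicals against the context, and only then does a world of evaluation assign a truth-value.

```python
def proposition_of(sentence, language, context):
    """(sentence, language, context) -> proposition.
    Resolve the indexical 'I' against the context of utterance; the
    result is a function from a world of evaluation to a truth-value."""
    assert language == "English", "toy model only handles English"
    resolved = sentence.replace("I am", context["speaker"] + " is")
    return lambda world: resolved in world["facts"]

world = {"facts": {"Alice is at the desk"}}
by_alice = proposition_of("I am at the desk", "English", {"speaker": "Alice"})
by_bob = proposition_of("I am at the desk", "English", {"speaker": "Bob"})

# Same sentence, different contexts, hence different propositions:
assert by_alice(world) is True
assert by_bob(world) is False
```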
But—see this comment—I may have been too charitable in interpreting “realism” as what is more properly called “cognitivism”. That’s because I can’t think of any other interpretation of “realism” that even makes any sense.
Cognitivism is compatible with the claim that moral statements have truth values that vary with the speaker (despite the lack of explicit indexicals, yadda yadda). The contrary claim is that they don’t. I don’t see why the one claim should be more readily comprehensible than its opposite.
The contrary claim is often called realism, although that muddies the water, since in addition to the epistemological claim it can be used to state the claim that moral terms have real referents.
“Cognitivism encompasses all forms of moral realism, but cognitivism can also agree with ethical irrealism or anti-realism. Aside from the subjectivist branch of cognitivism, some cognitive irrealist theories accept that ethical sentences can be objectively true or false, even if there exist no natural, physical or in any way real (or “worldly”) entities or objects to make them true or false.
There are a number of ways of construing how a proposition can be objectively true without corresponding to the world:
By the coherence rather than the correspondence theory of truth
In a figurative sense: it can be true that I have a cold, but that doesn’t mean that the word “cold” corresponds to a distinct entity.
In the way that mathematical statements are true for mathematical anti-realists. This would typically be the idea that a proposition can be true if it is an entailment of some intuitively appealing axiom — in other words, a priori analytical reasoning.
Crispin Wright, John Skorupski and some others defend normative cognitivist irrealism. Wright asserts the extreme implausibility of both J. L. Mackie’s error-theory and non-cognitivism (including S. Blackburn’s quasi-realism) in view of both everyday and sophisticated moral speech and argument. The same point is often expressed as the Frege-Geach Objection. Skorupski distinguishes between receptive awareness, which is not possible in normative matters, and non-receptive awareness (including dialogical knowledge), which is possible in normative matters.
Hilary Putnam’s book Ethics without ontology (Harvard, 2004) argues for a similar view, that ethical (and for that matter mathematical) sentences can be true and objective without there being any objects to make them so.
Cognitivism points to the semantic difference between imperative sentences and declarative sentences in normative subjects. Or to the different meanings and purposes of some superficially declarative sentences. For instance, if a teacher allows one of her students to go out by saying “You may go out”, this sentence is neither true nor false. It gives a permission. But, in most situations, if one of the students asks one of his classmates whether she thinks that he may go out and she answers “Of course you may go out”, this sentence is either true or false. It does not give a permission, it states that there is a permission.
Another argument for ethical cognitivism stands on the close resemblance between ethics and other normative matters, such as games. As much as morality, games consist of norms (or rules), but it would be hard to accept that it be not true that the chessplayer who checkmates the other one wins the game. If statements about game rules can be true or false, why not ethical statements? One answer is that we may want ethical statements to be categorically true, while we only need statements about right action to be contingent on the acceptance of the rules of a particular game—that is, the choice to play the game according to a given set of rules.”—WP
By the way, I suspect you call indexicality “uninteresting” because if it applies to “water”, then it probably applies to just about every word. This is true—but it is also why you should be happy to count Eliezer’s position as moral realism, or do you want to call yourself a relativist about water?
I am not saying water is indexical because of PWs or whatever. I am saying that cases of indexicality irrelevant to moral relativism are not interesting in the context of a discussion about moral relativism.
Clippy’s equivalent of “should” is connected to maximizing the number of paperclips. The fact that X is moral is about as important to it as the existence of a specific paperclip is to us. “Sure, X is moral. I see. I have no use for this fact. Now stop bothering me, because I want to make another paperclip.”
Oh, yes. I was using “moral” the same way you used “should” here.
So why do humans have different words for “would do it” and “should do it”?
If this is a theory about what people mean when they say “morality”, then he is wrong about a significant percentage of people, as a matter of simple fact.
What does it mean for something to be a theory about what people mean?
It means that the thing the theory tries to model, predict, and explain is “what do people mean”.
Another argument for ethical cognitivism stands on the close resemblance between ethics and other normative matters, such as games. As much as morality, games consist of norms (or rules), but it would be hard to accept that it be not true that the chessplayer who checkmates the other one wins the game. If statements about game rules can be true or false, why not ethical statements? One answer is that we may want ethical statements to be categorically true, while we only need statements about right action to be contingent on the acceptance of the rules of a particular game—that is, the choice to play the game according to a given set of rules.”—WP
Nothing in this is at all illuminating as to what on earth realism is supposed to be.
Do you understand what moral subjectivism is?
I am not saying water is indexical because of PWs or whatever. I am saying that cases of indexicality irrelevant to moral relativism are not interesting in the context of a discussion about moral relativism.
They are because they help to illustrate the theory.
No, Relativism is a type of Realism. You might be confusing it with Subjectivism.