Well, sure. That’s why I phrased my comment the way I did, referencing what I like/prefer/feel.
Yes, good point.
I sometimes have conversations for the purposes of entertainment, or validation, or comfort. … But. One thing I never want is to be entertained by lies[1]; to be validated with lies; to be comforted by lies.
I agree, and I feel the same way. However, I believe that you and I see conversations somewhat differently from other people.
When you and I engage in conversation (unless I misunderstood your position, in which case I apologize), we tend to take most of the things that are said at face value. So, for example, if you were to ask “did you like my play?”, what you are really asking is… “did you like my play?” And, naturally, you would feel betrayed if the answer is less than honest.
However, I’ve met many people who, when asking “did you like my play?”, really mean something like, “given my performance tonight, do you still consider me a valuable friend whose company you’d enjoy?” If you answer “no”, the emotional impact can be quite devastating.
The surprising thing, though (well, it was surprising to me when I figured it out), is that such people still care very much about the truth, i.e., whether you liked the play or not. However, unlike us, they do not believe that any reliable evidence for or against the proposition can be gathered from verbal conversation. Instead, they look for non-verbal cues, as well as other behaviors (e.g., whether you’d recommend the play to others, or attend future plays, etc.).
So, as I said above, these two types of people view the very purpose of everyday conversation very differently, and hence tend to evaluate its content quite differently as well.
You make good points, and your assessment seems entirely correct.
However, unlike us, they do not believe that any reliable evidence for or against the proposition can be gathered from verbal conversation. Instead, they look for non-verbal cues, as well as other behaviors (e.g., whether you’d recommend the play to others, or attend future plays, etc.).
This seems accurate, yes. Strangely, I remember reading/learning/realizing this before, but I seem to have forgotten it. How curious. Perhaps it is because the mode of communication you describe is so unnatural to me (as I am on the autism spectrum).
I am unsure how to apply all of this to the moral status of behaving the way moridinamael describes...
Strangely, I remember reading/learning/realizing this before, but I seem to have forgotten it.
I have not internalized this point, either, and thus I have to continually remind myself of it during every conversation. It can be exhausting, and sometimes I fail and slip up anyway. I don’t know where I am on the autism spectrum; perhaps I’m just an introvert...
I am unsure how to apply all of this to the moral status of behaving the way moridinamael describes...
Yeah, it’s a tough call. Personally, I think his behavior is either morally neutral, or possibly morally superior, assuming that people like ourselves are in the minority (which seems likely). That is to say: if you behaved in a way that felt natural to you, and moridinamael behaved in a way that felt natural to him, and both of you talked to 1000 random people, then moridinamael would hurt fewer people than you would (and, conversely, make more people feel better).
Of course, such ethics are situational. If those 1000 people were not random, but members of the hardcore rationalist community, then moridinamael would probably hurt more people than you would.
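To make the arithmetic behind those two paragraphs concrete, here is a minimal sketch in Python. Everything in it is an assumption of mine for illustration: the fraction of “literal” listeners, the idea that every mismatched listener is hurt equally, and the function name itself; none of this comes from the thread beyond the 1000-person framing.

```python
# Toy model of the expected-harm comparison above. All numbers are
# illustrative assumptions, not data: the fraction of "literal"
# listeners and the idea that each mismatched listener is hurt
# equally (and equally badly) are made up for the sake of the sketch.

def people_hurt(style: str, literal_fraction: float, n: int = 1000) -> float:
    """Crudely estimate how many of n listeners a conversational style hurts.

    Assumes a blunt, literal speaker hurts the non-literal listeners,
    while a tactful speaker hurts the literal ones.
    """
    literal = n * literal_fraction
    return (n - literal) if style == "blunt" else literal

# Random population: literal-minded people assumed to be a small minority.
print(people_hurt("blunt", 0.1))    # 900.0 hurt
print(people_hurt("tactful", 0.1))  # 100.0 hurt

# Hardcore rationalist community: the proportions plausibly flip.
print(people_hurt("blunt", 0.9))    # 100.0 hurt
print(people_hurt("tactful", 0.9))  # 900.0 hurt
```

The point of the sketch is just that the sign of the comparison flips with the audience, which is all the “situational ethics” claim above requires.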
On the third hand, moridinamael indicates that he can’t help but behave the way he does, so that adds a whole new layer of complexity to the problem...
Your analysis of the ethics involved is valid if you only take harm/comfort into account, but one aspect of my own morality is that I value truth intrinsically, not just for its harm/help consequences. So I don’t think it’s as simple as counting up how many people are hurt by our utterances.
If you value truth intrinsically, then reducing your ability to approach it would hurt you, so I think my analysis is still applicable to some extent.
But you are probably right, since we are running into the issue of implicit goals. If I am a paperclip maximizer, then, from my point of view, any action that reduces the projected number of future paperclips in the world is immoral, and there’s probably nothing you can do to convince me otherwise. Similarly, if you value truth as a goal in and of itself, regardless of its instrumental value, then your morality may be completely incompatible with the morality of someone who (for example) only values truth as a means to an end (i.e., achieving his other goals).
I have to admit that I don’t know how to resolve this problem, or whether it has a resolution at all.