You make good points, and your assessment seems entirely correct.
However, unlike us, they do not believe that any reliable evidence for or against the proposition can be gathered from verbal conversation. Instead, they look for non-verbal cues, as well as other behaviors (e.g., whether you'd recommend the play to others, or attend future plays).
This seems accurate, yes. Strangely, I remember reading/learning/realizing this before, but I seem to have forgotten it. How curious. Perhaps it is because the mode of communication you describe is so unnatural to me. (As I am on the autism spectrum.)
I am unsure how to apply all of this to the moral status of behaving the way moridinamael describes...
Strangely, I remember reading/learning/realizing this before, but I seem to have forgotten it.
I have not internalized this point, either, and thus I have to continually remind myself of it during every conversation. It can be exhausting, and sometimes I fail and slip up anyway. I don’t know where I am on the autism spectrum; perhaps I’m just an introvert...
I am unsure how to apply all of this to the moral status of behaving the way moridinamael describes...
Yeah, it's a tough call. Personally, I think his behavior is either morally neutral, or possibly morally superior, assuming that people like ourselves are in the minority (which seems likely). That is to say: if you behaved in a way that felt natural to you, and moridinamael behaved in a way that felt natural to him, and both of you talked to 1000 random people, then moridinamael would hurt fewer people than you would (and, conversely, make more people feel better).
Of course, such ethics are situational. If those 1000 people were not random, but members of the hardcore rationalist community, then moridinamael would probably hurt more people than you would.
On the third hand, moridinamael indicates that he can’t help but behave the way he does, so that adds a whole new layer of complexity to the problem...
Your analysis of the ethics involved is valid if you only take harm / comfort into account, but one aspect of my own morality is that I value truth intrinsically, not just for its harm/help consequences. So I don’t think it’s as simple as counting up how many people are hurt by our utterances.
If you value truth intrinsically, then reducing your ability to approach it would hurt you, so I think my analysis is still applicable to some extent.
But you are probably right, since we are running into the issue of implicit goals. If I am a paperclip maximizer, then, from my point of view, any action that reduces the projected number of future paperclips in the world is immoral, and there's probably nothing you can do to convince me otherwise. Similarly, if you value truth as a goal in and of itself, regardless of its instrumental value, then your morality may be completely incompatible with the morality of someone who (for example) only values truth as a means to an end (i.e., achieving his other goals).
I have to admit that I don't know how to resolve this problem, or whether it has a resolution at all.