a born nerd, who seeks to become even more rational, should allow themselves to lie, and give themselves safe occasions to practice lying, so that they are not tempted to twist around the truth internally
I’m starting to think that this is exactly correct.
As we all know, natural language sentences (encoded as pressure waves in the air, or light emitted from a monitor) aren’t imbued with an inherent essence of trueness or falseness. Rather, we call a sentence true when asserting it to a credulous human listener would improve the accuracy of that human’s model of reality. For many sentences, this is pretty straightforward (“The sky is blue” is true if and only if the sky is blue, &c.), but in other cases it’s more ambiguous—not because the sentence has an inherently fuzzy truth value, but because upon interpreting the sentence, the correspondence between the human’s beliefs and reality could improve in some aspects but not others; e.g., we don’t want to say “The Earth is a sphere” is false, even though the Earth is really more like an oblate spheroid and has mountains and valleys. This insight is embedded in the name of the site itself: “Less Wrong,” suggesting that wrongness is a quantitative rather than binary property.
But if sentences don’t have little XML tags attached to them, then why bother drawing a bright-line boundary around “lying,” making a deontological distinction where lying is prohibited but it’s okay to achieve similar effects on the world without technically uttering a sentence that a human observer would dub “false”? It seems like a form of running away from the actual decision problem of figuring out what to say. When I’m with close friends from my native subculture, I can say what I’m actually thinking using the words that come naturally to me, but when I’m interacting with arbitrary people in society, that doesn’t work as a matter of cause and effect, because (with high probability) I’m relying on a lot of concepts and vocabulary that my interlocutor hasn’t learned. If I actually want to communicate, I’m going to need a better decision criterion than my brain’s horrifyingly naive conception of honesty, and that’s going to take consequentialist thinking (guessing what words will produce what effect in the listener’s mind) rather than moralistic thinking (Honesty is Good, but Lying is Bad, so I’m not Allowed to say anything that could be construed as a Lie, because then I would be a Bad Person). The problem of what speech acts to perform in a given situation and the problem of having beliefs that correspond to reality are separate problems with different success criteria; it really shouldn’t be surprising that one can do better on both by optimizing them separately.
Looking back on my life, moralistic reasoning—thinking in terms of what I or others “should” do, without having a consequentialist reduction of “should”—has caused me a lot of unnecessary suffering, and it didn’t even help anyone. I’m proud that I had an internalized morality and that I cared about doing the Right Thing, but my conception of what the Right Thing was turned out to be really, really stupid and crazy, and people tried to explain to me what I was doing wrong, and I still didn’t get it. I’m not going to make that (particular) mistake again (in that particular form).