Lying about what? It is certainly common to blatantly lie when you want to cancel plans or decline an invitation. Some people think there should be social repercussions for these lies. But imo these sorts of lies are, by default, socially acceptable.
There are complicated incentives around punishing deliberate manipulation and deception much harder than motivated/unconscious manipulation and deception. In particular, you are punishing people for being self-aware. You can interpret ‘The Elephant in the Brain’ as a record of the myriad ways people engage in somewhat, or more than somewhat, manipulative behavior. Motivated reasoning is endemic. A huge amount of behavior is largely motivated by local ‘monkey politics’ and status games. Learning about rationality might make a sufficiently open-minded and intellectually honest person aware of what they are often doing. But it’s not going to make them stop doing these things.
Imagine that people on average engage in 120 units of deception: 20 units of conscious deception and 100 units of unconscious. People who take the self-awareness pill engage in 40 units of conscious deception and 0 units of unconscious deception. The latter group engages in far less deception overall (40 units versus 120) but twice as much ‘deliberate’ deception (40 units versus 20).
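To make the arithmetic explicit, here is a minimal sketch of the hypothetical (the “units” are made-up numbers for illustration, not measurements of anything real):

```python
# Toy model of the self-awareness pill hypothetical above.
# The "units" of deception are illustrative, not real data.
baseline = {"conscious": 20, "unconscious": 100}  # 120 units total
with_pill = {"conscious": 40, "unconscious": 0}   #  40 units total

print(sum(baseline.values()), "->", sum(with_pill.values()))  # 120 -> 40: 3x less deception overall
print(with_pill["conscious"] / baseline["conscious"])         # 2.0: twice as much 'deliberate' deception
```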
I have two main conclusions. First, I think seeing people, and yourself, clearly requires an increased tolerance for certain kinds of bad behavior. People are not very honest, but cooperation is empirically possible. Ray commented below: “If someone consciously lies* to me, it’s generally because there is no part of them that thinks it was important enough to cooperate with me”. I think that Ray’s comment is false. Second, I think it’s bad to penalize ‘deliberate’ bad behavior so much more heavily. What is the point of penalizing deception? Presumably much of the point is to preserve the group’s ability to reason. Motivated reasoning and other forms of non-deliberate deception and manipulation are arguably at least as serious a problem as blatant lies.
I know of a set of norms where, if you want to decline an invitation, you can make up an excuse, and if it is discovered that the excuse doesn’t hold water, it’s no big deal. I have also seen norms where a flat, unexplained “no” is treated as a bigger faux pas than an obviously false excuse. Personally, I’m inclined not to fabricate conflicting plans when I have none, and laying the “unwillingness” bare like that seems to get on some people’s nerves. It might be that some people value “keeping face” more than honesty. But there is a funny effect: if somebody has just announced an excuse not to go at some specific time, and you suggest another activity for the exact same time, they might declare themselves free to do it. If everybody is “in on the game” that it’s all “face”, this doesn’t seem to cause problems.
I do have a problem where, if somebody makes what sounds like a plain-English claim of fact, I tend to take it as a claim of fact, and I have trouble when it’s in fact something else, like an invitation rejection. I also have a poor understanding of why people value “face culture”, and have trouble imagining what kinds of things would go wrong in a “faceless” dystopia.
I agree that the way I phrased that comment was wrong. There’s a fairly narrow concept I was trying to point at, which is easily confused with other concepts.
(this is also a domain that I still feel overall confused about, and my main claim in the current conversational zeitgeist is that naively pushing “let’s use words for their literal meanings without regard for connotation” is likely to cause unnecessary damage, and that getting to a world where we can rationally talk about many cognitive errors in public requires a solid foundation of game theory, and common knowledge about how to re-implement it)
I edited my original statement to say “epistemically cooperate”, which is what I meant. If I’m working at a marketing firm and people regularly lie to customers and there’s office politics that involve lying to each other all the time, then I probably wouldn’t expect epistemic cooperation at all, but I would expect various other kinds of cooperation.
Also note the asterisk after “consciously lies*”. There are lots of small lies which aren’t primarily about deception so much as social protocol, and, well, I actually think everyone DOES know (mostly) that “I can’t make the event, I’m busy” is code for “I can’t make it for a reason that I don’t want to share with you”.
(This does still mean that I can’t epistemically cooperate with people who do that as easily. A thing I like about the Berkeley rationalist community is that it’s more socially acceptable to say “sorry, can’t make it. I’m too depressed” or “too over-socialled”, which in some cases allows for better cooperation on what activities you can do. For example “oh, well how about instead of going to the party we just silently read next to each other”)
But part of what I’m pushing back against with this post (and a series of ongoing conversations) is naively using words with heavy connotations, as if they did not have those connotations.
My sense is when [most] people actually use the phrase “Bob lied”, they mean something closer to “Bob had an affair and then lied about it” or “Bob told me the car had 10,000 miles on it but actually it had 100,000 miles on it.”
When Bob says “I can’t make it to the party, I’m sick”, or “this project is going to save the world!”, people instead either don’t say anything about it at all, or call it a “white lie”, or use different words entirely like “Bob exaggerated.”
The point of this post is to serve as a pointer towards ways we can improve clear communication, without trampling over Chesterton fences.
[brief note for now: I agree with your point about “if you gain 40 points of self awareness it doesn’t make sense to penalize that.” I have more thoughts about it, but it will be a while before I write them up]
I think if we try to stop punishing deception altogether, we are missing out on a good solution to the prisoner’s dilemma.
It’s reasonable (though not obvious) that we don’t punish for unconscious deception. And you also make a good point that we shouldn’t punish for self-awareness.
But I think an important distinction has to be made between self-awareness and self-control. I am aware of many things, but I don’t necessarily have active control over all of them, mostly because that would require a different level of mental energy.
In my book, a controlled and deliberate lie is much worse than an unconscious one, or even a lie you are simply aware of.
You could say that a “lie” is worse the less mental effort it would have required of the “liar” to avoid it.
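One rough way to formalize that intuition (my own framing, not a standard formula) is to make a lie’s badness inversely proportional to its avoidance cost:

$$\text{badness}(\text{lie}) \propto \frac{1}{\text{effort\_to\_avoid}(\text{lie})}$$

so a lie that would have cost almost nothing to avoid scores as very bad, while one that could only be avoided at great mental cost scores as mild.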