My guess is that you have an unrealistic picture of what ordinary lying is like. When I lie, it’s usually an automatic response (like most speech), said reflexively based on the social situation I’m in. (Think, “do I look fat in this?”) I can “catch myself” afterwards or during the process, but the response itself is generated by system 1.
Using system 2 while lying is usually a mistake, because it seems unnatural. If system 2 is used for lying, it’s usually offline: telling yourself a certain story before going into a social situation, so that the responses can come automatically. Having to use system 2 to lie during a conversation is a kind of failure mode.
There are extreme cases like faking entire datasets, which are pretty rare.
Hmm. It occurs to me that lying might be a domain that’s particularly prone to typical mind fallacy because people rarely share information about their lying habits. (See “Typical Sex Life Fallacy”)
Some categories of experiences I can recall, which I think fall on a spectrum from deliberate to unconscious:
Lying while surprised.
As a teenager, my dad suddenly asked me “have you done Forbidden Activity?” at a time when I wasn’t expecting it. “Oh shit,” I thought. I said “no.”
[in this case, I was explicitly not epistemically cooperating with my dad. My understanding from Blatant Lying is the Best Kind is that this was simulacrum 2 behavior]
Rehearsing a narrative
Perhaps most similar to your experience: talking to a prospective employer at an interview, and realizing they’re about to ask me about X and the truest answer to X is pretty unflattering to myself. Rehearsing a narrative in my head to prepare for that moment, trying to come up with a story that’s true-ish enough that I can justify it to myself, so that by the time they ask me about X I can bullshit my way through it fluently.
[This seemed mostly like playing a simulacrum 4 game with fairly established rules about what is acceptable]
Reflexive lying that’s easy to notice
If someone asks “does this dress make me look fat” and I say “no, you look great!”, or someone asks “how’s your project coming along” and I say “great!”, when in fact she doesn’t look great and/or my project is not going great, it’s usually obvious to me almost immediately, even if I believed it (or wasn’t really paying attention one way or the other) at the moment I said “great!”.
This feels on the edge of the lying/motivated-cognition spectrum, and it seems reasonable to me to classify it as a lie.
Even if the first instance was unconscious, if the conversation continues about how my project is going, subsequent statements are probably deliberate S2 lies, or there is clear, continuous S2 thinking about how to maintain the “things are great!” narrative.
[where this falls on the simulacrum spectrum depends a bit on context, I could see it being level 3 or level 4]
Reflexively bad arguments
Sometimes someone says “Policy X is terrible!” and I think “no, Policy X is good! Without Policy X the entire project is doomed!”. And, well, I do think that without Policy X the project is going to be much harder and failure more likely. But the statement was clearly politically motivated. “My preferred policy is absolutely essential” probably isn’t true.
A few years ago, I probably would not even have noticed that “without Policy X the project is doomed” is a bad argument. A few years later (with much deliberate practice in noticing motivated cognition under my belt), I’m capable of noticing that “this was a bad argument with the flavor of political motivation” within a few minutes. If we’re talking in person, that’s probably too long for me to catch it in time. In email or blogpost form, I can usually catch it.
[This seems like the sort of level 3 simulacrum thing that can strengthen the level 3-ness of the conversation. I don’t actually think it’s useful to think of simulacrum levels moving in a particular order, so I don’t think it’s usually accurate to say that this is moving the dial from 2 to 3, but I do think it makes it harder to get from 3 to 1]
This study found that 60% of students at UMASS lied at least once in a 10 minute conversation: https://www.eurekalert.org/pub_releases/2002-06/uoma-urf061002.php
And that of those who lied, many were surprised by how often they had lied. I would not be surprised if this is true for many people (they lie at least once every ten minutes and would be surprised at how often they lie).
When I specifically started paying attention to little white lies (in particular, I found that I often reflexively exaggerated to make myself look good or prevent myself from looking bad) I found that I did it WAY more often than I thought. Once I got to a point where I could notice in the moment, I was able to begin correcting, but the first step was just noticing how often it occurred.
That link doesn’t have enough information to find the study, which is likely to contain important methodological caveats.
Here’s the study: https://sci-hub.tw/10.1207/S15324834BASP2402_8
I think the methodology is fairly OK for this sort of high level analysis, except of course for it being all university students from UMASS.
I haven’t thought about this topic much and don’t have a strong opinion here yet, but I wanted to chime in with some personal experience which makes me suspect there might be distinct categories:
I worked in a workplace where lying was commonplace, conscious, and system 2. Clients asking if we could do something were told “yes, we’ve already got that feature” (we hadn’t) and “we already have several clients successfully using that” (we didn’t). Others were invited to be part of an “existing beta program” alongside others just like them (in fact, they would have been the very first). When I objected, I was told “no one wants to be the first, so you have to say that.” Another time, they denied that they ever lied, but they did, and it was more than motivated cognition. There is a vast gulf between “we’ve built this feature already” and “we haven’t even asked the engineers what they think,” and no amount of motivated cognition bridges it. It’s less work than faking data, but it’s no more subtle.
Motivated cognition is bad, but some people are really very willing to abandon truth for their own benefit in a completely adversarial way. The motivated cognition comes in to justify why what they’re doing is okay, but they have a very clear model of the falsehoods they’re presenting (they must in order to protect them).
I think they lie to themselves that they’re not lying (so that if you search their thoughts, they never think “I’m lying”), but they are consciously aware of the different stories they have told different people, and of the ones that actually constrain their expectations. And it’s such a practiced way of being that even though it involves System 2, it’s fluid: each context activates which story to tell, etc., in a way that appears natural from the outside. Maybe that’s offline S2, online S1? I’m not sure. I think people who interact like that have a very different relationship with the truth than do most people on LW.