Things can be misleading by accident or on purpose. Motivated cognition would fit under “on purpose”, since (by definition) it’s motivated.
What’s the difference between motivated errors and lies? Motivated errors are motivated, meaning they’re decision-theoretic (responsive to incentives, etc). The standard argument for compatibilist free will now applies: since it’s a decision made on the basis of consequences, it’s responsive to incentives, so agent-based models and social systems design (including design of norms) should treat the mind-part doing motivated cognition as an agent.
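To make the “responsive to incentives” claim concrete, here is a minimal toy sketch (my own illustration, not from the comment above): a motivated-cognition subagent modeled as choosing whether to distort a report based on expected payoff. The payoff, detection, and penalty numbers are arbitrary assumptions.

```python
# Toy sketch (illustrative only): a "motivated-cognition subagent" modeled as a
# decision-theoretic agent that distorts a report only when the expected payoff
# is positive. All parameter values are made-up assumptions.

def distorts(benefit: float, detection_prob: float, penalty: float) -> bool:
    """True if the subagent's expected value favors making the motivated error."""
    return benefit - detection_prob * penalty > 0

# The same subagent under different incentive regimes:
for p in (0.1, 0.5, 0.9):
    print(p, distorts(benefit=1.0, detection_prob=p, penalty=3.0))
# Prints: 0.1 True / 0.5 False / 0.9 False -- distortion stops as detection
# (or penalty) rises, which is what "responsive to incentives" means here.
```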
I think you are pointing to some difference between the two, but I’m not sure what it is. Maybe the difference is that motivated errors are more covert than lies: more plausible deniability (including narrative coherence) is maintained, and that deniability is preserved by keeping the process unconscious, while a plausible, semicoherent narrative is maintained in consciousness?
(Obviously, being very alarmed at things that are constantly happening is not a useful allocation of attention! But, that applies to deliberate lies too, not just motivated cognition.)
(See also, Dishonest Update Reporting)
Motivated errors are much less legible to the person who is motivated. The on-purpose-ness of motivated cognition is quite different from the on-purpose-ness of deliberate choice, and I think treating them as the same leads to important failures.
If someone consciously lies* to me, it’s generally because there is no part of them that thinks it was important enough to [edit: epistemically] cooperate with me. They specifically considered, with their System 2, and/or their entire subagent parliament, whether it was a good idea to lie to me, then they chose to do so. I have basically no further interest in attempting to cooperate with such a person unless they put a lot of work into repairing the damage.
When someone motivatedly rationalizes at the subconscious level, my sense of what’s happening is some combination of
a) no one is really in control and it makes more sense to model them as a random collection of atoms doing stuff than at the ‘agent’ abstraction. The random collection of atoms might respond to incentives, but naively applying incentives to them won’t necessarily work the way you want. (I mostly don’t think this is a useful frame here but noting it for completeness)
b) insofar as there is an agent there, it’s often the case that there are multiple subagents that are compartmentalized from each other. If they’ve made their way to the rationalsphere, read the sequences, etc, then it’s highly likely that at least one subagent highly endorses not being motivated. But that agent may not have conscious access to the fact that they are making the motivated error.
The priority, in my mind, for conversations among people striving to discuss beliefs empirically/rationally rather than as tribal affiliation or point scoring, should be to make sure the subagent that cares about truth remains in control. (Otherwise you’ve already failed, or dramatically increased the difficulty, of having a conversation that isn’t about tribal affiliation and point scoring)
[*minor point, but there’s a large class of lies, like jokes and stuff, that I’m not counting here]
Would you place motivated errors, generally, in the same bucket as confirmation bias type thinking?
They’re implemented by very different cognitive algorithms, which differently constrain the sorts of falsehoods and strategies they can generate.
Motivated cognition is exclusively implemented in pre-conscious mechanisms: distortion of attention, distortion of intuition, selective forgetting. Direct lying, on the other hand, usually refers to lying which has System 2 involvement, which means a wider range of possible mistruths and a wider (and more destructive) range of supporting strategies.
For example: A motivated reasoner will throw out some of their data inappropriately, telling themself a plausible but false story about how that data didn’t mean anything, but they’ll never compose fake data from scratch. But a direct liar will do both, according to what they can get away with.
My guess is that you have an unrealistic picture of what ordinary lying is like. When I lie, it’s usually an automatic response (like most speech), said reflexively based on the social situation I’m in. (Think, “do I look fat in this?”) I can “catch myself” afterwards or during the process, but the response itself is generated by system 1.
Using system 2 while lying is usually a mistake, because it seems unnatural. If system 2 is used for lying, it’s usually offline: telling yourself a certain story before going into a social situation, so the responses can come automatically. Having to use system 2 to lie during a conversation is a kind of failure mode.
There are extreme cases like faking entire datasets, which are pretty rare.
Hmm. It occurs to me that lying might be a domain that’s particularly prone to typical mind fallacy because people rarely share information about their lying habits. (See “Typical Sex Life Fallacy”)
Some categories of experiences I can recall, which I think fall on a spectrum of deliberateness to unconsciousness.
Lying while surprised.
As a teenager, my dad suddenly asked me “have you done Forbidden Activity?” at a time when I wasn’t expecting it. “Oh shit,” I thought. I said “no.”
[in this case, I was explicitly not epistemically cooperating with my dad. My understanding from Blatant Lying is the Best Kind is that this was simulacrum 2 behavior]
Rehearsing a narrative
Perhaps most similar to your experience: talking to a prospective employer at an interview, and realizing they’re about to ask me about X and the truest answer to X is pretty unflattering to myself. Rehearsing a narrative in my head to prepare for that moment, trying to come up with a story that’s true-ish enough that I can justify it to myself, so that by the time they ask me about X I can bullshit my way through it fluently.
[This seemed like lying as part of a simulacrum 4 game with fairly established rules about what is acceptable]
Reflexive lying that’s easy to notice
If someone asks “does this dress make me look fat” and I say “no you look great!”, or someone asks “how’s your project coming along” and I say “great!”, and no she doesn’t look great and/or my project is not going great, it’s usually obvious to me almost immediately, even if I believed it (or wasn’t really paying attention one way or another) at the moment that I said “great!”.
This feels on the edge of the lying/motivated-cognition spectrum, and it seems reasonable to me to classify it as a lie.
Even if the first instance was unconscious, if the conversation continues about how my project is going, subsequent statements are probably deliberate S2 lies, or there is clear, continuous S2 thinking about how to maintain the “things are great!” narrative.
[where this falls on the simulacrum spectrum depends a bit on context, I could see it being level 3 or level 4]
Reflexively bad arguments
Sometimes someone says “Policy X is terrible!” and I think “no, Policy X is good! Without Policy X the entire project is doomed!”. And, well, I do think that without Policy X the project is going to be much harder and failure more likely. But the statement was clearly politically motivated. “My preferred policy is absolutely essential” probably isn’t true.
A few years ago, I probably would not even have noticed that “without Policy X the project is doomed” is a bad argument. A few years later (with much deliberate practice in noticing motivated cognition under my belt), I’m capable of noticing that “this was a bad argument with the flavor of political motivation” within a few minutes. If we’re talking in person, that’s probably too long for me to catch it in time. In email or blogpost form, I can usually catch it.
[This seems like the sort of level 3 simulacrum thing that can strengthen the level 3-ness of the conversation. I don’t actually think it’s useful to think of simulacrum levels moving in a particular order, so I don’t think it’s usually accurate to say that this is moving the dial from 2 to 3, but I do think it makes it harder to get from 3 to 1]
This study found that 60% of students at UMASS lied at least once in a 10 minute conversation: https://www.eurekalert.org/pub_releases/2002-06/uoma-urf061002.php
And that of those who lied, many were surprised about how often they lied. I would not be surprised if this is true for many people (they lie at least once every ten minutes and would be surprised at how often they lie)
When I specifically started paying attention to little white lies (in particular, I found that I often reflexively exaggerated to make myself look good or prevent myself from looking bad) I found that I did it WAY more often than I thought. Once I got to a point where I could notice in the moment, I was able to begin correcting, but the first step was just noticing how often it occurred.
That link doesn’t have enough information to find the study, which is likely to contain important methodological caveats.
Here’s the study: https://sci-hub.tw/10.1207/S15324834BASP2402_8
I think the methodology is fairly OK for this sort of high level analysis, except of course for it being all university students from UMASS.
I haven’t thought about this topic much and don’t have a strong opinion here yet, but I wanted to chime in with some personal experience which makes me suspect there might be distinct categories:
I worked in a workplace where lying was commonplace, conscious, and system 2. Clients asking if we could do something were told “yes, we’ve already got that feature (we hadn’t) and we already have several clients successfully using it (we didn’t).” Others were invited to be part of an “existing beta program” alongside others just like them (in fact, they would have been the very first). When I objected, I was told “no one wants to be the first, so you have to say that.” Another time, they denied that they ever lied, but they did, and it was more than motivated cognition. There is a vast gulf between “we’ve built this feature already” and “we haven’t even asked the engineers what they think”, and no amount of motivated cognition bridges it. It’s less work than faking data, but it’s no more subtle.
Motivated cognition is bad, but some people are really very willing to abandon truth for their own benefit in a completely adversarial way. The motivated cognition comes in to justify why what they’re doing is okay, but they have a very clear model of the falsehoods they’re presenting (they must in order to protect them).
I think they lie to themselves that they’re not lying (so that if you search their thoughts, they never think “I’m lying”), but they are consciously aware of the different stories they have told different people, and the ones that actually constrain their expectations. And it’s such a practiced way of being that even involving System 2, it’s fluid. Each context activating which story to tell, etc., in a way that appears natural from the outside. Maybe that’s offline S2, online S1? I’m not sure. I think people who interact like that have a very different relationship with the truth than do most people on LW.
Attempting to answer more concretely and principled-ly about what makes sense to distinguish here
Reflecting a bit more, I think there are two importantly different situations to distinguish:
Situation A) Alice makes a statement, which is false, and either Alice knows beforehand it’s false, or Alice realizes it’s false as soon as she pays any attention to it after the fact. (this is slightly different from how I’d have defined “lie” yesterday, but after 24 hours of mulling it over I think this is the correct clustering)
Situation B) Alice makes a statement which is false, which to Alice appears locally valid, but which is built upon some number of premises or arguments that are motivated.
...
[edit:]
This comment ended up quite long, so a summary of my overall point:
Situation B is much more complicated than Situation A.
In Situation A, Alice only has one inferential step to make, and Alice and Bob have mutual understanding (although not common knowledge) of that one inferential step. Bob can say “Alice, you lied here” and have the conversation make sense.
In Situation B, Alice has many inferential steps to make, and if Bob says “Alice, you lied here”, Alice (even if rational and honest) needs to include probability mass on “Bob is wrong, Bob is motivated, and/or Bob is a malicious actor.”
These are sufficiently different epistemic states for Alice to be in that I think it makes sense to use different words for them.
...
Situation A
In situation A, if Bob says “Hey, Alice, you lied here”, Alice thinks internally either “shit I got caught” or “oh shit, I *did* lie.” In the first case, Alice might attempt to obfuscate further. In the second case, Alice hopefully says “oops”, admits the falsehood, and the conversation moves on. In either case, the incentives are *mostly* clear and direct to Alice – try to avoid doing this again, because you will get called on it.
If Alice obfuscates, or pretends to be in Situation B, she might get away with it this time, but identifying the lie will still likely reduce her incentives to make similar statements in the future (since at the very least, she’ll have to do work defending herself)
Situation B
In situation B, if you say “Hey Alice, you lied here”, Alice will say “what the hell? No?”.
And then a few things happen, which I consider justified on Alice’s part:
From Alice’s epistemic position, she just said a true thing. If Bob just claimed that true thing was a lie, Alice now has several major hypotheses to consider:
Alice actually said a false thing
maybe the argument that directly supports the claim involves faulty reasoning, or Alice is mistaken about the facts.
maybe somewhere in her background models/beliefs/ontology are nodes that are false due to motivated reasoning
maybe somewhere in her background models/beliefs/ontology are nodes that are false for non-motivated reasons
Alice actually said a true thing
Bob’s models/beliefs/ontology are wrong, because *Bob* is motivated, causing Bob to incorrectly think Alice’s statement was false
Bob’s models/beliefs/ontology are wrong, for non-motivated reasons
Bob is making some kind of straightforward local error about the claim in question (maybe he’s misunderstanding her or defining words differently from her)
Bob’s models are fine… but Bob is politically motivated. He is calling Alice a liar, not to help truthseek, but to cast aspersions on Alice’s character. (This might be part of an ongoing campaign to harm Alice, or just a random “Bob is having a bad day and looking to dump his frustration on someone else”.)
Alice said a partially true, partially false thing (or, some other variation of “it’s complicated”).
Maybe Bob is correctly noticing that Alice has a motivated component to her belief, but in fact, the belief is still true, and most of her reasoning is still correct, and Bob is factually wrong about the statement being a lie.
Maybe Alice and Bob’s separate world models are pointing in different directions, which is making different aspects of Alice’s statement salient to each of them. (They might both be motivated, or non-motivated.) If they talk for a while, they may both eventually learn to see the situation through different frames that broaden their understanding.
This is a much more complicated set of possibilities for Alice to evaluate. Incentives are getting applied here, but they could push her in a number of ways.
If Alice is a typical human and/or junior rationalist, she’s going to be defensive, which will make it harder for her to think clearly. She will be prone to exaggerating the probability of options that aren’t her fault. She may see Bob as socially threatening her – not as a truthseeking collaborator trying to help, but as a malicious actor out to harm her.
If Alice is a perfectly skilled rationalist, she’ll hopefully avoid feeling defensive, and will not exaggerate the probability of any of the options for motivated reasons. But over half the options are still “this is Bob’s problem, not Alice’s, and/or they are both somewhat confused together”.
Exactly how the probabilities fall out depends on the situation, and how much Alice trusts her own reasoning, and how much she trusts Bob’s reasoning. But even perfect-rationalist Alice should have nonzero probability on “Bob is the one who is wrong, perhaps maliciously, here”.
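As an illustration of that point (my own toy numbers, not anything from the thread): if Alice spreads probability mass across the hypotheses listed above, a majority of it can legitimately sit on branches other than “I made a motivated error”. The specific values below are arbitrary assumptions; only the structure matters.

```python
# Toy sketch with made-up numbers: Alice's probability mass over the hypotheses
# listed earlier, after Bob says "you lied". The structural point: even a
# well-calibrated Alice keeps substantial mass on "Bob is the one who is wrong".

hypotheses = {
    "my direct argument is faulty / facts wrong":         0.15,
    "a background node of mine is motivated":             0.15,
    "a background node of mine is wrong, not motivated":  0.10,
    "Bob's models are wrong (Bob is motivated)":          0.10,
    "Bob's models are wrong (not motivated)":             0.10,
    "Bob is making a local error / misunderstanding me":  0.15,
    "Bob is politically motivated in calling this a lie": 0.05,
    "it's complicated / partially true":                  0.20,
}
assert abs(sum(hypotheses.values()) - 1.0) < 1e-9

mass_on_alice_at_fault = sum(v for k, v in hypotheses.items()
                             if k.startswith(("my", "a background")))
print(f"mass on 'the error is on Alice's side': {mass_on_alice_at_fault:.2f}")
# Here ~0.40 -- the remaining ~0.60 is some version of "Bob is wrong, confused,
# or it's complicated", matching the claim that over half the options are not
# simply "Alice made a motivated error".
```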
And if the answer is “Alice’s belief is built on some kind of motivated reasoning”, that’s not something that can be easily resolved. If Alice is wrong, but luckily so, such that the chain of motivated beliefs is only 1-2 nodes deep, she can check whether those nodes make sense and maybe discover she is wrong. But...
if she checks 1-2 nodes deep and she’s not obviously wrong, this isn’t clear evidence, since she might still be using motivated cognition to check for motivated cognition
if Alice is a skilled enough rationalist to easily check for motivated cognition, going 1-2 nodes deep still isn’t very reassuring. If the problem was that “many of Alice’s older observations were due to confirmation bias, and she no longer directly remembers those events but has cached them as prior probabilities”, that’s computationally intractable to check in the moment.
And meanwhile, until Alice has verified that her reasoning was motivated, she needs to retain probability mass on Bob being the wrong one.
Takeaways
Situation B seems extremely different to me from Situation A. It makes a lot of sense to me for people to use different words or phrases for the two situations.
One confounding issue is that obfuscating liars in Situation A have an incentive to pretend to be in Situation B. But there’s still a fact-of-the-matter of what mental state Alice is in, which changes what incentives Alice will and should respond to.
I think I’ve lost the thread of your point. It seems a LOT like you’re looking at motivation and systemic issues _WAY_ too soon in situation B. Start with “I think that statement is incorrect, Alice”, and work to crux the disagreement and find out what’s going on. _THEN_ decide if there’s something motivated or systemic that needs to be addressed.
Basically, don’t sit bolt upright in alarm for situation B. That’s the common case for anything complicated, and you need to untangle it as part of deciding if it’s important.
(I edited the comment, curious if it’s clearer now)
Ah, sorry for not being clearer. Yes, that’s actually the point I meant to be making. It’s inappropriate (and factually wrong) for Bob to lead with “hey Alice you lied here”. (I was trying to avoid editorializing too much about what seemed appropriate, and focus on why the two situations are different.)
I agree that the correct opening move is “that statement is incorrect”, etc.
One further complication, though, is that it might be that Alice and Bob have talked a lot about whether Alice is incorrect, looked for cruxes, etc, and after several months of this Bob still thinks Alice is being motivated and Alice still thinks her model just makes sense. (This was roughly the situation in the OP.)
From Bob’s epistemic state, he’s now in a world where it looks like Alice has a pattern of motivation that needs to be addressed, and Alice is non-cooperative because Alice disagrees (and it’s hard to tell the difference between “Alice actually disagrees” and “Alice is feigning disagreement for political convenience”). I don’t think there’s any simple thing that can happen next, and [for good or for ill] what happens next is probably going to have something to do with Alice and Bob’s respective social standing.
I think there are practices and institutions one could develop to help keep the topic in the domain of epistemics instead of politics, and there are meta-practices Alice and Bob can try to follow if they both wish for it to remain in the domain of epistemics rather than politics. But there is no special trick for it.
I think it’s a little clearer in the comment, but I’m confused about the main post: in the case of subtle disagreements that _aren’t_ clearly wrong or intended to mislead, why do you want a word or concept that makes people sit up in alarm? Only after you’ve identified the object-level reasoning that shows it to be both incorrect and object-important should you examine the process-importance of why Alice is saying it (though in reality, you’re evaluating this, just like she’s evaluating yours).
The biggest confounding issue in my experience is that for deep enough models that Alice has used for a long time, her prior that Bob is the one with a problem is MUCH higher than that her model is inappropriate for this question. In exactly the same way that Bob’s beliefs are pointing to the inverse and defying introspection of true reasons for his beliefs.
If you’re finding this after a fair bit of discussion, and it’s a topic without fairly straightforward empirical resolution, you’re probably in the “agree to disagree” state (admitting that on this topic you don’t have sufficient mutual knowledge of each others’ rational beliefs to agree). And then you CERTAINLY don’t want a word that makes people “sit up in alarm”, as it’s entirely about politics which of you is deemed to be biased.
There are other cases where Alice is uncooperative and you’re willing to assume her motives or process are so bad that you want others not to be infected. That’s more a warning to others than a statement that Alice should be expected to respond to. And it’s also going to hit politics and backfire on Bob, at least some of the time. This case comes up a lot in public statements by celebrities or authorities. There’s no room for discussion at the object level, so you kind of jump to assuming bad faith if you disagree with the statements. Reaction by those who disagree with Paul Krugman’s NYT column is an example of this—“he’s got a Nobel in Economics, he must be intentionally misleading people by ignoring all the complexity in his bad policy advice”.
I’ll try to write up a post that roughly summarizes the overall thesis I’m trying to build towards here, so that it’s clearer how individual pieces fit together.
But a short answer to the “why would I want a clear handle for ‘sitting upright in alarm’” is that I think it’s at least sometimes necessary (or at the very least, inevitable), for this sort of conversation to veer into politics, and what I want is to eventually be able to discuss politics-qua-politics sanely and truth-trackingly.
My current best guess (although very lightly held) is that politics will go better if it’s possible to pack rhetorical punch into things for a wider variety of reasons, so people don’t feel pressure to say misleading things in order to get attention.
I don’t think I agree with any of that—I think that rational discussion needs to have less rhetorical punch and more specific clarity of proposition, which tends not to meet political/other-dominating needs. And most of what we’re talking about in this subthread isn’t “need to say misleading things”, but “have a confused (or just different from mine) worldview that feels misleading without lots of discussion”.
I look forward to further iterations—I hope I’m wrong and only stuck in my confused model of how rationalists approach such disagreements.