As for plausible deniability, I suspect that it’s not always a question of wanting to maintain it vis-a-vis your conversation partner, but vis-a-vis a real or imagined third party/society/Big Other:
e.g., I may want to signal clearly to the customs officer that I’m willing to bribe him, but maintain plausible deniability in case his superiors are listening in, so as not to get in trouble. Or I may want to signal clearly to the girl at the bar that I’m flirting, but if rejected I want to be able to tell my friends that I was just complimenting her shirt and not really interested (or be able to retrospectively construct such a story in my own mind), so as to preserve my ego, etc.
I came to a similar conclusion when thinking about the phenomenon of “technically true” deceptions.
Most people seem to have a strong instinct to say only technically-true things, even when they are deliberately deceiving someone (and even when this restriction significantly reduces their chances of success). Yet studies find that the victims of a deception don’t much care whether the deceiver was being technically truthful. So why the strong instinct to do this costly thing, if the interlocutor doesn’t care?
I currently suspect the main evolutionary reason is that a clear and direct lie makes it easier for the victim to trash your reputation with third parties. “They said X; the truth was not-X; they’re a liar.”
If you only deceive by implication, then the deception depends on a lot of context that’s difficult for the victim to convey to third parties. The act of making the accusation becomes more costly, because more stuff needs to be communicated. Third parties may question whether the deception was intentional. It becomes harder to create common knowledge of guilt: Even if one listener is convinced, they may doubt whether other listeners would be convinced.
Thus, though the victim is no less angry, the counter-attack is blunted.
I think most people want to be able to tell themselves a story in which they act in a way that society sees as praiseworthy, or at least not too blameworthy.
I think there’s a blurry line between whether that general preference is about “self-image” versus “image of oneself from the perspective of an imagined third party”. I’m not even sure if there’s a line at all—maybe that’s just saying the same thing twice.
Anyway, lying is generally seen as bad and blameworthy in our culture (with some exceptions like “lying to the hated outgroup in order to help avert the climate crisis”, or “a white lie for the benefit of the hearer”, etc.). Spinning / being misleading is generally seen as OK, or at least less bad, in our culture—everyone does that all the time.
Given that cultural background, obviously most people will feel motivated to spin rather than lie.
But that just pushes the question back to “why is there a stronger cultural norm against lying than against spinning?” Probably just what you wrote—it’s harder to get caught and called out for spinning because of the “common knowledge of guilt” thing. It’s harder to police, so more people do it. And then everyone kinda gets inured to it and starts seeing it as (relatively) culturally acceptable, I think.
Separately, I kinda think there was never any reason to expect the listener’s preferences to enter the equation. If I cared so much about the listener’s preferences, I wouldn’t be trying to deceive them in the first place, right? Even if I nominally cared about the listener’s preferences, well, accurately seeing a situation from someone else’s perspective is hard and rare even under the best of circumstances (i.e., when you like them and know them well and they’re in the same room as you). The result you mention (I presume this paper) is not the best of circumstances—it asks survey takers to answer questions about imaginary vignettes, which is kinda stacking the deck even more than usual against caring about the listener’s preferences. (And maybe all surveys are BS anyway.)