Trivers on Self-Deception
People usually have good guesses about the origins of their behavior. If they eat, we believe them when they say it was because they were hungry; if they go to a concert, we believe them when they say they like the music, or want to go out with their friends. We usually assume people’s self-reports of their motives are accurate.
Discussions of signaling usually make the opposite assumption: that our stated (and mentally accessible) reasons for actions are false. For example, a person who believes they are donating to charity to “do the right thing” might really be doing it to impress others; a person who buys an expensive watch because “you can really tell the difference in quality” might really want to conspicuously consume wealth.
Signaling theories share the behaviorist perspective that actions do not derive from thoughts, but rather that actions and thoughts are both selected behavior. In this paradigm, predicted reward might lead one to signal, while reinforcement of positive-affect-producing thoughts might create the thought “I did that because I’m a nice person.”
Robert Trivers is one of the founders of evolutionary psychology, responsible for ideas like reciprocal altruism and parent-offspring conflict. He also developed a theory of consciousness which provides a plausible explanation for the distinction between selected actions and selected thoughts.
TRIVERS’ THEORY OF SELF-DECEPTION
Trivers starts from the same place a lot of evolutionary psychologists start from: small bands of early humans who had grown successful enough that food and safety were less important determinants of reproduction than social status.
The Invention of Lying may have been a very silly movie, but the core idea—that a good liar has a major advantage in a world of people unaccustomed to lies—is sound. The evolutionary invention of lying led to an “arms race” between better and better liars and more and more sophisticated mental lie detectors.
There’s some controversy over exactly how good our mental lie detectors are or can be. There are certainly cases in which it is possible to catch lies reliably: my mother can identify my lies so accurately that I can’t even play minor pranks on her anymore. But there’s also some evidence that certain people can reliably detect lies from any source at least 80% of the time without any previous training: microexpressions expert Paul Ekman calls them (sigh...I can’t believe I have to write this) Truth Wizards, and puts their prevalence at about one in four hundred people.
The psychic unity of mankind should preclude the existence of a miraculous genetic ability like this in only one in four hundred people: if it’s possible, it should have achieved fixation. Ekman believes that everyone can be trained to this level of success (and has created the relevant training materials himself) but that his “wizards” achieve it naturally; perhaps because they’ve had a lot of practice. One can speculate that in an ancestral environment with a limited number of people, more face-to-face interaction and more opportunities for lying, this sort of skill might be more common; for what it’s worth, a disproportionate number of the “truth wizards” found in the study were Native Americans, though I can’t find any information about how traditional their origins were or why that should matter.
If our ancestors were good at lie detection—either “truth wizard” good or just the good that comes from interacting with the same group of under two hundred people for one’s entire life—then anyone who could beat the lie detectors would get the advantages that accrue from being the only person able to lie plausibly.
Trivers’ theory is that the conscious/unconscious distinction is partly based around allowing people to craft narratives that paint them in a favorable light. The conscious mind gets some sanitized access to the output of the unconscious, and uses it along with its own self-serving bias to come up with a socially admirable story about its desires, emotions, and plans. The unconscious then goes and does whatever has the highest expected reward—which may be socially admirable, since social status is a reinforcer—but may not be.
HOMOSEXUALITY: A CASE STUDY
It’s almost a truism by now that some of the people who most strongly oppose homosexuality may be gay themselves. The truism is supported by research: the Journal of Abnormal Psychology published a study that measured penile erection in 64 homophobic and nonhomophobic heterosexual men while they watched different types of pornography, and found significantly greater arousal to gay pornography among the homophobes. Although somehow this study has gone fifteen years without replication, it provides some support for the folk theory.
Since in many communities openly declaring oneself homosexual is low status or even dangerous, these men have an incentive to lie about their sexuality. Because their facade may not be perfect, they also have an incentive to take extra efforts to signal heterosexuality, for example by attacking gay people (something which, in theory, a gay person would never do).
Although a few now-outed gays admit to having done this consciously, Trivers’ theory offers a model in which this could also occur subconsciously. Homosexual urges never make it into the sanitized version of thought presented to consciousness, but the unconscious is able to deal with them. It objects to homosexuality (motivated by internal reinforcement: reduction of worry about personal orientation), and the conscious mind toes the party line by believing that there’s something morally wrong with gay people and that “only I have the courage and moral clarity to speak out against it.”
This provides a possible evolutionary mechanism for what Freud described as reaction formation, the tendency to hide an impulse by exaggerating its opposite. A person wants to signal to others (and possibly to themselves) that they lack an unacceptable impulse, and so exaggerates the opposite as “proof”.
SUMMARY
Trivers’ theory has been summed up by calling consciousness “the public relations agency of the brain”. It consists of a group of thoughts selected because they paint the thinker in a positive light, and of speech motivated in harmony with those thoughts. This ties together signaling, the many self-promotion biases that have thus far been discovered, and the increasing awareness that consciousness is more of a side office in the mind’s organizational structure than it is a decision-maker.
This.
I don’t know if latent homosexuality in homophobes is the best example, but I’ve definitely seen it in myself. I will sometimes behave in certain ways, for motives I find perfectly virtuous or justified, and it is only by analysing my behaviour post-hoc that I realize it isn’t consistent with the motives I thought I had—but it is consistent with much more selfish motives.
I think the example that most shocked me was back when I played an online RPG, and organised an action in a newly-coded environment. I and others on my team noticed an unexpected consequence of the rules that would make it easy for us to win. Awesome! We built our strategy around it, proud of our cleverness, and went forward with the action.
And down came the administrators, furious that we had cheated that way.
I was INCENSED at the accusation. How were we supposed to know this was a bug and not a feature? How dare they presume bad faith on our part? I loudly and vocally defended our actions.
It’s only later, as I was re-reading our posts on the private forum where we organised the action (posts that, I realized as I re-read them, the administrators had access to, and had probably read… please kill me now), that I noticed that not only did we discuss said bug, I specifically told everyone not to tell the administrators about it. At the time, my reasoning was that, well, they might decide to tell us not to use it, and we wouldn’t want that, right?
But if I’d thought there was a chance that the administrators would disapprove of us using the bug, how could I possibly think it wasn’t a bug, and that using it wasn’t cheating? If I was acting in good faith, how could I possibly not want to check with the administrators and make sure?
Well, I didn’t. I managed to cheat, obviously, blatantly, and had no conscious awareness I was doing so. That’s not even quite true; I bet if I’d thought it through, as I did afterwards, I would have realized it. But my subconscious was damn well not going to let me think it through now, was it?
And why would my subconscious not allow me to understand I was cheating? Well, the answer is obvious: so that I could be INCENSED and defend myself vocally, passionately and with utter sincerity once I did get accused of cheating. Heck, I probably did get away with it in some people’s eyes. Those that didn’t read the incriminating posts on the private forum, at least.
So basically, now I don’t take my motives for granted. I try to consider not only why I think I want to do something, but what motives one could infer from the actual consequences of what I want to do.
It also means I worry much less about other people’s motives. If conscious motives were a reliable guide to people’s actions, then someone who thinks they truly love their partner while their actions result in abuse might just be an unfortunate klutz with anger issues, who should be pitied and given second chances instead of dumped. But if the subconscious can have selfish motives and cloak them in virtue for the benefit of the conscious mind, then that person can have the best intentions and still be an abuser, and one should very much DTMFA.
On reflection, I think it’s highly likely that in the past I’ve gone out of my way to signal high intelligence (by learning memory tricks and “deep” quotations, displaying intellectual reading prominently, etc.) because on some level I suspected that I’m not actually very smart and yet I hugely value massive brainpower (alas, my parents praised me for “being smart”).
Interestingly (to me, anyway), I think that this has greatly diminished since I got involved with LessWrong. My belief is that interacting with actual extremely smart people made the whole thing seem silly, so I was able to get on with just trying to level up and not making such a big show about it.
That’s interesting.
Of course, it makes sense that signalling exceptional intelligence stops seeming like a worthwhile strategy when everyone in the community is perceived as equally or more intelligent, but it’s noteworthy and admirable that what replaced it was giving up on signaling altogether and concentrating on actual self-improvement, rather than the far more common (though less useful) tactic of signalling something else that was more reliably high-status in that community.
That’s pretty cool. Good for you!
You may have knowledge about this particular case I don’t, but unless we know XFrequentist is telling the truth rather than self-deceiving (or we know that there is a high probability of such) we shouldn’t give him positive reinforcement.
Agreed (although still appreciated, TOD)! I could easily be wrong.
The evidence I would call on to support my belief is that:
I spend more time actually working on stuff than I used to,
I get less flustered in situations where others’ perception of my intellect could suffer a hit (presentations, meetings, group conversations),
in discussion/argument, I feel less concerned whether or not I come off as intelligent,
I’ve observed fewer people telling me that I’m smart.
I can think of alternate explanations for all these observations though. I’ll ask folk at our next meetup whether they think this is accurate, and I’ll also ask a few people that have known me well for the past few years. The outside view is clearly more reliable here.
Every smart person has this tendency, really. From the inside, being smart doesn’t feel like there’s anything different about you. It just feels like intellectual tasks are easier. There’s no easy way to feel how hard it is for a not-smart person to learn or do something.
I think it’s more accurate to say “often irrelevant” than “false”.
I see at least two problems with this case study.
First, what sort of sampling bias is introduced by studying only men who are willing to view such materials? It seems highly implausible to me that this effect is zero.
Second, if true, this theory should generalize to other cases of people who express an exceptionally strong opposition towards some low-status/disreputable behavior that can be practiced covertly, or some low-status beliefs that can be held in secret. Yet it’s hard for me to think of any analogous examples that would be the subject of either folk theories or scientific studies.
In fact, this generalization would lead to the conclusion that respectable high-status activists who crusade against various behaviors and attitudes that are nowadays considered disreputable, evil, dangerous, etc., should be suspected that they do it because they themselves engage in such behaviors (or hold such attitudes) covertly. The funny thing is, in places and social circles where homophobia is considered disreputable, this should clearly apply to campaigners against homophobia!
There are a few other scientific results of this type: search the literature under “reaction formation”. For example:
Morokoff (1985): Women high in self-reported “sex guilt” have lower self-reported reaction to erotic stimuli but higher physiological arousal.
Dutton & Lake (1976): Whites with no history of prejudice and self-reported egalitarian beliefs were given bogus feedback during a task intended to convince them they were subconsciously prejudiced (falsely told that they had high skin response ratings of fear/anger when shown slides of interracial couples). After they had left the building, they were approached by either a black or white beggar. Whites who had received the false racism feedback gave more to the black beggar (though not to the white beggar) than whites who had not.
Sherman and Garkin (1980): Subjects were asked to solve a difficult riddle in which the trick answer involved sex-roles, such that after failing they felt “implicitly accused of sexism” (couldn’t find the exact riddle, but I imagine something like this). Afterwards they were asked to evaluate a sex-discrimination case. People who had previously had to solve the riddle gave harsher verdicts against a man accused of sexual discrimination than those who had not.
I’ve heard anecdotal theories of a few similar effects—for example, that the loudest and most argumentative religious believers are the ones who secretly doubt their own faith.
Overall I probably shouldn’t have included the case study because I don’t think Trivers’ theory stands or falls on this one point, and it’s probably not much more than tangential to the whole idea of a conscious/unconscious divide.
That’s extremely interesting—thanks for the references!
I’ve heard that any emotional response which causes an increase in blood pressure (including anxiety, anger, or disgust) will tend to increase penile circumference (which is what was measured in the homophobia study). This was discussed recently on Reddit (e.g., this comment).
Would this have an effect on the difference between homophobes and non-homophobes? Intuitively, it should have a uniform effect across the board, so that the comparison of differences is still valid (though what Unnamed mentions in response to the parent undermines this), but this is hard to know without checking.
Silly example from my life. When I was three, I liked a girl named Katy in my Sunday school class. My greatest fear was that someone else would know. So I decided that I would be mean to Katy. I also realized that if I treated her differently, someone might read into that that I liked her. So I started treating all the girls in my Sunday school class horribly. And kept it going (consistency bias) until I was twelve. There were so many times that I wasn’t even sure myself if I liked or hated girls, since I always said I hated them, even though I had crushes on most of the ones I knew.
I found Modularity and the Social Mind: Are Psychologists Too Self-Ish? to be an excellent article relating to this. It also considerably helps question the concept of unified preferences.
And it also has plenty of other LW-related stuff and intriguing ideas packed into a very small space. It covers (and to me, clarifies) various ideas, from modularity of mind, to the fact that having inconsistent beliefs need not cause dissonance, to our consciousness not being optimized for having true beliefs and being the PR firm instead of the president, to the fact that any of our beliefs/behaviors that are not subjected to public scrutiny shouldn’t be expected to become consistent. Very much recommended.
Abstract: A modular view of the mind implies that there is no unitary “self” and that the mind consists of a set of informationally encapsulated systems, many of which have functions associated with navigating an inherently ambiguous and competitive social world. It is proposed that there are a set of cognitive mechanisms—a social cognitive interface (SCI)—designed for strategic manipulation of others’ representations of one’s traits, abilities, and prospects. Although constrained by plausibility, these mechanisms are not necessarily designed to maximize accuracy or to maintain consistency with other encapsulated representational systems. The modular view provides a useful framework for talking about multiple phenomena previously discussed under the rubric of the self.
This doesn’t follow. Just because it’s not a complex genetic adaptation doesn’t mean it’s environmental. Lie-detection ability might just be an additive-effect quantitative trait like height or IQ, with truth-wizardry being the extreme right tail. This is consistent with evolutionary genetics, as Eliezer’s psychic unity point applies only to adaptations with multiple interdependent (and therefore non-additive) genetic parts.
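As a rough illustration of the “extreme right tail” point: on the purely illustrative assumption that lie-detection ability is normally distributed, a one-in-four-hundred prevalence corresponds to roughly the top 2.8 standard deviations of the trait, which is rare but entirely ordinary for an additive polygenic trait. A minimal sketch of that arithmetic:

```python
# Purely illustrative: if lie-detection skill were a normally distributed,
# additive trait (like height or IQ), how far out in the tail is 1 in 400?
from scipy.stats import norm

prevalence = 1 / 400
cutoff_sd = norm.isf(prevalence)  # inverse survival function of the standard normal
print(f"1-in-400 corresponds to roughly the top {cutoff_sd:.2f} standard deviations")
# -> about 2.81 SD above the mean
```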
I have a Scientific American that claims this has turned out to be false. I’ll try to find it and post back.
That sounds pleasant enough that it makes me wish I belonged to Trivers’ species.
I cannot remember where, but I’m fairly sure I’ve read that Ekman’s Truth Wizards are more likely to come from a background of childhood domestic violence. Google is failing me, though, so if anyone else can corroborate this (or alternatively let me know if it was spurious bullcrap I saw on Lie To Me), that would be appreciated.
Apparently, most of what one sees on Lie To Me is spurious. At any rate, viewing the show causes people to make more false positive identifications of deception relative to a control group, without being any more accurate at catching real deception:
The Impact of Lie To Me on Viewers’ Actual Ability to Detect Deception
You mean, you can’t detect lies by standing three inches from someone and squinting up their nostrils?
I don’t know if that’s true or in print, but I do remember it being mentioned on Lie To Me, in the context of Torres’ background. But at least one Truth Wizard believes it’s bunk, and I couldn’t find anything on Ekman’s blog about the subject one way or another.
See, I haven’t actually seen that much of the show, and I’ve definitely not seen that storyline. I still can’t seem to find anything to substantiate it, though, so provisionally chalking it down as spurious bullcrap seems safe.
That’s from the TV series, the story of one of the main characters, Ria Torres.
If I remember it right, it isn’t only supposed to be about the amount of practice. It’s important that you practice in an environment where you want to spot lies but expect people to tell the truth.
The practice in law enforcement, where the agent assumes that the person they are interrogating is guilty, isn’t enough. In contrast, the people in the Secret Service who guard important people get better practice: for any single person in the crowd, they assume by default that the person is innocent, but still check them to see if they might be guilty. As a result there are more “wizards” in the Secret Service than in law enforcement.
Does Trivers’ theory assert that the unconscious does not buy the flattering lies that the conscious mind tells itself? If so, has the assertion been tested?