In response to the folks suggesting that our questions were just unclear, etc.:
I notice rationalization all the time too (in myself and in others); but there totally seem to be people who don’t ever notice it in themselves. Lots of them. Including both folks who seem never to have trained in rationality-type-stuff at all, and folks who have. I ignored my first counter-example, and my second, but not my third and fourth; especially after the fourth counter-example kindly allowed us to cross-examine them for some hours, to go try accosting strangers with weird questions and see if they noticed themselves rationalizing while approaching said strangers, etc.
Mercurial and Eliezer both suggested an analogy to the “thinking in words” vs. “thinking in images” thing; some people do one and others do the other, and many tend to assume that everyone must experience life the same way they do. We all updated toward thinking that there is some actual thing going on here—something we were initially not modeling.
But, I’m still confused about:
Whether we’re just still missing something obvious anyhow. Maybe our fourth counter-example, who consented to answering gobs of questions and trying experiments for us, was a fluke? (Try asking people yourself, please; don’t just say that it must be experimental error because you don’t work that way.)
Whether they don’t rationalize, or just don’t notice themselves rationalizing. (The fourth datapoint seemed to maybe actually never make up reasons for choices; we don’t really have data on the others.)
What exactly the boundaries are on “rationalizing”—what exactly it is that a sizable portion of the folks we’ve talked to never notice themselves doing.
So, I asked some people as you suggested, but I didn’t find anything as interesting as you did. Over the last few days I’ve asked 10 people if they “rationalise”, giving them just one example, and all of them immediately understood and spontaneously came up with valid examples of themselves doing so.
Incidentally, I quite often catch myself rationalising, but I really doubt accosting strangers with odd questions would trigger that in me. I’m not sure what else to suggest. Perhaps asking them when they last felt guilty? From the examples the people I mentioned above came up with, guilt seems to be a very strong trigger of rationalisation. An example: “I forgot to call my Mum on her birthday but I told myself she was really busy with the rest of the family”.
From the examples the people I mentioned above came up with, guilt seems to be a very strong trigger of rationalisation.
Perhaps rationalization is an adaptation that develops when people risk some kind of punishment for their irrationality.
We are irrational, and we already suffer the consequences of our irrationality. But if there is an additional penalty for admitting irrationality, it gives us an incentive to pretend that the irrational decision was in fact rational; to lie to others, and ultimately to lie to ourselves. Admitting irrationality can be very bad signalling.
How exactly does guilt become part of the equation? Probably through believing that there is no such thing as irrationality, and that people are always perfectly following their utility functions. So if you forgot to do something, it means you decided not to do it, because it gave you negative utility. So whenever your irrationality harms people around you, it means you hate them. (If your irrationality sometimes harms you, this can be explained away by saying that you didn’t really care about something, only pretended to.) From the outside view, our irrationality is not credible—it may be just a public act, while we are following our true preferences (defined circularly as “whatever we are actually following”, plus some possible secrets).
You seem to be conflating irrationality with “self-deception” here, and defining rationality in terms of a “utility function”. How is some idealized “utility function” any different from just “preferences”?
Much thanks for collecting this data. What example did you use? And what sorts of examples did you get back?

“I’m trying to give up chocolate. Last weekend I saw a delicious cake and I found myself telling myself the only reason I wanted it was to boost my energy levels, hahaha you know the feeling, right?” If they didn’t immediately chime in with examples I’d prompt them with “and you know, it’s not just food, I rationalise all the time” and ask them if they do as well.
More than half of them immediately came up with their own diet-related rationalisations. Of the other four I had the “calling my mum” one above, a couple of people who said they often caught themselves coming up with reasons for why they weren’t doing their work, and one “the dog wouldn’t like to be taken for a walk in this cold weather”.
The reason I mentioned guilt is that a few of them (I didn’t count) explicitly used the word “guilty” (like, I’m too tired to work, so I don’t have to feel guilty that I’m out drinking) and one person talked about trying to make himself feel better.
And, just to check, did you make sure that all the diet-related examples you got were examples of making false excuses to oneself, and not just examples of e.g. previously intending to diet, but then changing one’s mind when one saw the chocolate cake?

Yep, they were all valid examples of rationalisation.
there totally seem to be people who don’t ever notice it in themselves
Count me in that group (“hardly ever”, maybe).
I’m pretty sure that I do rationalize, but I can’t recall any explicit occasions of catching myself in the act.
I’m pretty sure that I have abandoned beliefs in the past that I clung to for longer than I should have, but it’s hard for me to come up with an example right now.
Perhaps we differ in the explicitness of the meta-cognition we engage in. When confronted with incontrovertible evidence of my errors, I tend to facepalm, think something like “stupid me”, update and move on. I don’t generally attempt to classify the mistake into a particular fallacy.
Can you share some of the examples you’ve been using to illustrate rationalization? I’ll tell you if I get the same “can’t relate to this”, or if I can relate but failed to label the equivalent examples in my own past as rationalizations.
Another example, from The Righteous Mind:

On February 3, 2007, shortly before lunch, I discovered that I was a chronic liar. I was at home, writing a review article on moral psychology, when my wife, Jayne, walked by my desk. In passing, she asked me not to leave dirty dishes on the counter where she prepared our baby’s food. Her request was polite but its tone added a postscript: “As I have asked you a hundred times before.”

My mouth started moving before hers had stopped. Words came out. Those words linked themselves up to say something about the baby having woken up at the same time that our elderly dog barked to ask for a walk and I’m sorry but I just put my breakfast dishes down wherever I could. In my family, caring for a hungry baby and an incontinent dog is a surefire excuse, so I was acquitted. [...]

So there I was at my desk, writing about how people automatically fabricate justifications of their gut feelings, when suddenly I realized that I had just done the same thing with my wife. I disliked being criticized, and I had felt a flash of negativity by the time Jayne had gotten to her third word (“Can you not …”). Even before I knew why she was criticizing me, I knew I disagreed with her (because intuitions come first). The instant I knew the content of the criticism (“… leave dirty dishes on the …”), my inner lawyer went to work searching for an excuse (strategic reasoning second). It’s true that I had eaten breakfast, given Max his first bottle, and let Andy out for his first walk, but these events had all happened at separate times. Only when my wife criticized me did I merge them into a composite image of a harried father with too few hands, and I created this fabrication by the time she had completed her one-sentence criticism (“… counter where I make baby food?”). I then lied so quickly and convincingly that my wife and I both believed me.
I don’t know whether Anna used this as an illustration, but one way by which I tend to notice myself rationalizing is when I’m debating something with somebody. If they successfully attack my position, I might suddenly realize that I’m starting to defend myself with arguments that even I consider bad or even outright fallacious, and that I’ve generally gone from trying to discover the truth to trying to defend my original position, no matter what its truth value.
Another example is that I might decide to do or believe something, feel reluctant to explain my reasons to others because they wouldn’t hold up to outside scrutiny, and then realize that wait, if my reasons wouldn’t hold up to outside scrutiny, they shouldn’t hold up to inside scrutiny either.

Do you experience either of those?
I wish there were a more standard term for this than “kinesthetic thinking”, one that other people would be able to look up and understand.
(A related term is “motor cognition”, but that doesn’t denote a thinking style. Motor cognition is a theoretical paradigm in cognitive psychology, according to which most cognition is a kind of higher-order motor control/planning activity, connected in a continuous hierarchy with conventional concrete motor control and based on the same method of neural implementation. (See also: precuneus (reflective cognition?); compare perceptual control theory.) Another problem with the term “motor cognition” is that it doesn’t convey the important nuance of “higher-order motor planning except without necessarily any concurrent processing of any represented concrete motions”. (And the other would-be closest option, “kinesthetic learning”, actively denotes the opposite.)
Plausibly, people could be trained to introspectively attend to the aspect of cognition that is like motor planning, using a combination of TCMS (to inhibit visual and auditory imagery) and cognitive tasks that involve salient constraints and tradeoffs. Maybe the cognitive tasks would also need to have specific positive or negative consequences for apparent execution of recognizable scripts of sequential actions typical of normally learned plans for the task. Some natural tasks with some of these features, which are not intrinsically verbal or visual, would be social reasoning, mathematical proof planning, or software engineering.)
when I am thinking kinesthetically I basically never rationalize as such
I think kinesthetic thinking still has things like rationalization. For example, you might have to commit to regarding a certain planned action a certain way as part of a complex motivational gambit, with the side effect that you commit to pretend that the action will have some other expected value than the one you would normally assign. If this ability to make commitments that affect perceived expected value can be used well, then by default this ability is probably also being used badly.
Could you give more details about the things like rationalization that you were thinking of, and what it feels like deciding not to do them in kinesthetic thinking?
Unfortunately most people don’t have particularly good introspection about their primary thinking style so it might be slightly tricky for you to look for interesting correlations here.
Aren’t there tests for the verbal/visual thinking distinction?
After reading the comments here I think I might be a person who doesn’t rationalize, or my tendency to do so is well below the norm. I previously thought the Litany of Tarski was about defeating ugh fields; I do experience those. I’m willing to answer questions about it, if that would help.
Thanks! Could you tell me about ugh fields you’ve experienced, and about any instances of selective search, fake justification, etc. that you can call to mind?
Also, what modality do you usually think in—words, images, … ?
Also, what do you do when you e.g. desire a cookie, but have previously decided to reduce cookie-consumption?
Could you tell me about ugh fields you’ve experienced, and about any instances of selective search, fake justification, etc. that you can call to mind?
If a thought with unpleasant implications comes up, I’m tempted to quickly switch to a completely different, more pleasant topic. Usually this happens in the context of putting off some unpleasant task, but I could imagine it also happening if I believed in God or had some other highly cherished belief. I can’t think of any beliefs that I actually feel that strongly about, though.
I do sometimes come up with plausible excuses or fake justifications if I’m doing something that someone might disapprove of, in case they confront me about it. I don’t remember ever doing that for my own benefit. I can’t remember doing a selective search either, but of course it’s possible I do it without being aware of it.
I just thought of another thing that might be relevant: I find moralizing less appealing than seems to be typical.
Also, what modality do you usually think in—words, images, … ?
I’m not sure how to describe it. Sort of like wordless analogies or relations between concepts, usually combined with some words and images. But also sometimes words or images by themselves.
Also, what do you do when you e.g. desire a cookie, but have previously decided to reduce cookie-consumption?
Distract myself by focusing on something else. If my thoughts keep coming back to eating cookies, I might try imagining something disgusting like eating a cookie with maggots in it.
Use it or lose it? Speculation: rationalizing to keep your previous beliefs consistent and distracting yourself are both ways to avoid changing your mind or to avoid thinking about unpleasant things. Maybe most people start with both strategies, and the one they have success with develops more than the other. If you are really successful at distracting yourself, maybe rationalization skills never really develop to their full irrational potential.
You could try videotaping them in an argument and then go over the videotape looking for rationalizations. This could deal with varying definitions of rationalize. For best results, make the argument about something that people frequently rationalize. Maybe present them with some ambiguous data that might or might not support their political beliefs (several neutral observers say it didn’t affect them either way, since it was so ambiguous), and see if it makes them more certain that their political beliefs are true (as you’d expect in a cognitively normal human).
I’m assuming you’re using “rationalization” as a synonym for “motivated cognition”.
Perhaps something related to social ineptness or perceived social status, on the hypothesis that rationalization originates as a social psychological drive? I have a few broken social modes; for example, I have no emotional drive to avoid pointing out errors or embarrassing facts to people, so I need to consciously stop myself when that’s called for.