I tend to agree that anyone who denies the tendency to rationalize is either in denial or has a different definition for the word “rationalize”. In fact I would argue that rationalization is the default for human beings, and that anything else requires either focused effort or serious mental re-programming (which is still probably only partially effective).
I absolutely relate. I totally would have said that a week ago. Evidence has smashed my belief’s face quite solidly in the nose, though.
One possible way to try to elicit an understanding of any given individual’s capacity for rationalization is to ask them about the last time they did something they knew was a bad idea (perhaps a compromise they felt uncomfortable making, or an indulgence they knew they were going to regret), and then to ask them what excuses went through their brains to justify it. If someone still denies ever having had such an experience then they are beyond redemption.
That’s a good idea, and we did it several times. They sincerely do deny having such experience, but not in a knee-jerk way. It’s more like a, “Huh. Hmm. Um… Well, I honestly can’t think of something quite like that, but maybe X is similar?” And “X” in this case is something like, “I knew eating a cookie wasn’t good for me, but I felt like it and so I did it anyway.” It’s like the need for justification is just missing, at least in their self-reports.
This reminds me of a bit in The Righteous Mind, where Haidt discusses some of his experiments about moral reasoning. When he asked his university students questions like “is it right or wrong for a man to buy a (dead) chicken from a store and then have sex with it before eating it”, the students had no problem providing a long list of various justifications pro or con, and generally ended up with an answer like “It’s perverted, but if it’s done in private, it’s his right”. In contrast, when Haidt went to a local McDonalds to ask working-class people the same questions, he tended to get odd looks when he asked them to explain why they thought that the chicken scenario was wrong.
Haidt puts this down to the working-class people having an additional set of moral intuitions, ones where e.g. acts violating someone’s purity are considered just as self-evidently bad as acts causing somebody needless pain, and therefore denouncing them as wrong needs no explanation. But I wonder whether the habit of providing explicit reasons for your actions or moral judgements is also, to some extent, a cultural thing. If there are people who are never asked to provide justifications for their actions, then providing justifications never becomes a part of even their internal reasoning. If we accept the theory that verbal reasoning evolved for persuasion and not for problem-solving, then this would make perfect sense—reasoning is a tool for argumentation, and if you never need to argue for something, then there’s also no need to practice arguments related to that in your head.
Actually, Haidt does seem to suggest something like this a bit later, when he discusses cultures with a holistic morality, and says that they often seem to just follow a set of (to us) ad-hoc rules, not derivable from any single axiom:
Several of the peculiarities of WEIRD culture can be captured in this simple generalization: The WEIRDer you are, the more you see a world full of separate objects, rather than relationships. It has long been reported that Westerners have a more independent and autonomous concept of the self than do East Asians. For example, when asked to write twenty statements beginning with the words “I am …,” Americans are likely to list their own internal psychological characteristics (happy, outgoing, interested in jazz), whereas East Asians are more likely to list their roles and relationships (a son, a husband, an employee of Fujitsu). [...]

Related to this difference in perception is a difference in thinking style. Most people think holistically (seeing the whole context and the relationships among parts), but WEIRD people think more analytically (detaching the focal object from its context, assigning it to a category, and then assuming that what’s true about the category is true about the object). Putting this all together, it makes sense that WEIRD philosophers since Kant and Mill have mostly generated moral systems that are individualistic, rule-based, and universalist. That’s the morality you need to govern a society of autonomous individuals.

But when holistic thinkers in a non-WEIRD culture write about morality, we get something more like the Analects of Confucius, a collection of aphorisms and anecdotes that can’t be reduced to a single rule. Confucius talks about a variety of relationship-specific duties and virtues (such as filial piety and the proper treatment of one’s subordinates).

If WEIRD and non-WEIRD people think differently and see the world differently, then it stands to reason that they’d have different moral concerns. If you see a world full of individuals, then you’ll want the morality of Kohlberg and Turiel—a morality that protects those individuals and their individual rights. You’ll emphasize concerns about harm and fairness.

But if you live in a non-WEIRD society in which people are more likely to see relationships, contexts, groups, and institutions, then you won’t be so focused on protecting individuals. You’ll have a more sociocentric morality, which means (as Shweder described it back in chapter 1) that you place the needs of groups and institutions first, often ahead of the needs of individuals. If you do that, then a morality based on concerns about harm and fairness won’t be sufficient. You’ll have additional concerns, and you’ll need additional virtues to bind people together.
One might hypothesize that moral systems like utilitarianism or Kantian deontology, derived from a small set of logical axioms, are appealing specifically to those people who’ve learned that they need to defend their actions and beliefs (and who therefore also rationalize), since it’s easier to craft elaborate and coherent defenses of them. People with less of a need for justifying themselves might be fine with Analects of Confucius-style moralities.
“I knew eating a cookie wasn’t good for me, but I felt like it and so I did it anyway.”
I’m like this for my trivial decisions but not for major ones. I virtually never rationalise eating choices; the choice is purely a conflict between doing what I want and doing what I ought.
I do notice myself rationalising when making more long-term decisions and in arguments—if I’m unsure of a decision I’ll sometimes make a list of pros and cons and catch myself trying to rig the outcome (which is an answer in itself, obviously). Or if I get into an argument I sometimes catch myself going into “arguments as soldiers” mode, which feels quite similar to rationalising.
Anyway, my point for both is that for me at least, rationalisation only seems to pop up when the stakes are higher. If you gave me your earlier example about wanting to eat pizza and making excuses about calcium, I’d probably look at you as though you had 3 heads too.
Evidence has smashed my belief’s face quite solidly in the nose, though.
Evidence other than the repeated denials of the subjects in question and a non-systematic observation of them acting as largely rational people in most respects? (That’s not meant to be rhetorical/mocking—I’m genuinely curious to know where the benefit of the doubt is coming from here)
“I knew eating a cookie wasn’t good for me, but I felt like it and so I did it anyway.”
The problem here is that there is a kind of perfectly rational decision making that involves being aware of a detrimental consequence but coming to the conclusion that it’s an acceptable cost. In fact that’s what “rationalizing” pretends to be. With anything other than overt examples (heavy drug-addiction, beaten spouses staying in a marriage) the only person who can really make the call is the individual (or perhaps, as mentioned above, a close friend).
If these people do consider themselves rational, then maybe they would respond to existing psychological and neurological research that emphasizes how prone the mind is to rationalizing (I don’t know of any specific studies off the top of my head but both Michael Shermer’s “The Believing Brain” and Douglas Kenrick’s “Sex, Murder, and the Meaning of Life” touch on this subject). At some point, an intelligent, skeptical person has to admit that the likelihood that they are the exception to the rule is slim.
If these people do consider themselves rational, then maybe they would respond to existing psychological and neurological research that emphasizes how prone the mind is to rationalizing (I don’t know of any specific studies off the top of my head but both Michael Shermer’s “The Believing Brain” and Douglas Kenrick’s “Sex, Murder, and the Meaning of Life” touch on this subject). At some point, an intelligent, skeptical person has to admit that the likelihood that they are the exception to the rule is slim.
Psychological research tends to be about the average or the typical case. If you e.g. ask the question “does this impulse elicit rationalization in people while another impulse doesn’t”, psychologists generally try to answer that by asking a question like “does this statistical test say that the rationalization scores in the ‘rationalization elicitation condition’ seem to come from a distribution with a higher mean than the rationalization scores in the control condition”. Which means that you may (and AFAIK, generally do) have people in the rationalization elicitation condition who actually score lower on the rationalization test than some of the people in the control condition, but it’s still considered valid to say that the experimental condition causes rationalization—since that’s what seems to happen for most people. That’s assuming that weird outliers aren’t excluded from the analysis before it even gets started. Also, most samples are WEIRD and not very representative of the general population.
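To make the overlap point concrete, here’s a minimal sketch in Python (with entirely made-up “rationalization scores” and arbitrary distribution parameters) of how a two-sample test can report a clearly higher group mean even though many individuals in the “elicitation” condition score below the control-group average:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical scores: the elicitation condition has a higher mean,
# but the two distributions overlap heavily.
control = rng.normal(loc=50, scale=10, size=100)
elicitation = rng.normal(loc=55, scale=10, size=100)

# A standard two-sample t-test only asks whether the group means differ.
t_stat, p_value = stats.ttest_ind(elicitation, control)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# Yet a sizeable fraction of 'elicitation' subjects still score below
# the control-group mean.
overlap = np.mean(elicitation < control.mean())
print(f"elicitation subjects below the control mean: {overlap:.0%}")
```

The group-level claim (“the condition causes rationalization”) and the individual-level exceptions are both visible at once, which is exactly the gap between “true of the average person” and “true of this particular person.”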
And “X” in this case is something like, “I knew eating a cookie wasn’t good for me, but I felt like it and so I did it anyway.” It’s like the need for justification is just missing, at least in their self-reports.
Thanks for this example—now I can imagine what “never rationalizing” could be like.
I did not realize there was a third option besides “rationalizing” and “always acting rationally”, and I couldn’t believe in people always acting rationally (at least not without proper training; but then they would remember what it was like before training). But the possibility of “acting irrationally but not inventing excuses for it” seems much more plausible.