Evidence has smashed my belief’s face quite solidly in the nose, though.
Evidence other than the repeated denials of the subjects in question and a non-systematic observation of them acting as largely rational people in most respects? (That’s not meant to be rhetorical/mocking—I’m genuinely curious to know where the benefit of the doubt is coming from here)
“I knew eating a cookie wasn’t good for me, but I felt like it and so I did it anyway.”
The problem here is that there is a kind of perfectly rational decision-making that involves being aware of a detrimental consequence but concluding that it’s an acceptable cost. In fact, that’s what “rationalizing” pretends to be. With anything other than overt examples (heavy drug addiction, beaten spouses staying in a marriage), the only person who can really make the call is the individual (or perhaps, as mentioned above, a close friend).
If these people do consider themselves rational, then maybe they would respond to existing psychological and neurological research that emphasizes how prone the mind is to rationalizing (I don’t know of any specific studies off the top of my head, but both Michael Shermer’s “The Believing Brain” and Douglas Kenrick’s “Sex, Murder, and the Meaning of Life” touch on this subject). At some point, an intelligent, skeptical person has to admit that the likelihood that they are the exception to the rule is slim.
Psychological research tends to be about the average or the typical case. If you ask, for example, “does this impulse elicit rationalization in people while another impulse doesn’t?”, psychologists generally try to answer it by asking something like “does this statistical test say that the rationalization scores in the ‘rationalization elicitation condition’ seem to come from a distribution with a higher mean than the rationalization scores in the control condition?”. Which means that you may (and AFAIK generally do) have people in the rationalization elicitation condition who actually score lower on the rationalization test than some of the people in the control condition, but it’s still considered valid to say that the experimental condition causes rationalization, since that’s what seems to happen for most people. That’s assuming that weird outliers aren’t excluded from the analysis before it even gets started. Also, most samples are WEIRD (Western, Educated, Industrialized, Rich, and Democratic) and not very representative of the general population.
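To make the group-versus-individual point concrete, here is a minimal sketch in Python (the score scale, group means, and sample sizes are invented for illustration, not taken from any particular study): a standard two-sample t-test easily detects a difference in group means, yet a large fraction of randomly paired individuals go the “wrong” way.

```python
import numpy as np
from scipy import stats

# Hypothetical "rationalization scores": two overlapping distributions
# that differ only in mean. All numbers are made up for illustration.
rng = np.random.default_rng(0)
control = rng.normal(loc=50, scale=10, size=200)    # control condition
elicited = rng.normal(loc=55, scale=10, size=200)   # "elicitation" condition

# The group-level question: do the means differ?
t_stat, p_value = stats.ttest_ind(elicited, control)
print(f"t = {t_stat:.2f}, p = {p_value:.4g}")       # with these numbers, p << 0.05

# The individual-level question: how often does a person in the elicitation
# condition actually score *lower* than a person in the control condition?
frac_reversed = np.mean(elicited[:, None] < control[None, :])
print(f"Fraction of (elicited, control) pairs where the elicited score is lower: {frac_reversed:.2f}")
```

With these made-up numbers, roughly a third of such pairs go the “wrong” way even though the group-level effect is statistically clear, which is the sense in which the finding describes the average case rather than any particular individual.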