Rationalization
In “The Bottom Line,” I presented the dilemma of two boxes, only one of which contains a diamond, with various signs and portents as evidence. I dichotomized the curious inquirer and the clever arguer. The curious inquirer writes down all the signs and portents, and processes them, and finally writes down, “Therefore, I estimate an 85% probability that box B contains the diamond.” The clever arguer works for the highest bidder, and begins by writing, “Therefore, box B contains the diamond,” and then selects favorable signs and portents to list on the lines above.
The first procedure is rationality. The second procedure is generally known as “rationalization.”
“Rationalization.” What a curious term. I would call it a wrong word. You cannot “rationalize” what is not already rational. It is as if “lying” were called “truthization.”
On a purely computational level, there is a rather large difference between:
- Starting from evidence, and then crunching probability flows, in order to output a probable conclusion. (Writing down all the signs and portents, and then flowing forward to a probability on the bottom line which depends on those signs and portents.)
- Starting from a conclusion, and then crunching probability flows, in order to output evidence apparently favoring that conclusion. (Writing down the bottom line, and then flowing backward to select signs and portents for presentation on the lines above.)
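To make the algorithmic difference concrete, here is a toy sketch of the two flows in Python. The signs, likelihood numbers, and prior are all invented for illustration; only the direction of the flow matters:

```python
# Toy contrast between forward flow (rationality) and backward flow
# (rationalization). All signs, likelihoods, and priors are invented.

SIGNS = {
    # sign: (P(sign | diamond in box B), P(sign | diamond in box A))
    "blue_stamp":   (0.9, 0.3),
    "shiny_lid":    (0.6, 0.4),
    "ominous_glow": (0.2, 0.5),
}

def forward_flow(observed_signs, prior_b=0.5):
    """Rationality: consume ALL the evidence; the bottom line comes last."""
    odds = prior_b / (1 - prior_b)
    for sign in observed_signs:
        p_b, p_a = SIGNS[sign]
        odds *= p_b / p_a            # multiply in every likelihood ratio
    return odds / (1 + odds)         # posterior P(diamond in box B)

def backward_flow(fixed_conclusion="B"):
    """Rationalization: the bottom line is fixed; the evidence comes last."""
    favorable = []
    for sign, (p_b, p_a) in SIGNS.items():
        if (p_b > p_a) == (fixed_conclusion == "B"):
            favorable.append(sign)   # keep only signs pointing the right way
    return favorable

print(forward_flow(["blue_stamp", "shiny_lid", "ominous_glow"]))  # ~0.64
print(backward_flow("B"))            # ['blue_stamp', 'shiny_lid']
```

The forward procedure cannot produce its output until it has consumed every sign; the backward procedure’s output exists before it starts, and the computation only decides what to display.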
What fool devised such confusingly similar words, “rationality” and “rationalization,” to describe such extraordinarily different mental processes? I would prefer terms that made the algorithmic difference obvious, like “rationality” versus “giant sucking cognitive black hole.”
Not every change is an improvement, but every improvement is necessarily a change. You cannot obtain more truth for a fixed proposition by arguing it; you can make more people believe it, but you cannot make it more true. To improve our beliefs, we must necessarily change our beliefs. Rationality is the operation that we use to obtain more accuracy for our beliefs by changing them. Rationalization operates to fix beliefs in place; it would be better named “anti-rationality,” both for its pragmatic results and for its reversed algorithm.
“Rationality” is the forward flow that gathers evidence, weighs it, and outputs a conclusion. The curious inquirer used a forward-flow algorithm: first gathering the evidence, writing down a list of all visible signs and portents, which they then processed forward to obtain a previously unknown probability for the box containing the diamond. During the entire time that the rationality-process was running forward, the curious inquirer did not yet know their destination, which was why they were curious. In the Way of Bayes, the prior probability equals the expected posterior probability: If you know your destination, you are already there.
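This is the standard conservation-of-expected-evidence identity, a general fact about conditional probability rather than anything specific to the box example: the prior must equal the probability-weighted average of the posteriors you might reach,

$$P(H) \;=\; \sum_{e} P(e)\, P(H \mid e),$$

so if you could predict in advance which way the evidence will push your belief, you should already have updated.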
“Rationalization” is a backward flow from conclusion to selected evidence. First you write down the bottom line, which is known and fixed; the purpose of your processing is to find out which arguments you should write down on the lines above. This, not the bottom line, is the variable unknown to the running process.
I fear that Traditional Rationality does not properly sensitize its users to the difference between forward flow and backward flow. In Traditional Rationality, there is nothing wrong with the scientist who arrives at a pet hypothesis and then sets out to find an experiment that proves it. A Traditional Rationalist would look at this approvingly, and say, “This pride is the engine that drives Science forward.” Well, it is the engine that drives Science forward. It is easier to find a prosecutor and defender biased in opposite directions, than to find a single unbiased human.
But just because everyone does something, doesn’t make it okay. It would be better yet if the scientist, arriving at a pet hypothesis, set out to test that hypothesis for the sake of curiosity—creating experiments that would drive their own beliefs in an unknown direction.
If you genuinely don’t know where you are going, you will probably feel quite curious about it. Curiosity is the first virtue, without which your questioning will be purposeless and your skills without direction.
Feel the flow of the Force, and make sure it isn’t flowing backwards.
Sadly, I almost always surprise economics graduate students looking for topics to research when I ask them: “What question, where you do not know the answer, would you most like to answer?”
How would this relate to Bruno Latour’s conceptualization of Actor-Network-Theory, where the sociologist simply tries to maximise the number of sources of uncertainty in a set of trials, without resorting to an “explanatory social theory”?
I find the linguistic distinction to be better than you relate—to rationalize something is to start with something that isn’t rational. (If it were already rational, it wouldn’t need to be rationalized—it’s already there.)
That being said, rationalization in action isn’t always bad, because we don’t always have conscious understanding of the algorithm used to produce our conclusions. This would be like, to use your example, Einstein coming to the conclusion of relativity—and then attempting to understand how he got there. Rationalization in this case is a useful tool, as it is, in effect, an attempt to obtain the variables that originally went into the algorithm, perhaps to examine their validity.
If you already understand how you got to a conclusion which you are then attempting to bolster—if the fact that the evidence is being filtered is ignored—then it is precisely as bad as you say.
I apologize, didn’t mean to double post.
It is as if “lying” were called “truthization”.
Apologies for the content-free comment, but this is a really great line. Worthy of Stephen Colbert.
Of course, in an etymological sense, “rationalization” doesn’t seem so odd. “Reason” means both logic and motivation. Those two concepts are conflated in the word and related words, and “rationalization” is simply formed from “rationale”. (Actual etymologists, or users of Google, may feel free to correct me.)
I agree with Adirian. Rationalization is a process of rational-explanation-seeking. It starts from a statement that was obtained by a non-rational process (as when you overheard something, or intuitively guessed something) and then creates a rational explanation according to one’s concept of rationality, concurrently adjusting the statement if necessary. So normal rationalization does change the conclusion: it can change its status from “suspicious statement” to “belief,” or it can adjust it to be consistent with facts. Biased rationalization, by contrast, builds its explanation according to a “biased rationality”; for example, the “clever arguer” applies a selection bias.
It starts from a statement that was obtained by a non-rational process (as when you overheard something, or intuitively guessed something)
An intuitive guess is non-scientific but not non-rational.
Random comment:
Many years ago, there was a series of articles written under the pseudonym Archibald Putt, collectively referred to as “Putt’s Laws,” that appeared in Research/Development magazine. One law is relevant to the topic at hand.
“Decisions are justified by benefits to the organization; they are made by considering benefits to the decisionmakers.”
If it is easier to lie convincingly when you believe the lie, then rationalization makes perfect sense. One makes a decision based on selfish, primarily unconscious motives, and then comes up with a semi-convincing rationalization for public consumption. “I stole that because I deserved it” would be a classic example of this kind of justification.
Eliezer: An intuitive guess is non-scientific but not non-rational
It doesn’t affect my point, but do you argue that intuitive reasoning can be made free of bias?
An intuitive guess can be made without biasing the result (accept or reject), so long as one does not privilege the hypothesis.
Your wonderful essay contains a flaw.
In reality there is no way to check the correctness of a reasoning result “directly” (you cannot “open the box and see if it contains the diamond”). And if the result of the reasoning does not directly influence the reasoner, checking it is likewise infeasible.
So the corrected story is: “One of two sealed, unopenable boxes contains a bomb with a timer. The task is to select one box and throw it into a deep well, or else it will explode and mutilate the reasoner.”
Try answering this without any rationalization:
In my middle school science lab, a thermometer showed me that water boiled at 99.5 degrees C and not 100. Why?
I suspect you have a point that I’m missing.
My take is: either the reading was wrong (experimental error of some kind), or it wasn’t wrong. If it wasn’t wrong, then your water was boiling at 99.5 degrees. There are a number of plausible explanations for the latter; the one that I assign the highest prior to is that you were at an elevation higher than sea level.
So, my answer is in the form of a probability distribution. Give me more evidence, and I will refine it; or demand an answer now, and I will tell you “altitude,” my current most plausible candidate (experimental error is my second candidate, first with how (where in the water) you measured, then with the quality of the thermometer; after that trail things like impurities in the water).
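A minimal sketch of that kind of answer as an explicit update, in Python; every prior and likelihood below is invented purely for illustration:

```python
# Toy posterior over explanations for a single 99.5 C boiling reading.
# All priors and likelihoods are invented for illustration.

hypotheses = {
    # hypothesis: (prior, P(reading of 99.5 C | hypothesis))
    "higher altitude":     (0.30, 0.80),
    "measurement error":   (0.50, 0.30),
    "impurities in water": (0.20, 0.10),
}

joint = {h: prior * lik for h, (prior, lik) in hypotheses.items()}
total = sum(joint.values())
for h, p in sorted(joint.items(), key=lambda kv: -kv[1]):
    print(f"{h}: {p / total:.2f}")
# higher altitude: 0.59, measurement error: 0.37, impurities in water: 0.05
```

More evidence just means more likelihood factors multiplied into each row before renormalizing.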
You’ve missed a key point, which is that rationalization refers to a process in which one of many possible hypotheses is arbitrarily selected, which the rationalizer then attempts to support using a fabricated argument. In your query, you are asking that a piece of data be explained. In the first case, one filters the evidence, rejecting any data that too strongly opposes a pre-selected hypothesis. In the second case, one generates a space of hypotheses that all fit the data, and selects the most likely one as a guess. The difference is between choosing data to fit a hypothesis, and finding a hypothesis that best fits the data. Rationalization is pointing to a blank spot on your map and saying, “There must be a lake somewhere around there, because there aren’t any other lakes nearby,” while ignoring the fact that it’s hot and there’s sand everywhere.
My experience leads me to assume that the thermometer was mismarked. My high school chemistry teacher drilled into us that the thermometers we had were all precise, but of varying accuracy. A thermometer might say that water boils at 99.5 C, but if it did, it would also say that it froze at −0.5 C. Again, there are conditions that actually change the temperature at which water boils, so it’s possible you were at a lower atmospheric pressure or that the water was contaminated. But, given that we have a grand total of one data point, I can’t narrow it down to a single answer.
Exactly!
Given just one data point, every explanation for why we didn’t observe water boiling at 100 degrees C is an excuse for why it should have. To honestly answer this question, we would have to have performed additional experiments.
But we had already had a conclusion we were supposed to have reached—a truth by definition, in our case. Reaching that conclusion in our imperfect circumstances required rationalization.
Uh, no. Pressure affects boiling point. If you’re at a different pressure, it should not boil at 100 degrees C. If your water is contaminated by, say, alcohol, the boiling point will change. We aren’t trying to explain away datapoints; we’re using them to build a system that’s larger than “Water boils at 100 degrees Centigrade.” Just adding “at standard temperature and pressure” to the end of that gives a wider range of predictable and falsifiable results.
What we’re doing is rationality, not rationalization.
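For a rough sense of how the pressure explanation cashes out numerically, here is a back-of-the-envelope sketch combining the integrated Clausius–Clapeyron relation with an approximate barometric scale height; all constants are textbook approximations:

```python
# Back-of-the-envelope: what altitude would make water boil at 99.5 C?
# Constants are standard approximations; this is a sketch, not metrology.

R = 8.314            # J/(mol*K), gas constant
H_VAP = 40.7e3       # J/mol, heat of vaporization of water (approximate)
T0 = 373.15          # K, boiling point at sea-level pressure
T = 372.65           # K, the observed 99.5 C boiling point
SCALE_HEIGHT = 8400  # m, rough atmospheric pressure scale height

# Integrated Clausius-Clapeyron: ln(P/P0) = (H_VAP / R) * (1/T0 - 1/T)
ln_pressure_ratio = (H_VAP / R) * (1 / T0 - 1 / T)

# Barometric approximation P/P0 = exp(-h / SCALE_HEIGHT), solved for h
altitude = -SCALE_HEIGHT * ln_pressure_ratio
print(f"~{altitude:.0f} m above sea level")  # roughly 150 m
```

On these assumptions, a 99.5 °C reading corresponds to an elevation on the order of 150 m, which is why “altitude” was such a plausible first guess above.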
What altitude were you at?
What elevation was your school at?
I know this of course, but the way you state it here really drives the point home. Well written.
Apparently, this sense of the word “rationalize” only dates from 1922.
If rationality were able to select hypotheses from an infinite space of hypotheses, your distinction would be accurate. Theoretical AIXI works that way, kind of, but nothing made of atoms can implement it. Rationality picks from the N hypotheses that have occurred to the thinker, and rationalization is the degenerate case where N = 1.
According to this article, one can predict a decision 7 seconds before it is actually made. Doesn’t this, in some sense, mean that a large amount of our thought process (certainly those 7 seconds) is actually rationalizing a decision we have already made?
Is my thinking off, or is this one more thing to actively guard against, realizing when we are letting our unconscious decide for us?
In Hebrew there’s a synonym for rationalization that stems from the word “excuse” (“הַתְרָצָה”). I think it’s quite fitting, as that’s basically the process: you decide on a conclusion and excuse your way backward from it so it seems rational.
What do you think?
I’m not very good in English, so I’m not sure what a word for it that stems from “excuse” would look like—do you have any suggestions?
I didn’t know that was the word for excuse, but I think it’s an excellent word itself to use for rationalization. No synonym required. ״רצה״ is the root for “want” and “הַתְרָצָה” is the reflexive conjugation, so it’s approximately “self-wanting.” Which is exactly what rationalization is—reasoning towards what you want to be true.
Sorry, I made a communication error. “הַתְרָצָה” is the other word for rationalization in Hebrew; it stems from the word for excuse, which is “תירוץ.”
Oh, right. Once upon a time I knew that was the word. Thanks.
Calling it “Rationalization” is just another instance of a proud tradition of referring to antonyms by almost identical words (hypothermia vs hyperthermia) for some fucking reason.