I noticed that there is a certain perfectly rational process that can feel a lot like rationalization from the inside:
Suppose I were to present you with plans for a perpetual motion machine. You would then engage in a process that looks a lot like rationalization to explain why my plan can’t work as advertised.
This is of course perfectly rational, since the probability that my proposal would actually work is tiny. However, this example does leave me wondering how to separate rationalization from rationality that merely happens to have excessively strong priors.
What’s happening there, I think, is that you have received a piece of evidence (“this guy claims to have designed a perpetual motion machine”) and, upon processing that information, you slightly increase your probability that perpetual motion machines are plausible and greatly increase your probability that he’s lying, joking, or ignorant. Then you seek to test that new hypothesis: you search for flaws in the blueprints first, because your beliefs say that is where you are most likely to find new evidence, and you would think it more likely that you’ve missed something than that the machine could actually work. However, if the proper sequence of tests all came out in its favour, you would not be opposed to building the machine to check; you’re not opposed to the theoretical possibility that we’ve suddenly discovered free energy.
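To make that update concrete, here is a minimal sketch with made-up numbers (the prior and the likelihoods are my own assumptions, purely for illustration): the claim nudges P(the machine works) up from its prior, but nearly all of the posterior mass still lands on “he’s mistaken or lying”.

```python
def posterior(prior_works, p_claim_given_works, p_claim_given_not_works):
    """Bayes' rule: P(machine works | he claims it does)."""
    numerator = p_claim_given_works * prior_works
    evidence = numerator + p_claim_given_not_works * (1 - prior_works)
    return numerator / evidence

prior_works = 1e-9              # assumed prior that a working perpetual motion machine exists
p_claim_given_works = 0.9       # assumed: an inventor with a working machine would likely show you the plans
p_claim_given_not_works = 1e-4  # assumed: rate at which people make such claims anyway

p_works = posterior(prior_works, p_claim_given_works, p_claim_given_not_works)
print(f"P(works | claim)          = {p_works:.2e}")      # larger than the prior, but still tiny
print(f"P(mistaken/lying | claim) = {1 - p_works:.5f}")  # almost all of the posterior mass
```

With these numbers the probability that the machine works rises by several orders of magnitude yet stays negligible in absolute terms, which is why checking the blueprints for flaws is the rational first test.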
In rationalisation, at least the second and possibly both parts of the process differ. You seek to confirm the hypothesis rather than test it, so perhaps a useful check is to imagine what the world in which the hypothesis is unarguably false would feel like. Checking whether you had the appropriate evidence to form the hypothesis in the first place is also useful, though I suppose that check would produce some false positives.
In rationalization you engage in motivated cognition, which is very similar to what happens in the perpetual motion example.