I agree that the type of rationalization you’ve described is often practically rational. And it’s at most a minor crime against epistemic rationality. If anything, the epistemic crime here is not anticipating that your preferences will change after you’ve made a choice.
However, I don’t think this case is what people have in mind when they critique rationalization.
The more central case is when we rationalize decisions that affect other people; for example, Alice might make a decision that maximizes her preferences and disregards Bob’s, but after the fact she’ll invent reasons that make her decision appear less callous: “I thought Bob would want me to do it!”
While this behavior might be practically rational from Alice’s selfish perspective, she’s being epistemically unvirtuous by lying to Bob, degrading his ability to predict her future behavior.
Maybe you can use specific terminology to differentiate your case from the more central one, perhaps “preference rationalization”?
Nice point. Yeah, that sounds right to me—I definitely think there are things in the vicinity and types of “rationalization” that are NOT rational. The class of cases you’re pointing to seems like a common type, and I think you’re right that I should just restrict attention. “Preference rationalization” sounds like it might get the scope right.
Sometimes people use “rationalization” for something that is irrational by definition, as in “that’s not a real reason, that’s just a rationalization”. And it sounds like the cases you have in mind fit that mold.
I hadn’t thought as much about how this interacts with the ethical version of the case. Of course, something can be (practically or epistemically) rational without being moral, so there are some versions of those cases that I’d still insist ARE rational even if we don’t like how the agent acts.
Yes, and another meaning of “rationalization” that people often talk about is inventing fake reasons for your own beliefs, which may also be practically rational in certain situations (certain false beliefs could be helpful to you) but is obviously a major crime against epistemic rationality.
I’m also not sure that rationalizing your past personal decisions isn’t an instance of this; the phrase “I made the right choice” could be interpreted as meaning you believe you would have been less satisfied now had you chosen differently, and if this isn’t true but you’re trying to convince yourself it is in order to be happier, then that too is a major crime against epistemic rationality.