I don’t think I rationalize to any significant extent. Even the examples I came up with for Anna’s thread concern inefficient allocation of attention and using zero-information arguments, not something specifically directed at defending a position. I admit to being wrong or confused about simple things, sometimes incorrectly (so that I have to go back to embrace a momentarily-rejected position). It’s possible I’m completely incapable of noticing rationalization and would need a new basic skill to fix that, but that doesn’t seem very likely.
(Alternatively, perhaps “rationalization” needs to be unpacked a bit, so that problems like those in the examples I referred to above can find a place in that notion. As it is, they seem more like flaws in understanding that are unbiased with respect to a favored conclusion, unless that conclusion is selected in hindsight.)
Any volunteers to go through Vladimir_Nesov’s comments on LW and point out his rationalizations to him?
That could actually be quite helpful. No offense to Vladimir; we’re just sincerely curious about this phenomenon, and if he’s really a case of someone who doesn’t relate to Tarski or rationalization, then it’d be helpful to have good evidence one way or the other about whether he rationalizes.
That’s helpful. Thank you.
And yes, I agree, the term “rationalization” is a bit loaded. We already checked by tabooing the word while exploring at least one case, so it’s not just that these people freeze at the word “rationalization.” But it’s quite possible that there are multiple things going on here that only seem similar at first glance.
What about this? Do you not count this because you were sleepy at the time, because it was a minor incident, or what?
(Also, I did not go through your comments to find that. Just thought I’d point that out because of shminux’s comment.)
I don’t remember the experience, but it sounds like a collection of absent-minded system 1 responses that build on each other; there doesn’t appear to be a preferred direction to them. This is also the characterization from the comment itself:
My mind confused this single thing for the light turning off, and then produced a whole sequence of complex thoughts around this single confusion, all the way relying on this fact being true.
As I understand it, “rationalization” refers to something like optimization of thoughts in the direction of a preferred conclusion, not to any kind of thinking under a misconception. If I believe something wrong, of course I’ll be building on the wrong thing and making further wrong conclusions until I notice that it’s wrong.
I don’t think I rationalize to any significant extent.
I recall you (doing what can most plausibly be described as) rationalizing at times. But perhaps you are right about the ‘unpacking’ thing. I might be thinking of things entirely different from those that Anna mentioned.
I’d be grateful for specific examples.