I will say that it doesn’t even seem possible for there to be people who don’t rationalize. (Or at least not enough of them that you’re at all likely to find one.)
You’d think not. Yet even Eliezer seems to think that one of our case studies really, truly might not ever rationalize and possibly never has before. This seems to be a case of a beautiful, sane theory beaten to death by a small gang of brutal facts.
“Some”, “signs”, “rather”. These words all show signs of being rather like belief in belief. I notice you don’t say, “Some of these people are high-level rationalists,” just that they show warning signs of being so. What does this really mean?
It means that I don’t know how to measure how strong someone’s rationality skills are other than talking to others who I intuitively want to say are good rationalists and comparing notes. So I’m hedging my assertions. But to whatever degree several people at the Singularity Institute are able to figure out who is or is not a reasonably good rationalist, some of our sample “non-rationalizers” appear to us to be good rationalists, and some appear not to be.
Also, could you explain what you mean by “seem to have little clue what Tarski is for”?
Sure. We tell them the kinds of situations in which Tarski is useful, including some personal examples of our own applications of it, and they just blink at us and completely fail to relate. For instance, I might say, “So once I was walking past a pizza place and smelled pizza. Cheese turns out to be really bad for me, but at the time I was hungry. So I watched my mind construct arguments like, ‘I haven’t gotten much calcium lately.’” Nothing of this sort—fake justification, selective search—connects with anything they can relate to. So they just don’t see where they’d ever use Tarski.
And yes, we’ve had at least one person be openly skeptical that anyone could possibly find Tarski useful because he didn’t think anyone rationalized the way we were describing. And another of our case studies seemed to know rationalization only as a joke. (“The cake has fewer calories and doesn’t count if I eat it while standing, right?”)
Have you actually tested them for rationalizing? My own belief is that you’re more likely to run into someone who rationalizes so much that they’re blind to their own rationalizing (and so can’t recall any of it) than someone who is inhumanly honest.
(Tests in this case would include checking for hindsight bias, which is classic rationalizing, and having them take that test on YourMorals whose name I forget, where you’re given two summaries of studies for and against gun control and asked to criticize them; people usually show an imbalance towards their favored side. But you’re a LWer; I’m sure you can think of other tests.)
This is VERY interesting. I’m as baffled as you are, sorry to say.
It seems like you’ve described rationalizations that prevent true (or ‘maximally accurate’) beliefs. Have you tried asking these case studies about their rationales for decision-making? One theme of my rationalization factory is spitting out true but misleading reasons for doing things, rarely letting me reason my way into doing what I know—somehow—that I should. Said factory operates by preventing me from thinking certain thoughts. Perhaps the same thing goes on in these people?
I’ve performed one heck of an update thanks to your comment and to the realization that I was generalizing from only a few examples.
I’m pretty sure I’m one of these unusual people. When I first read the litanies, I understood why they might be useful to some people (I have a lot of experience with religious fanatics), but I truly did not understand why they would be so important to Eliezer or other rationalists. I always figured they were meant to be a simple teaching tool, to help get across critical concepts and then to be discarded.
Gradually I came to realize that a large percentage of the community uses the various litanies on a regular basis. This still confuses me in some cases—for example, it would never even occur to me that evidence/data could simply be ignored, or that any rationalization could ever trump it.
I suspect this inability to simply ignore inconvenient data is the reason for my low rate of rationalization. I do actually catch myself beginning to rationalize from time to time, but there’s always the undercurrent of “wishful thinking isn’t real”. No matter how hard I rationalize, I cannot make the evidence go away, so the rationalization process gives up quickly.
I have been like this for most of my life; the “wishful thinking isn’t real” effect goes all the way back to my earliest memories of childish daydreaming and complex storytelling.
This seems wrong; rationalizing is what you do to inconvenient data instead of ignoring it.

Speaking for myself, I think that rationalizing does typically (always?) involve ignoring something. Not ignoring the first piece of inconvenient data, necessarily, but the horrible inelegance of my ad-hoc auxiliary hypotheses, or some such.
Another direction for measuring rationality might be how well people maintain their usual level under stress—something that would be harder to find out in conversation.