> Analogously, rationality tests should require people to use rationality to solve novel situations, not just guess the teacher’s password.
I agree with this. But I can’t think of such a rationality test. I think part of the problem is that a smart but irrational person could use his intelligence to figure out the answers that a rational person would come up with and then choose those answers.
On an IQ test, if you are smart enough to figure out the answers that a smart person would choose, then you yourself must be pretty smart. But I don’t think the same thing holds for rationality.
> If the test depends too much on trusting the rationality of the person designing the test, they are doing it wrong.
Well yes, but it’s hard to think of how to do it right. What’s an example of a question you might put on a rationality test?
I agree that rationality tests will be much more difficult than IQ tests. First, we already have IQ tests, so if we tried to create a new one, we would already know what to do and what to expect. Second, rationality tests may be inherently more difficult.
Still, I think that if we look at the history of IQ tests, we can take some lessons from there. I mean: imagine that there are no IQ tests yet, and you are supposed to invent the first one. The task would probably seem impossible, and there would be similar objections. Today we know that the first IQ tests got a few things wrong. And we also know that the “online IQ tests” are nonsense from the psychometrics point of view, but to people without a psychological education they seem right, because their intuitive idea of IQ is “being able to answer difficult questions invented by other intelligent people”, when in fact the questions in Raven’s Progressive Matrices are rather simple.
Twenty years later we may have analogous knowledge about rationality tests, and some things may seem obvious in hindsight. At this moment, while respecting that intelligence is not the same thing as rationality, IQ tests are the outside-view equivalent I will use for making guesses, because I have no better analogy.
IQ tests were first developed for small children. The original purpose of the early IQ tests was to tell whether a six-year-old child is ready to go to elementary school, or whether we should give them another year. They probably weren’t even called IQ tests yet, but school readiness tests. Only later was the idea of some people being “smarter/dumber for their age” generalized to all ages.
Analogously, we could probably start measuring rationality where it is easiest: with children. I’m not saying it will be easy, just easier than with adults. Many of small children’s logical mistakes will be less politically controversial. And it is easier to reason about the mistakes that you yourself are not prone to making. Some of the things we learn about children may later be useful for studying adults too.
With intelligence, there was a controversy (and some people still try to keep it alive) about whether “intelligence” is just one thing, or many different things (multiple intelligences). There will be analogous questions about “rationality”. And the proper way to answer these questions is to create tests for individual hypothetical components, and then to gather the data and see how these abilities correlate. Measurement and math; not speculation. Despite making an analogy here, I am not saying the answer will be the same. Maybe “resisting peer pressure” and “updating on new evidence” and “thinking about multiple possibilities before choosing and defending one of them” and “not having a strong identity that dictates all answers” will strongly correlate with each other; maybe they will be independent or even contradictory; maybe some of them will correlate together and others will not, so we get two or three clusters of traits. This is an empirical question and must be answered by measurement.
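To sketch what “measurement and math” could look like in practice (everything below is invented: the component names are just the hypothetical traits from the previous paragraph, the scores are random noise, and sklearn’s PCA is only a stand-in for a proper psychometric factor analysis):

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n = 500  # hypothetical number of test subjects

# Entirely invented scores on four hypothetical rationality components.
scores = pd.DataFrame({
    "resisting_peer_pressure": rng.normal(size=n),
    "updating_on_evidence": rng.normal(size=n),
    "considering_alternatives": rng.normal(size=n),
    "weak_identity": rng.normal(size=n),
})

# The empirical question: one general factor, a few clusters, or nothing?
print(scores.corr().round(2))

# If the first principal component explained most of the variance,
# "rationality" would look like a single trait, the way g does for IQ.
print(PCA().fit(scores).explained_variance_ratio_.round(2))
```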
Some of the intelligence tests in the past were strongly culturally biased (e.g. they contained questions about history or literature, or knowledge of proverbs or cultural norms); some required specific skills (e.g. mathematical). But some of them were not. Now that we have many different solutions, we can pick the less biased ones. But even the old ones were better than nothing; useful approximations within a given cultural group. If the first rationality tests are similarly flawed, that also will not mean the entire field is doomed; later the tests can be improved and the heavily culture-specific questions removed, getting closer to the abstract essence of rationality.
I agree there is a risk that an irrational person might have a good model of what a rational person would do (while it is impossible for a stupid person to predict how a smart person would solve a difficult problem). I can imagine a smart religious fanatic thinking: “What would HJPEV, the disgusting little heathen, do in this situation?” and running a rationality routine in a sandbox. In that case, the best we could achieve would be tests measuring someone’s capacity to think rationally if they choose to. Such a person could still later become an ugly surprise. Well… I suppose we just have to accept this, and add it to the list of warnings about what the rationality tests don’t show.
As an example of the questions in tests: I would probably not try to test “rationality” as a whole with a single question, but make separate questions focused on each component. For example, a test of resisting peer pressure would describe a story where one person provides good evidence for X, but many people provide obviously bad reasoning for Y; and you have to choose which is more likely. For a test of updating, I would provide multiple pieces of evidence, where the first three point towards an answer X, but the following seven point towards an answer Y, and might even contain an explanation of why the first three pieces were misleading. The reader would be asked to write an answer after reading the first three pieces, and again after reading all of them. For seeing multiple solutions, I would present some puzzle with multiple solutions, and the task would be to find as many as possible within a time limit.
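For the updating question, a normative answer key could even be computed from Bayes’ theorem. A toy worked example (the likelihood ratios are invented: each of the first three pieces favors X over Y by 2:1, each of the following seven favors Y by 2:1):

```python
def posterior_x(likelihood_ratios):
    """P(X) after independent pieces of evidence, starting from 1:1 odds.
    Each ratio is P(evidence | X) / P(evidence | Y)."""
    odds = 1.0
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1.0 + odds)

# Three pieces favoring X, then seven favoring Y.
evidence = [2.0] * 3 + [0.5] * 7

print(round(posterior_x(evidence[:3]), 2))  # 0.89: X looks right mid-test
print(round(posterior_x(evidence), 2))      # 0.06: a good updater ends on Y
```

A subject who still prefers X after all ten pieces has failed to update, no matter how reasonable their answer at the checkpoint was.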
Each of these questions has some obvious flaws. But, by analogy with the IQ tests, I believe the correct approach is to try dozens of flawed questions, gather data, see how much they correlate with each other, do a factor analysis, gradually replace them with purer versions, etc.
> Still, I think that if we look at the history of IQ tests, we can take some lessons from there. I mean: imagine that there are no IQ tests yet, and you are supposed to invent the first one. The task would probably seem impossible, and there would be similar objections.
It’s hard to say given that we have the benefit of hindsight, but at least we wouldn’t have to deal with what I believe to be the killer objection—that irrational people would subconsciously cheat if they know they are being tested.
> If the first rationality tests are similarly flawed, that also will not mean the entire field is doomed; later the tests can be improved and the heavily culture-specific questions removed, getting closer to the abstract essence of rationality.
I agree, but that still doesn’t get you any closer to overcoming the problem I described.
> I agree there is a risk that an irrational person might have a good model of what a rational person would do (while it is impossible for a stupid person to predict how a smart person would solve a difficult problem). I can imagine a smart religious fanatic thinking: “What would HJPEV, the disgusting little heathen, do in this situation?” and running a rationality routine in a sandbox. In that case, the best we could achieve would be tests measuring someone’s capacity to think rationally if they choose to.
To my mind that’s not very helpful because the irrational people I meet have been pretty good at thinking rationally if they choose to. Let me illustrate with a hypothetical: Suppose you meet a person with a fervent belief in X, where X is some ridiculous and irrational claim. Instead of trying to convince them that X is wrong, you offer them a bet, the outcome of which is closely tied to whether X is true or not. Generally they will not take the bet. And in general, when you watch them making high or medium stakes decisions, they seem to know perfectly well—at some level—that X is not true.
Of course not all beliefs are capable of being tested in this way, but when they can be tested the phenomenon I described seems pretty much universal. The reasonable inference is that irrational people are generally speaking capable of rational thought. I believe this is known as “standby rationality mode.”
I agree with you that people who assert crazy beliefs frequently don’t behave in the crazy ways those beliefs would entail.
This doesn’t necessarily mean they’re engaging in rational thought.
For one thing, the real world is not that binary. If I assert a crazy belief X, but I behave as though X is not true, it doesn’t follow that my behavior is sane… only that it isn’t crazy in the specific way indicated by X. There are lots of ways to be crazy.
More generally, though… for my own part what I find is that most people’s betting/decision making behavior is neither particularly “rational” nor “irrational” in the way I think you’re using these words, but merely conventional.
That is, I find most people behave the way they’ve seen their peers behaving, regardless of what beliefs they have, let alone what beliefs they assert (asserting beliefs is itself a behavior which is frequently conventional). Sometimes that behavior is sane, sometimes it’s crazy, but in neither case does it reflect sanity or insanity as a fundamental attribute.

You might find yvain’s discussion of epistemic learned helplessness enjoyable and interesting.
> This doesn’t necessarily mean they’re engaging in rational thought.
>
> For one thing, the real world is not that binary. If I assert a crazy belief X, but I behave as though X is not true, it doesn’t follow that my behavior is sane… only that it isn’t crazy in the specific way indicated by X. There are lots of ways to be crazy.
>
> More generally, though… for my own part what I find is that most people’s betting/decision making behavior is neither particularly “rational” nor “irrational” in the way I think you’re using these words, but merely conventional.
>
> That is, I find most people behave the way they’ve seen their peers behaving, regardless of what beliefs they have, let alone what beliefs they assert (asserting beliefs is itself a behavior which is frequently conventional).
That may very well be true… I’m not sure what it says about rationality testing. If there is a behavior which is conventional but possibly irrational, it might not be so easy to assess its rationality. And if it’s conventional and clearly irrational, how can you tell if a testee engages in it? Probably you cannot trust self-reporting.
A lot of words are getting tossed around here whose meanings I’m not confident I understand. Can you say what it is you want to test for, here, without using the word “rational” or its synonyms? Or can you describe two hypothetical individuals, one of whom you’d expect to pass such a test and the other you’d expect to fail?
Our hypothetical person believes himself to be very good at not letting his emotions and desires color his judgments. However, his judgments are heavily informed by these things, and then he subconsciously looks for rationalizations to justify them. He is not consciously aware that he does this.
Ideally, he should fail the rationality test.
Conversely, someone who passes the test is someone who correctly believes that his desires and emotions have very little influence over his judgments.
Does that make sense?
And by the way, one of the desires of Person #1 is to appear “rational” to himself and others. So it’s likely he will subconsciously attempt to cheat on any “rationality test.”

Yeah, that helps.
If I were constructing a test to distinguish person #1 from person #2, I would probably ask them to judge a series of scenarios constructed so that formally the scenarios were identical, but each one had different particulars relating to common emotions and desires, and each scenario was presented in isolation (e.g., via a computer display) so it’s hard to go back and forth and compare.
I would expect P2 to give equivalent answers in each scenario, and P1 not to (though they might try).
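Here is a crude sketch of the scoring side of that design; the scenario pairs, names, and answers are all invented for illustration. The point is only that formally identical scenarios should get identical judgments:

```python
# Hypothetical paired scenarios: the same formal problem, one neutral
# framing and one framing hooked to a common emotion or desire.
pairs = [
    ("factory defect rate", "vaccine side-effect rate"),
    ("lottery ticket value", "risky surgery value"),
]

def consistency_gap(answers):
    """answers: scenario name -> the subject's probability judgment.
    0.0 means perfectly consistent across framings, like the ideal P2."""
    gaps = [abs(answers[a] - answers[b]) for a, b in pairs]
    return sum(gaps) / len(gaps)

p2 = {"factory defect rate": 0.30, "vaccine side-effect rate": 0.30,
      "lottery ticket value": 0.10, "risky surgery value": 0.10}
p1 = {"factory defect rate": 0.30, "vaccine side-effect rate": 0.60,
      "lottery ticket value": 0.10, "risky surgery value": 0.02}

print(consistency_gap(p2))  # 0.0
print(consistency_gap(p1))  # ~0.19
```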
I doubt that would work, since P1 most likely has a pretty good standby rationality mode which can be subconsciously invoked if necessary.

But can you give an example of two such formally identical scenarios so I can think about it?

It’s a fair question, but I don’t have a good example to give you, and constructing one would take more effort than I feel like putting into it. So, no, sorry.
That said, what you seem to be saying is that P1 is capable of making decisions that aren’t influenced by their emotions and desires (via “standby rationality mode”) but does not in fact do so except when taking rationality tests, whereas P2 is capable of it and also does so in real life.
If I’ve understood that correctly, then I agree that no rationality test can distinguish P1 and P2’s ability to make decisions that aren’t influenced by their emotions and desires.
> It’s a fair question, but I don’t have a good example to give you, and constructing one would take more effort than I feel like putting into it. So, no, sorry.
That’s unfortunate, because this strikes me as a very important issue. Even being able to measure one’s own rationality would be very helpful, let alone that of others.
> That said, what you seem to be saying is that P1 is capable of making decisions that aren’t influenced by their emotions and desires (via “standby rationality mode”) but does not in fact do so except when taking rationality tests, whereas P2 is capable of it and also does so in real life.
I’m not sure I would put it in terms of “making decisions” so much as “making judgments,” but basically yes. Also, P1 does make rational judgments in real life but the level of rationality depends on what is at stake.
> If I’ve understood that correctly, then I agree that no rationality test can distinguish P1 and P2’s ability to make decisions that aren’t influenced by their emotions and desires.
Well, one idea is to look more directly at what is going on in the brain with some kind of imaging technique. Perhaps self-deception or result-oriented reasoning have a telltale signature.
Also, perhaps this kind of irrationality is more cognitively demanding. To illustrate, suppose you are having a Socratic dialogue with someone who holds irrational belief X. Instead of simply laying out your argument, you ask the person whether he agrees with Proposition Y, where Proposition Y seems pretty obvious and indisputable. Our rational person might quickly and easily agree or disagree with Y. Whereas our irrational person needs to think more carefully about Y; decide whether it might undermine his position; and if it does, construct a rationalization for rejecting Y. This difference in thinking might be measured in terms of reaction times.
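To sketch how that might be quantified (all the latencies below are invented): record how long the subject takes to respond to obvious propositions that are neutral versus ones that threaten the belief, and compare.

```python
import statistics

# Invented response latencies in seconds for one subject judging a
# series of obvious propositions Y.
neutral = [1.1, 0.9, 1.3, 1.0, 1.2]      # Ys that do not touch belief X
threatening = [2.8, 3.5, 2.2, 4.1, 3.0]  # Ys that would undermine belief X

# A positive gap means the subject slows down exactly when a proposition
# threatens their position, consistent with on-the-fly rationalization.
gap = statistics.mean(threatening) - statistics.mean(neutral)
print(round(gap, 2))  # 2.02
```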
[ha-ha-only-serious](http://www.catb.org/jargon/html/H/ha-ha-only-serious.html)

Rationality is commonly defined as winning. Therefore rationality testing is easy—just check if the subject is a winner or a loser.

Okay, I think there is a decent probability that you are right, but at this moment we need more data, which we will get by trying to create different kinds of rationality tests.
A possible outcome is that we won’t get true rationality tests, but at least something partially useful, e.g. tests selecting the people capable of rational thought, which includes a lot of irrational people, but still not everyone. Which may still appear to be just another form of intelligence test (a sufficiently intelligent irrational person is able to make rational bets, and still believe they have an invisible dragon in the garage).
So… perhaps this is a moment where I should make a bet about my beliefs. Assuming that Stanovich does not give up, and other people follow him (that is, assuming that enough psychologists will even try to create rationality tests), I’d guess… probability 20% within 5 years, 40% within 10 years, 80% ever (pre-Singularity) that there will be a test which predicts rationality significantly better than an IQ test. Not completely reliably, but sufficiently that you would want your employees to be tested by that test instead of an IQ test, even if you had to pay more for it. (Which doesn’t mean that employers actually will want to use it. Or will be legally allowed to.) And probability 10% within 10 years, 60% ever that a true “rationality test” will be invented, at least for values up to 130 (which many compartmentalizing people will still pass). These numbers are just a wild guess; tomorrow I would probably give different values. I just thought it would be proper to express my beliefs in this format, because it encourages rationality in general.
> Which may still appear to be just another form of intelligence test
Yes, I have a feeling that “capability of rationality” would be highly correlated with IQ.
> Not completely reliably, but sufficiently that you would want your employees to be tested by that test instead of an IQ test
Your mention of employees raises another issue, which is who the test would be aimed at. When we first started discussing the issue, I had an (admittedly vague) idea in my head that the test would be for aspiring rationalists, i.e. that it could be used to bust irrational LessWrong posters who are far less rational than they realize. It’s arguably more of a challenge to come up with a test to smoke out the self-proclaimed paragon of rationality who has the advantage of careful study and who knows exactly what he is being tested for.
By analogy, consider the Crowne-Marlowe Social Desirability Scale, which has been described as a test which measures “the respondent’s desire to exaggerate his own moral excellence and to present a socially desirable facade.” Here is a sample question from the test:
> T F I have never intensely disliked anyone
Probably the test works pretty well for your typical Joe or Jane Sixpack. But someone who is intelligent; who has studied up in this area; and who knows what’s being tested will surely conceal his desire to exaggerate his moral excellence.
That said, having thought about it, I do think there is a decent chance that solid rationality tests will be developed, at least for subjects who are unprepared. One possibility is to measure reaction times, as with “Project Implicit.” Perhaps self-deception is more cognitively demanding than self-honesty and therefore a clever test might measure it. But you still might run into the problem of subconscious cheating.
> Perhaps self-deception is more cognitively demanding than self-honesty and therefore a clever test might measure it.
If anything, I might expect the opposite to be true in this context. Neurotypical people have fast and frugal conformity heuristics to fall back on, while self-honesty on a lot of questions would probably take some reflection; at least, that’s true for questions that require aggregating information or assessing personality characteristics rather than coming up with a single example of something.
It’d definitely be interesting to hook someone up to a polygraph or EEG and have them take the Crowne-Marlowe Scale, though.
> If anything, I might expect the opposite to be true in this context.
Well consider the hypothetical I proposed:
> suppose you are having a Socratic dialogue with someone who holds irrational belief X. Instead of simply laying out your argument, you ask the person whether he agrees with Proposition Y, where Proposition Y seems pretty obvious and indisputable. Our rational person might quickly and easily agree or disagree with Y. Whereas our irrational person needs to think more carefully about Y; decide whether it might undermine his position; and if it does, construct a rationalization for rejecting Y. This difference in thinking might be measured in terms of reaction times.
See what I mean?
I do agree that in other contexts, self-deception might require less thought, e.g. spouting off the socially preferable answer to a question without really thinking about what the correct answer is.
Yes.

> It’d definitely be interesting to hook someone up to a polygraph or EEG and have them take the Crowne-Marlowe Scale, though.
That sample question reminds me of a “lie score”, which is a hidden part of some personality tests. Among the serious questions, there are also some questions like this, where you are almost certain that the “nice” answer is a lie. Most people will lie on one or two of ten such questions, but the rule of thumb is that if they lie on five or more, you just throw the questionnaire away and declare them a cheater. However, if they didn’t lie on any of these questions, you do a background check on whether they have studied psychology. And you keep in mind that the test score may be manipulated.
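Written as code, that rule of thumb might look like this (a sketch: the one-or-two, five-or-more, and zero thresholds come from the rule above; everything else is invented):

```python
def evaluate_lie_scale(nice_answers):
    """nice_answers: one boolean per hidden item, True meaning the subject
    gave the implausibly 'nice' answer that is almost certainly a lie."""
    lies = sum(nice_answers)
    if lies >= 5:
        return "discard questionnaire: subject is gaming the test"
    if lies == 0:
        return "suspiciously clean: check for a psychology background"
    return f"plausible profile: {lies} nice answers (typical is 1-2)"

print(evaluate_lie_scale([True] * 6 + [False] * 4))  # discard
print(evaluate_lie_scale([False] * 10))              # suspiciously clean
print(evaluate_lie_scale([True] + [False] * 9))      # plausible
```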
Okay, I admit that this problem would be much worse for rationality tests, because if you want a person with a given personality, they most likely didn’t study psychology. But if CFAR or similar organizations become very popular, then many candidates for highly rational people will be “tainted” by the explicit study of rationality, simply because studying rationality explicitly is probably a rational thing to do (this is just an assumption), but it’s also what an irrational person self-identifying as a rationalist would do. Also, practicing for IQ tests is obvious cheating, but practicing to get better at rational tasks is the rational thing to do, and a wannabe rationalist would do it, too.
Well, it seems like the rationality tests would be more similar to IQ tests than to personality tests. Puzzles, time limits… maybe even reaction times or lie detectors.
> Among the serious questions, there are also some questions like this, where you are almost certain that the “nice” answer is a lie.
On the Crowne-Marlowe scale, it looks to me (having found a copy online and taken it) like most of the questions are of this form. When I answered all of the questions honestly, I scored 6, which, according to the test, indicates that I am “more willing than most people to respond to tests truthfully”; but what it indicates to me is that, for all but 6 of the 33 questions, the “nice” answer was a lie, at least for me.
The 6 questions were the ones where the answer I gave was, according to the test, the “nice” one, but just happened to be the truth in my case: for example, one of the 6 was “T F I like to gossip at times”; I answered “F”, which is the “nice” answer according to the test—presumably on the assumption that most people do like to gossip but don’t want to admit it—but I genuinely don’t like to gossip at all, and can’t stand talking to people who do. Of course, now you have the problem of deciding whether that statement is true or not. :-)
Could a rationality test be gamed by lying? I think that possibility is inevitable for a test where all you can do is ask the subject questions; you always have the issue of how to know they are answering honestly.
> Well, it seems like the rationality tests would be more similar to IQ tests than to personality tests. Puzzles, time limits… maybe even reaction times or lie detectors.
Yes, reaction times seem like an interesting possibility. There is an online test for racism which uses this principle. But it would be pretty easy to beat the test if the results counted for anything. Actually, lie detectors can be beaten too.
Perhaps brain imaging will eventually advance to the point where you can cheaply and accurately determine if someone is engaged in deception or self-deception :)