I wouldn’t call it orthogonal either. Rationality is about having correct beliefs, and I would label a belief-based litmus test rational to the extent it’s correct.
Writing a post about how $political_belief is a litmus test is probably a bad idea because of the reasons you mentioned.
Rationality is about having correct beliefs. But a single belief that has only two possible answers is never going to stand in for the entirety of a person’s belief structure. That’s why you have to look at the process by which a person forms beliefs to have any idea whether they are rational.
Exactly. If there is any hope of using a list of beliefs as a test of rationality, it will need multiple items.
You know, IQ tests also don’t have a single question. Neither does any other personality test.
OTOH the Cognitive Reflection Test has just three questions, and I’ve been told it’s surprisingly accurate.
I’d call it the “Paying-Good-Attention-While-Doing-Simple-Math Test”. :D
But yeah… I can imagine that something similarly simple could be an important part of rationality. Some simple task that predicts the ability to do more complex tasks of a similar type.
However, in that case the test would resemble a kind of puzzle, rather than the pattern-matching question “Do you agree with the Greens?”
Specifically for updating, I can imagine a test where the person is gradually given more and more information; the initial information is evidence for an outcome “A”, but most of the later information is evidence for an outcome “B”. The person is informally asked to make a guess soon after the beginning (when the reasonable answer is “A”), and at the end they are asked to give a final answer. Some people would probably get stuck at “A”, and some would update to “B”. But the test would involve small numbers, shapes, coins, etc.; not real-life examples.
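A test like this could be scored against an ideal Bayesian observer. The sketch below uses made-up numbers (a coin that is either 70% or 30% heads): the early draws favor hypothesis “A”, the later draws favor “B”, and the posterior of an ideal updater flips accordingly.

```python
# Hypothetical updating test: two hypotheses about a coin's bias.
# Early draws favor A; later draws favor B. An ideal reasoner's
# posterior should start leaning toward A and then cross over to B.

P_HEADS = {"A": 0.7, "B": 0.3}

def posterior_A(draws):
    """P(A | draws), assuming a uniform prior over {A, B}."""
    like = {h: 1.0 for h in P_HEADS}
    for d in draws:
        for h in P_HEADS:
            like[h] *= P_HEADS[h] if d == "H" else 1 - P_HEADS[h]
    return like["A"] / (like["A"] + like["B"])

early = ["H", "H", "H"]                  # evidence for A
late = ["T", "T", "T", "T", "T", "T"]    # mostly evidence for B

print(round(posterior_A(early), 3))         # prints 0.927 -- leans toward A
print(round(posterior_A(early + late), 3))  # prints 0.073 -- flipped to B
```

Someone who “gets stuck at A” would report a high probability of A even after the full sequence, which makes the gap between their answer and the ideal posterior easy to measure.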
I’ve seen experiments that tested this; I thought they were mentioned in Thinking and Deciding or Thinking, Fast and Slow, but I didn’t see it in a quick check of either of those. If I recall the experimental setup correctly (I doubt I got the numbers right), they began with a sequence that was 80% red and 20% blue, which switched to 80% blue and 20% red after n draws. The subjects’ estimate that the next draw would be red stayed above 50% for significantly longer than n draws from the second distribution, and some took until 2n or 3n draws from the second distribution to assign a 50% chance to each, at which point almost two thirds of the examples they had seen were blue!
I dunno… people who do fine at the Wason selection task with ages and drinks get it wrong with numbers and colours. (I’m not sure whether that’s a bug or a feature.)
That seems to me like a reason not to test the skill on real-life examples.
We wouldn’t want a rationality test that a person can pass with the original wording, but will fail if we replace “Republicans” with “Democrats”… or with Green aliens. We wouldn’t want the person to merely recognize logical fallacies when they are spoken by Republicans. This is, in my opinion, a risk with real-life examples. Is the example with drinking age easier because it is easier to imagine, or because it is something we already agree with?
Okay, I am curious here… what exactly would happen if we replaced the Wason selection task with something that uses words from real life (is less abstract), but is not an actual rule (and therefore cannot be answered using only previous experience)? For example: “Only dogs are allowed at jumping competitions; cats are not allowed. We have a) a dog going to an unknown competition; b) a cat going to an unknown competition; c) an unknown animal going to a swimming competition; and d) an unknown animal going to a jumping competition—which of these cases do you have to check thoroughly to make sure the rule is not broken?”
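For what it’s worth, this variant can be brute-forced: a case needs checking exactly when some way of filling in its unknown slot could violate the rule “if jumping competition, then dog”. A minimal sketch (the case labels and names are mine):

```python
from itertools import product

ANIMALS = ("dog", "cat")
EVENTS = ("jumping", "swimming")

def violates(animal, event):
    """The rule 'jumping -> dog' is broken by a non-dog at a jumping event."""
    return event == "jumping" and animal != "dog"

def must_check(animal, event):
    """True if some completion of the unknown fields could break the rule."""
    animals = ANIMALS if animal is None else (animal,)
    events = EVENTS if event is None else (event,)
    return any(violates(a, e) for a, e in product(animals, events))

cases = {
    "a": ("dog", None),       # dog, unknown competition
    "b": ("cat", None),       # cat, unknown competition
    "c": (None, "swimming"),  # unknown animal, swimming competition
    "d": (None, "jumping"),   # unknown animal, jumping competition
}
print([k for k, (a, e) in cases.items() if must_check(a, e)])  # prints ['b', 'd']
```

As in the classic card version, only the “cat” and the “jumping competition” cases can hide a violation; the dog and the swimming competition are safe no matter how the unknowns turn out.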