I mean, yes, most people would get the [cancer cure/cancer cure+flood] one right. It’s easy. But the example of Linda the [bank teller/bank teller+active feminist] does have exactly the same logical structure.
But it doesn’t have the same structure as presented to the person being asked. They specifically put in a bunch of context designed so that if you try to use the contextual information provided you get the “wrong” answer (literally wrong, ignoring the fact that the person isn’t answering the question literally written down).
It’s as if, by adding the right context to the flood one, you could get people to say both. If you talked for a while about how a flood is practically guaranteed, and then asked, some people are going to look at the answers and go “he was talking about floods being likely. this one has a flood. the other one is irrelevant”. Why? Because they don’t know or care what you mean by the technical question, “Which is more probable?”. Answering a different question than the one the researcher claims to have communicated to his subjects is different from, say, actually doing probability math wrong.
Yes, they put in a bunch of irrelevant context, and then asked the question. And rather than answering the question that was actually asked, people tend to get distracted by all those details and make a bunch of assumptions and answer a different question.
Say the researchers put it this way instead: “When we surround a certain simple easy question with irrelevant distracting context, we end up miscommunicating very badly with the subject.
Furthermore, even in ‘natural’ conditions where no-one is deliberately contriving to create such bad communication, people still often run into similar situations with irrelevant distracting context ‘in the wild’, and react in the same sorts of ways, creating the same kind of effects.
Thus people often make the mistake of thinking that adding more details to explanations/plans makes them more likely to be good, and thus people tend to be insufficiently careful about pinning down every step very firmly from multiple directions.”
That’s bending over backwards to put the ‘fault’ with the researchers (and the entire rest of the world), and yet it still leads to the same significant, important real world day-to-day phenomena.
That’s why I think you’re “wrong” and the “conjunction fallacy” does actually “exist”.
Or, to put it another way, “conjunction fallacy exists—y/n” correlates with some actual feature that the territory can meaningfully have or not have, and I’m pretty damn sure that the territory does have it.
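To make the “feature of the territory” concrete: the conjunction rule is just set inclusion, and holds under any probability assignment whatsoever. Here’s a minimal sketch (the world-probabilities below are made-up numbers, purely for illustration):

```python
# A minimal sketch of why the "bank teller + feminist" answer can't be
# right: every world where the conjunction holds is also a world where
# the single event holds, so under ANY probability assignment we get
# P(teller and feminist) <= P(teller). The weights here are invented.
worlds = {
    # (is_teller, is_feminist): probability of that world
    (True,  True):  0.05,
    (True,  False): 0.10,
    (False, True):  0.40,
    (False, False): 0.45,
}

p_teller = sum(p for (t, f), p in worlds.items() if t)
p_teller_and_feminist = sum(p for (t, f), p in worlds.items() if t and f)

assert abs(sum(worlds.values()) - 1.0) < 1e-9   # it's a distribution
assert p_teller_and_feminist <= p_teller        # the conjunction rule
```

Whatever numbers you plug in, the second assertion can never fail, which is the sense in which the fallacy “exists” independent of how any particular experiment was worded.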
However! I would like to talk about how the same criticism might apply to the Wason selection task, and the hypothesis that humans are naturally bad at reasoning about it generally, but have a specialized brain circuit for dealing with it in social contexts.
Check it. Looking at the Wikipedia article, and zeroing in on this part:
Which card(s) should you turn over in order to test the truth of the proposition that if a card shows an even number on one face, then its opposite face is red?
I know I had to translate that to “even number cards always have red on the other side” before I could solve it, thinking, “well, they didn’t specifically say anything about odd number cards, so I guess that means they can be any color on the other side”.
Because that’s the way people would normally phrase such a concept (usually seeking a quick verification of the assumption about it being intended to mean odds can be whatever, if there’s anyone to ask the question).
And we do that cuz English ain’t got an easy distinction between ‘if’ and ‘if and only if’, and instead we have these vague and highly variable rules for what they probably mean in different contexts (ever noticed that “let’s” always seems to be an ‘inclusive we’ and “let us” always seems to be an ‘exclusive we’? Maybe you haven’t cuz that’s just an oddity of my dialect! I dunno! :P).
And maybe in that sort of context, for people who are used to using ‘if’ normally and not in a special math jargon way, it seems to be more likely to mean ‘if and only if’.
And if the people in the experiment are encouraged to ask questions to make sure they understand the question properly, or presented with the phrasing “even number cards always have red on the other side” instead, maybe in that case they’d be as good at reasoning about evens and odds and reds and browns on cards as they are at reasoning about minors and majors and cokes and beers in a bar.
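The ‘if’ vs. ‘if and only if’ ambiguity can be made concrete. Here’s a small sketch (the face labels are my own stand-ins for the classic four-card setup) of which cards each reading of the rule obliges you to flip:

```python
# Flip a card only if its hidden side could falsify the rule.
# Visible faces in the classic setup: an even number, an odd number,
# a red back, a brown back.

def must_flip_if(visible):
    # "if even then red": only an even face (hidden side might be brown)
    # or a brown face (hidden side might be even) can falsify it.
    return visible in ("even", "brown")

def must_flip_iff(visible):
    # "even if-and-only-if red": now an odd card with a red back, or a
    # red card with an odd back, would ALSO violate the rule, so every
    # card could falsify it.
    return visible in ("even", "odd", "red", "brown")

faces = ["even", "odd", "red", "brown"]
print([f for f in faces if must_flip_if(f)])   # ['even', 'brown']
print([f for f in faces if must_flip_iff(f)])  # all four cards
```

So a subject who hears the rule as a biconditional is reasoning correctly about a different question, which is exactly the kind of miscommunication being suggested here.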
But that seems like an obvious thing to check, trying to find a way of phrasing it so that people get it right even though it’s not involving anything “social”, and I’d be surprised if somebody hasn’t already. Guess I oughta post my own discussion topic on that to find out! But later. Right now, I’ma go snooze.
The thing that bemuses me most at the moment is how bad I am at predicting the voting response to a post of mine on LW. Granted that I and most people upvote and downvote to get post ratings to where we think they should be, rather than rating them by what we thought of them and letting the aggregate chips fall where they may, I still can’t predict it well (that factor should actually be improving my accuracy).
This post of yours is truly fantastic and a moral success; you avoided the opportunities to criticize flaws in the OP and found a truly interesting instance of when an argument resembling that in the OP would actually be valid.
Granted that I and most people upvote and downvote to get post ratings to where we think they should be,
(This factor applies most significantly to mid-quality contributions. There is a threshold beyond which votes tend to just spiral off towards infinity.)
I think this is the only example I have seen of a negative infinity spiral on LW. The OP shouldn’t be too discouraged, if what you are saying is right (and I think it is). Content rated approaching negative infinity isn’t (adjudged) that much worse than content with a high negative rating.
Great post.