A plausible hypothesis is that presenting frequency information simply makes algorithmic calculation of the result easier, so that subjects are no longer reliant on fallible heuristics to arrive at the conclusion.
There’s only room for making it easier when the word “probable” is not already synonymous with “larger N out of 100”. So I maintain that an alternate understanding of the word “probable” (and perhaps also an invalid idea of what one should bet on) is relevant. Edit: to clarify, I can easily imagine an alternate cultural context in which “blerg” is always, universally, invariably a shorthand for “N out of 100”. In such a context, asking about “N out of 100” or about “blerg” should produce nearly identical results.
Also, in your study, about half of the questions were answered correctly.
The claim of the heuristics and biases program is that the conjunction fallacy is a manifestation of the representativeness heuristic.
I guess that’s fair enough, though it’s not clear how that works on Linda-like examples.
In my opinion it’s just that, throughout their lives, people are exposed to a training dataset which consists of:
1. Detailed accounts of real events.
2. Speculative guesses.
and (1) is much more commonly correct than (2), even though (1) is more conjunctive. So people get mis-trained by a biased training set. A very wide class of learning AIs would be mis-trained by this sort of thing too.
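A minimal sketch of that claim in code (the corpus and the 90%/30% truth rates below are my invented assumptions, not figures from any experiment): a learner that simply estimates truth frequency from such a biased corpus ends up treating extra detail as evidence of truth.

```python
import random

random.seed(0)

# Toy corpus of (is_detailed, is_true) pairs. Assumed rates:
# detailed accounts of real events are true 90% of the time,
# speculative guesses only 30% of the time, even though the
# detailed items are the more conjunctive ones.
corpus = ([(True, random.random() < 0.9) for _ in range(1000)]
          + [(False, random.random() < 0.3) for _ in range(1000)])

def estimated_truth_rate(detailed):
    outcomes = [truth for d, truth in corpus if d == detailed]
    return sum(outcomes) / len(outcomes)

print(estimated_truth_rate(True))   # ~0.9
print(estimated_truth_rate(False))  # ~0.3
# A learner fitting this biased corpus concludes that added detail
# makes a statement MORE likely to be true, the opposite of the
# conjunction rule; applied to Linda, it ranks the conjunction
# ("feminist bank teller") above the bare conjunct ("bank teller").
```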
I’m not familiar with the effect of variable string length difference, and quick Googling isn’t helping. If you could direct me to some research on this, I’d appreciate it.
The point is that you can’t pull the representativeness trick with, e.g., R vs. RGGRRGRRRGG. All the research I have ever seen used strings with only a small percentage difference in length. I assume the research is strongly biased towards studying the un-obvious cases, while it is fairly obvious that R is more probable than RGGRRGRRRGG; frankly, we would not expect to find anyone who thinks that RGGRRGRRRGG is more probable than R.
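For concreteness, a quick calculation, assuming the die from the classic Tversky and Kahneman setup (four green faces and two red, so P(G) = 2/3 and P(R) = 1/3) and reading “more probable” in the simplest sense of a run of rolls matching the string exactly:

```python
from fractions import Fraction

# Assumed die, as in the classic Tversky-Kahneman experiment:
# four green faces and two red, so P(G) = 2/3 and P(R) = 1/3.
p = {"R": Fraction(1, 3), "G": Fraction(2, 3)}

def match_prob(seq):
    """Probability that len(seq) rolls reproduce seq exactly."""
    result = Fraction(1)
    for face in seq:
        result *= p[face]
    return result

print(match_prob("R"))            # 1/3
print(match_prob("RGGRRGRRRGG"))  # (1/3)**6 * (2/3)**5 = 32/177147, ~0.00018
```

The gap is more than three orders of magnitude, which is presumably why the published comparisons use strings of nearly equal length.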
There’s only room for making it easier when the word “probable” is not already synonymous with “larger N out of 100”. So I maintain that an alternate understanding of the word “probable” (and perhaps also an invalid idea of what one should bet on) is relevant.
Maybe a misunderstanding about the word is relevant, but it clearly isn’t entirely responsible for the effect. Like I said, the conjunction fallacy is much less common if the structure of the question is made clear to the subject using a diagram (e.g. if it is made obvious that feminist bank tellers are a proper subset of bank tellers). It seems implausible that providing this extra information will change the subject’s judgment about what the experimenter means by “probable”.
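(For reference, the rule the diagram makes visible: feminist bank tellers are a subset of bank tellers, so P(bank teller and feminist) ≤ P(bank teller), regardless of how well the description fits Linda.)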
I guess that’s fair enough, though it’s not clear how that works on Linda-like examples.
The description given of Linda in the problem statement (outspoken philosophy major, social justice activist) is much more representative of feminist bank tellers than it is of bank tellers.
Maybe a misunderstanding about the word is relevant, but it clearly isn’t entirely responsible for the effect.
In the study you quoted, a bit less than half of the answers were wrong, in sharp contrast to the Linda example, where 90% of the answers were wrong. That implies that at least 40% of the failures were the result of a misunderstanding, which leaves only 60% of the failures as candidate fallacies. Of that 60%, some people have other misunderstandings and other errors of reasoning, and some are simply at the low end of the distribution (the dumbest 1 person in 10, i.e. an IQ of 80 or less), easily leaving less than 50% of subjects for the actual conjunction fallacy.
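The bookkeeping in that argument, as a back-of-envelope sketch (the 90% and roughly-50% error rates are the comment’s approximations, not exact figures from the studies):

```python
wrong_linda = 0.90  # error rate with the original "probable" wording
wrong_freq = 0.50   # rough error rate with the "N out of 100" wording

# Failures that disappear under the frequency wording are attributed
# to misunderstanding the word "probable".
misunderstanding = (wrong_linda - wrong_freq) / wrong_linda
print(misunderstanding)  # ~0.44, i.e. "at least 40% of the failures"

# What remains, as a fraction of all subjects, before subtracting
# other misunderstandings, other reasoning errors, and low ability:
print(wrong_linda * (1 - misunderstanding))  # 0.50
```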
It seems implausible that providing this extra information will change the subject’s judgment about what the experimenter means by “probable”.
Why so? If the word “probable” is fairly ill-defined (as is the whole concept of probability), then it will or will not acquire a specific meaning depending on the context.
The description given of Linda in the problem statement (outspoken philosophy major, social justice activist) is much more representative of feminist bank tellers than it is of bank tellers.
Then representativeness works in the opposite direction from the one commonly assumed in the dice example.
Speaking of which, “is” is sometimes used to describe traits for identification purposes; e.g. “in general, an alligator is shorter and less aggressive than a crocodile” is more correct than “in general, an alligator is shorter than a crocodile”. If you were to compile traits for finding Linda, you’d pick the most descriptive answer. People know they need to do something with what they are told; they don’t necessarily understand correctly what they need to do.