Matt,
I see how Gigerenzer’s point is relevant to some of the biases such as the conjunction fallacy.
But what about other biases such as the anchoring bias?
Is there really a way to show that all fallacious reasoning in K&T’s experiments is due to presentation of information in terms of probabilities as opposed to frequencies?
Thanks.
Anchoring is a phenomenon that occurs in more places than just estimating probabilities, so it seems to be a fairly general method of approximation used by our brains. This is one of the reasons why I argued (Gigerenzer doesn’t argue this) that we only fall back on heuristics when probabilities are given in a difficult-to-use form; when they’re given as frequencies, we just compute the answer.
The experiment you would run seems straightforward, as long as you’re only considering anchoring for probability estimates: find a previously run experiment, replicate it (as a control), and run another treatment with the language changed to frequencies. In fact, someone may already have run this experiment.
Well, to clarify, here’s an example from here:
To illustrate, in a study conducted by Tversky and Kahneman (1974), a random number was generated by spinning a wheel. Participants were then asked whether this random number was higher or lower than the percentage of nations located in Africa (a comparative question). Finally, participants were asked to estimate the percentage of nations located in Africa (an absolute question). Participants who had received a high random number were more inclined to overestimate the percentage of nations located in Africa. The anchor, represented by the random number, biased their final estimate.
Here, the biased thinking isn’t a result of thinking in terms of abstract probabilities as opposed to concrete frequencies.
I’m sympathetic to the points Gigerenzer makes. It’s just that K&T’s results don’t always depend on information being presented as probabilities.
I must wonder whether, and to what extent, these results would replicate in a real-world situation where the question is perceived as truly important by the parties concerned.
When discussing research like this, people often imagine the subjects fully applying themselves, as if they were on an important exam or in a business situation where big money is involved. However, to get a more realistic picture, you should imagine yourself in a situation where someone is asking you obscure TV quiz-style questions about things that you don’t care about in the slightest, bored to death, being there only because of some miserable incentive like getting a course credit or a few dollars of pocket money. I can easily imagine people in such a situation giving casual answers without any actual thought involved, based on random clues from the environment—just like you might use e.g. today’s date as an inspiration for choosing lottery numbers.
Therefore, the important question is: has anyone made similar observations in a situation where the subjects had a strong incentive to really give their best when thinking about the answers? If not, I think one should view these results with a strong dose of skepticism.
Your idea that the subjects are not taking the question seriously is a good one.
I had a discussion with someone about a very similar real-life ‘Linda’. It was finally resolved when we realized that the other person didn’t think of ‘and’ and ‘or’ as defined terms that always differ, and he was quite put out that I thought he should know that. To put it in ‘Linda’ terms: he knew that Linda was a feminist and doubted that she was a bank teller. That being the case, the ‘and’ should be read as an ‘or’, and (b) was more likely than (a). Why would anyone think differently? It rather blew my mind that I was being accused of being sloppy or illogical for using the fixed, defined meanings of ‘and’ and ‘or’. Since then I have noticed that people actually often have this vagueness about logical terms.
I generally think of “and” and “or” in the strict senses, but, by the same token, I get really annoyed when I use the word “or” (which, in English, is ambiguous about whether it is meant in the exclusive or inclusive sense) and people say “yes” or “true”.
English already has words like “both” to answer that question, which tells you in one syllable that the “exclusive or” reading is false but the “inclusive or” reading is true. That isn’t part of a standard symbolic logic curriculum; it’s simply helpful, rather than a sign of having taken such a class and learned a technical jargon that borrowed the word “or” to mean strictly “inclusive or”.
I’d never heard of someone generously interpreting an “and” as an “or” (or vice versa), but it makes sense to me that it would be common and helpful in the absence of assumed exposure to a technical curriculum with truth tables, quantified predicate logic, and the like (at least when a friendly discussion is happening, rather than a debate).
People actually do that when not trying to be annoying? That’s surprising.
Yeah, I would say “both” or “yes, it could be either” depending on what I meant. I also use “and/or” whenever I mean the inclusive or, though that’s frowned on in formal writing.
That suggests another variant of the Linda problem: replace the “and” with “and also”, and leave the rest unchanged. If this makes a big difference, it would suggest that many of the people who fail on the Linda problem fail for linguistic reasons (they have the wrong meaning for the word “and”) rather than logical reasons.
Many subjects fail to recognize that, when a six-sided die with four green faces and two red faces will be rolled several times, betting on the occurrence of the sequence GRRRRRG is dominated by betting on the sequence RRRRRG, when the subject is offered either bet at the same payoff. This (well, something similar; I didn’t bother to look up the actual sequences used) is cited as evidence that more is going on than subjects misunderstanding the meaning of “and” or “or”. Sure, some subjects just don’t use those words the way the experimenters do, and perhaps this accounts for some of why “Linda” shows such a strong effect, but it is a very incomplete explanation of the effect.
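As a side note, a quick simulation makes the dominance concrete. The sketch below is illustrative only: the sequences GRRRRRG and RRRRRG are the ones mentioned above (not necessarily the ones K&T actually used), and the roll and trial counts are arbitrary. Since RRRRRG is a suffix of GRRRRRG, any run of rolls containing the longer sequence necessarily contains the shorter one, so the shorter bet can never pay off less often.

```python
import random

# Illustrative sketch only: sequences and counts are assumptions, not the original experiment.
FACES = "GGGGRR"                    # a die with 4 green and 2 red faces
SHORT, LONG = "RRRRRG", "GRRRRRG"   # LONG ends with SHORT, so LONG appearing implies SHORT appearing

def trial(n_rolls=20):
    rolls = "".join(random.choice(FACES) for _ in range(n_rolls))
    return SHORT in rolls, LONG in rolls

short_wins = long_wins = 0
for _ in range(100_000):
    s, l = trial()
    short_wins += s
    long_wins += l

# short_wins is always >= long_wins: betting on the shorter sequence dominates.
print(short_wins, long_wins)
```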
Explanations of “Linda” based on linguistic misunderstandings, conversational maxims, etc., generally fail to explain other experiments that produce the same representativeness bias (though perhaps not as strongly) in contexts where there is no chance that the particular misunderstanding alleged could be present.
Good idea. The next time such a situation comes up, I’ll try that. Hopefully it will not be soon, but you never know.
I would not say that this person replaced “and” by “or”.
I guess they considered the statement “Linda is a bank teller and a feminist” to be “50%” true if Linda turns out to be a feminist but not a bank teller.
The formula used would be something like P(A and B) = (P(A) + P(B)) / 2.
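If that is the rule being applied, a quick numerical check (with made-up probabilities, purely for illustration) shows why it produces the conjunction fallacy: the average can exceed min(P(A), P(B)), which a genuine conjunction never can.

```python
# Made-up figures for illustration: suppose someone judges
# P(Linda is a bank teller) = 0.05 and P(Linda is a feminist) = 0.90.
p_teller, p_feminist = 0.05, 0.90

averaged = 0.5 * (p_teller + p_feminist)  # the conjectured averaging rule
bound = min(p_teller, p_feminist)         # upper bound on any true conjunction P(A and B)

print(averaged, bound)  # 0.475 vs 0.05 -- the averaged "conjunction" breaks the bound
```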
You should read Conjunction Controversy (Or, How They Nail It Down) before proposing this sort of thing.
In particular, if you haven’t already, please read Extensional Versus Intuitive Reasoning: The Conjunction Fallacy in Probability Judgment in full—it contains details of 22 different experiments designed to address problems like this.
These articles talk only about the conjunction fallacy. Maybe it wasn’t clear enough from the context, but my above reply was to a comment about the anchoring bias, and was meant to comment on that specific finding.
But in any case, I have no doubt that these results are reproducible in the lab. What I’m interested in is how much of these patterns we can see in the real world, and where exactly they tend to manifest themselves. Surely you will agree that findings about the behavior of captive undergraduates and the other usual sorts of lab subjects should be generalized to human life at large only with some caution.
Moreover, if clear patterns of bias are found to occur in highly artificial experimental setups, it still doesn’t mean that they are actually relevant in real-life situations. What I’d like to see are not endless lab replications of these findings, but instead examples of relevant real-life decisions where these particular biases have been identified.
Given these considerations, I think that article by Eliezer Yudkowsky shows a bit more enthusiasm for these results than is actually warranted.
I believe this has been discussed in the context of the Efficient Market Hypothesis. I view it as something akin to the feud between Islam and Christianity.
mattnewport:
I’m unable to grasp the analogy—could you elaborate on that?
Two schools of economics / religion (behavioural / neoclassical, Islam / Christianity) with many shared assumptions (similar holy texts), which have attracted followers by offering common-sense advice and a solid framework of practical value, but which pursue an ongoing holy war over certain doctrinal issues that are equally flawed and ungrounded in reality.
Or: what they agree on is largely wrong but has some instrumentally useful elements. What they disagree on is largely irrelevant. The priesthood considers the differences very significant but most people ignore everything but the useful bits and get on with their lives.
But that example is stated in probabilities. Here’s how I would redesign the experiment to make the subjects think in frequencies (a rough sketch of the resulting prompts follows the list):
Generate a random integer from 0 to the number of countries in the world.
Ask subjects whether this number is higher or lower than the number of nations in Africa.
Ask subjects to estimate the number of nations in Africa.
Sub-treatments: either tell subjects the total number of nations in the world, or don’t.
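A minimal sketch of how those prompts might be generated, assuming Python and a placeholder figure of 195 for the total number of nations (neither of which is part of the proposal above):

```python
import random

TOTAL_NATIONS = 195  # placeholder figure; substitute whatever count you actually want to use

def make_prompts(tell_total: bool):
    """Build the comparative and absolute questions for one subject."""
    anchor = random.randint(0, TOTAL_NATIONS)  # random integer from 0 to the number of countries
    preamble = f"There are {TOTAL_NATIONS} nations in the world. " if tell_total else ""
    comparative = f"{preamble}Is the number of nations in Africa higher or lower than {anchor}?"
    absolute = f"{preamble}How many nations are in Africa?"
    return anchor, comparative, absolute

# One subject from each sub-treatment:
print(make_prompts(tell_total=True))
print(make_prompts(tell_total=False))
```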