Not all of the experiments are the same just because they reached the same conclusion.
The color experiment is different: it’s about asking people to do something they don’t know how to do, and then somehow interpreting that as a bias.
The paper doesn’t describe the test conditions, which makes it unscholarly: scientific papers are supposed to have things like “possible sources of error” sections and to describe their experimental procedure carefully. It doesn’t even say whether the test was double-blind. Presumably it was not; if not, an explanation of why it doesn’t need to be is required but not provided. There are various issues there, e.g. pressure not to ask questions could easily have been present.
Setting that aside, it’s putting people in an artificial situation that they don’t understand well and getting them to guess at the answer. This doesn’t simulate real life well. It has a consistent bias: people are better at real life than at artificial situations designed so that they will fail.
EDIT: Look at this sentence in the 1983 paper:
Our problems, of course, were constructed to elicit conjunction errors, and they do not provide an unbiased estimate of the prevalence of these errors.
See. Biased. Specifically designed to fool people. They directly admit it. You never read the paper, right? You just heard a summary of the conclusions and trusted that they had used the methods of science, which they had not.
Our problems, of course, were constructed to elicit conjunction errors, and they do not provide an unbiased estimate of the prevalence of these errors.
See. Biased. Specifically designed to fool people.
What that says is that the test in question addresses only one of two obvious questions about the conjunction fallacy. It addresses the question of whether that kind of mistake is one that people do sometimes make; it does not address the question of how often people make that kind of mistake in practice. It’s like if they were trying to find out if overdosing on a certain vitamin can be fatal: They’ll give their (animal) subjects some absurd amount of it, and see what happens, and if that turns out to be fatal, then they go trying different amounts to see where the effect starts. (That way they know to look at, say, kidney function, if that’s what killed the first batch of subjects, rather than having to test all the different body systems to find out what’s going on.) All the first test tells you is whether that kind of overdose is possible at all.
Just because it doesn’t answer every question you’d like it to doesn’t mean that it doesn’t answer any questions.
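For concreteness, the specific mistake under discussion is the conjunction error: judging a conjunction “A and B” to be more probable than “A” alone. Probability theory rules this out, since every outcome satisfying both A and B also satisfies A. A minimal sketch (the event probabilities here are arbitrary illustrative numbers, not figures from the paper):

```python
import random

# Monte Carlo illustration of the conjunction rule: for any two events A and B,
# P(A and B) <= P(A), because every outcome in (A and B) is also in A.
random.seed(0)

trials = 100_000
count_a = 0         # outcomes where A occurred
count_a_and_b = 0   # outcomes where both A and B occurred

for _ in range(trials):
    a = random.random() < 0.10   # arbitrary illustrative probability for A
    b = random.random() < 0.40   # arbitrary illustrative probability for B
    if a:
        count_a += 1
        if b:
            count_a_and_b += 1

print(count_a_and_b <= count_a)  # True
```

No matter what probabilities are chosen, the count for the conjunction can never exceed the count for A alone; what the experiments test is whether people’s judgments respect this ordering.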
Do you understand that the situation of “someone using his intelligence to try to fool you” and the situation of “living life” are different? Studies of the former do not give results about the latter. The only valid conclusion from this study is “people can sometimes be fooled, on purpose.” But that isn’t the conclusion it claims to support. Being tricked intentionally is different from being inherently biased.
The point is “people can be fooled into making this specific mistake”, which is an indication that that specific mistake is one that people do make in some circumstances, rather than that specific mistake not being made at all. (As a counterexample, I imagine that it would be rather hard to trick someone into claiming that if you put two paperclips in a bowl, and then added two more paperclips to that bowl, and then counted the paperclips, you’d find three—that’s a mistake that people don’t make, even if someone tries to trick them.)
“Some circumstances” might only be “when someone is trying to trick them”, but even if that’s true (which the experiment with the dice suggests it’s not) that’s not as far removed from “living life” as you’re trying to claim—people do try to trick each other in real life, and it’s not too unusual to encounter other situations that are just as tricky as dealing with a malicious human.
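If the dice experiment referred to is the one in the 1983 paper, the setup (reconstructed from memory, so treat the exact numbers as an assumption) involved a die with four green faces and two red faces; subjects bet on which letter sequence would appear in a series of rolls, and many preferred GRGRRR to RGRRR, even though the former is just the latter with an extra requirement prepended and so cannot be more likely. A quick check of the per-run probabilities:

```python
# Die with 4 green (G) and 2 red (R) faces -- numbers assumed from memory of
# the 1983 paper's dice problem, not taken from this thread.
P = {'G': 4 / 6, 'R': 2 / 6}

def seq_prob(seq):
    """Probability that a run of len(seq) rolls matches seq exactly."""
    p = 1.0
    for face in seq:
        p *= P[face]
    return p

p1 = seq_prob('RGRRR')   # sequence 1
p2 = seq_prob('GRGRRR')  # sequence 2: sequence 1 with 'G' prepended

# Prepending a requirement multiplies the probability by P['G'] < 1,
# so sequence 2 is strictly less likely than sequence 1.
print(p2 < p1)  # True
```

Since every run matching GRGRRR contains a run matching RGRRR, the same ordering holds for the bet “this sequence appears somewhere in the series,” which is why choosing sequence 2 counts as a conjunction-type error rather than a mere trick of wording.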
The color experiment is different: it’s about asking people to do something they don’t know how to do, and then somehow interpreting that as a bias.
Setting that aside, it’s putting people in an artificial situation that they don’t understand well and getting them to make a guess at the answer. This doesn’t simulate real life well.
This is exhausting. Whatever this “real life” of yours is, it’s so boring, predictable, and uniform as to be not worth living. Who hasn’t had to adapt to new (yes, even “artificial”) situations before? Who hasn’t played a game? Who hasn’t bet on the outcome of an event?
Propose an experiment already. My patience is waning.
EDIT: I have, in fact, read the 1983 paper.
You think I have to propose a better experiment? We should just believe the best experiment anyone did, even if it’s no good?
Yes. If there’s no way to test your claim that systemic pseudoscientific practice brought about the conjunction fallacy, how can we know?
I have something like a thirty percent chance of correctly determining whether or not you will judge a given experimental design to be biased. Therefore I surely can’t do it.
No. Of course not.
Ceterum censeo propose an experiment already.
I don’t know of any experiment that would be any good. I think the experiments of this kind should stop, pending a new idea about how to do fundamentally better ones.
Then we’re in invisible dragon territory. I can safely ignore it.