Then you’re talking about conditional probability, P(Y | X), rather than joint probability.
It’s quite possible that if you thought the questions were asking about conditional probability, participants in these studies might have, too. Let’s take the Russia question:
“A complete suspension of diplomatic relations between the USA and the Soviet Union, sometime in 1983.”
“A Russian invasion of Poland, and a complete suspension of diplomatic relations between the USA and the Soviet Union, sometime in 1983.”
Taken more literally, this question is asking about the joint probability: P(invasion & suspension of diplomatic relations)
But in English, the question could be read as: “A Russian invasion of Poland and, given this invasion, a complete suspension of diplomatic relations between the USA and the Soviet Union, sometime in 1983.”
in which case participants making that interpretation might give P(suspension of diplomatic relations | invasion)
It’s not at all clear that the participants interpreted these questions the way the experimenters thought.
What I mean is, I might not have any particular notion of what could cause a complete suspension of diplomatic relations, and give it, say, .01 probability. Then, when asked the second question, I might think “Oh! I hadn’t thought of that—it’s actually quite likely (.5) that there’ll be an invasion, and that would be likely (.5) to cause a suspension of diplomatic relations, so A & B has a probability of .25. Of course, this means that B itself has a probability of at least .25 (actually slightly more, since B could also happen without A), so if I can go back and change my first answer, I will.”
(These numbers are not representative of anything in particular!)
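The arithmetic in that hypothetical can be sketched as follows. The numbers are the illustrative ones from the comment above, not estimates of anything real:

```python
# Illustrative numbers only (see the disclaimer above).
p_invasion = 0.5            # P(A): Russian invasion of Poland
p_susp_given_inv = 0.5      # P(B | A): suspension, given an invasion
p_susp_given_no_inv = 0.01  # P(B | not A): suspension from some other cause

# Joint probability of invasion AND suspension:
p_joint = p_invasion * p_susp_given_inv  # 0.25

# Total probability of a suspension (law of total probability):
p_susp = p_joint + (1 - p_invasion) * p_susp_given_no_inv  # 0.255

# The conjunction can never exceed the single event:
assert p_susp >= p_joint
print(p_joint, p_susp)  # 0.25 0.255
```

So a subject who only thought of the invasion scenario when prompted by the second question should, on reflection, also raise their answer to the first.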
I do agree, however, that the literature overall strongly suggests the veracity of this bias.
In the case of the Russia/Poland question, the subjects were professional political analysts. For this case in particular, we can reasonably assume that the information about political relations contained in the question itself was insignificant to them.
That’s… somewhat discouraging.
Enough so that I had to triple check the source to be sure I hadn’t got the details wrong.
It certainly contradicts the claim that these studies test artificial judgments that the subjects would never face in day-to-day life.