At some point I need to write a post about how I’m worried that there’s an “unpacking fallacy” or “conjunction fallacy fallacy” practiced by people who have heard about the conjunction fallacy but don’t realize how easy it is to make any event, including events that have already happened, look very improbable, simply by turning one pathway to it into a long series of conjunctions.
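To make the worry concrete, here’s a minimal sketch in Python (all of the probabilities are invented for illustration): each step in an enumerated pathway can look individually likely, yet the product over steps shrinks toward zero as the description gets more detailed, even though the event itself hasn’t changed.

```python
# Toy illustration (all probabilities made up): describing one pathway to an
# event in finer and finer detail drives that pathway's probability toward
# zero, because every extra conjunct multiplies in another factor below 1.

def pathway_probability(step_probs):
    """Probability of one pathway: the product of its (conditional) step probabilities."""
    p = 1.0
    for step in step_probs:
        p *= step
    return p

# The same pathway, "unpacked" at three levels of detail.
descriptions = {
    "1 step":    [0.5],
    "7 steps":   [0.9] * 7,     # each step looks quite likely...
    "100 steps": [0.97] * 100,  # ...and these look nearly certain
}

for label, steps in descriptions.items():
    print(f"{label:>9}: {pathway_probability(steps):.3f}")
# 1 step:    0.500
# 7 steps:   0.478
# 100 steps: 0.048
```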
Luke asked me to look into this literature for a few hours. Here’s what I found.
The original paper (Tversky and Koehler 1994) is about disjunctions, and how unpacking them raises people’s estimates of the probability. For example, asking people to estimate the probability that someone died of “heart disease, cancer, or other natural causes” yields a higher estimate than just asking about “natural causes.”
They consider the hypothesis that this happens because subjects take the researcher’s apparent emphasis on a category as evidence that it’s more likely, but they tested and disconfirmed this hypothesis by telling people to take the last digit of their phone number and estimate the percentage of couples that have that many children. The estimates summed to well over 100%, even though the digits came from the subjects’ own phone numbers rather than from any emphasis by the researcher.
Finally, they checked whether experts are vulnerable to this bias with an experiment similar to the first, using physicians at Stanford University as the subjects and asking them about a hypothetical case of a woman admitted to an emergency room. They confirmed that yes, experts are vulnerable to this mistake too.
This phenomenon is known as “subadditivity.” A subsequent study (Rottenstreich and Tversky 1997) found that subadditivity can occur even with explicit disjunctions. Macchi et al. (1999) found evidence of superadditivity: ask some people how probable it is that the freezing point of alcohol is below that of gasoline, ask others how probable it is that the freezing point of gasoline is below that of alcohol, and the average answers to the two questions sum to less than 1.
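To pin down the terminology, here’s a toy illustration in Python; all of the numbers are invented, not taken from the papers.

```python
# Invented numbers illustrating sub- vs. superadditivity of judged probabilities.

# Tversky & Koehler-style unpacking of "natural causes":
p_packed = 0.60                   # hypothetical judged P(died of natural causes)
p_unpacked = [0.25, 0.30, 0.25]   # heart disease, cancer, other natural causes
# The unpacked components sum to more than the packed judgment: subadditivity.
print(f"{sum(p_unpacked):.2f} > {p_packed:.2f}")  # 0.80 > 0.60

# Macchi et al.-style complementary pair, judged by two separate groups:
p_alcohol_lower = 0.40    # judged P(alcohol's freezing point is below gasoline's)
p_gasoline_lower = 0.35   # judged P(gasoline's freezing point is below alcohol's)
# Complementary judgments summing to less than 1: superadditivity.
print(f"{p_alcohol_lower + p_gasoline_lower:.2f} < 1")  # 0.75 < 1
```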
Other studies try to refine the mathematical model of how people make judgments in these kinds of cases, but the experiments I’ve described are the most striking empirical results, I think. One experiment that talks about unpacking conjunctions (rather than disjunctions, like the experiments I’ve described so far) is Van Boven and Epley (2003), particularly their first experiment, where they asked people how severely an oil refinery should be punished for pollution. The pollution is described either as leading to an increase in “asthma, lung cancer, throat cancer, or all varieties of respiratory diseases,” or just as leading to an increase in “all varieties of respiratory diseases.” In the first condition, people want to punish the refinery more. But, despite being notably different from the previous unpacking experiments, this still isn’t what Eliezer was talking about.
What this shows is that people are inconsistent in a certain way. If you ask them the same question in two different ways (packed vs. unpacked), you get different answers. Is there any indication of which is the better way to ask the question, or whether asking it some other way is better still? Without an answer to this question, it’s unclear to me whether we should talk about an “unpacking fallacy” or a “failure to unpack fallacy.”
Here’s a handy example discussion of related conjunction issues from the Project Cyclops report:
We have outlined the development of technologically competent life on Earth as a succession of steps to each of [which] we must assign an a priori probability less than unity. The probability of the entire sequence occurring is the product of the individual (conditional) probabilities. As we study the chain of events in greater detail we may become aware of more and more apparently independent or only slightly correlated steps. As this happens, the a priori probability of the entire sequence approaches zero, and we are apt to conclude that, although life indeed exists here, the probability of its occurrence elsewhere is vanishingly small.
The trouble with this reasoning is that it neglects alternate routes that converge to the same (or almost the same) end result. We are reminded of the old proof that everyone has only an infinitesimal chance of existing. One must assign a fairly small probability to one’s parents and all one’s grandparents and (great)^n-grandparents having met and mated. Also one must assign a probability on the order of 2^-46 to the exact pairing of chromosomes arising from any particular mating. When the probabilities of all these independent events that led to a particular person are multiplied, the result quickly approaches zero. This is all true. Yet here we all are. The [explanation] is that, if an entirely different set of matings and fertilizations had occurred, none of “us” would exist, but a statistically indistinguishable generation would have been born, and life would have gone on much the same.
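The arithmetic behind that last point is easy to check. Here’s a sketch using the report’s 2^-46 chromosome-pairing figure: any particular pairing is astronomically unlikely, but the possible pairings are disjoint routes to “a child is born,” so their probabilities sum back to 1.

```python
# The probability of any *particular* chromosome pairing from one mating is
# about 2**-46, but there are 2**46 such pairings, each a disjoint route to
# the same end result, so the total probability of "some child or other" is 1.

p_exact_pairing = 2.0 ** -46
print(f"P(this exact person, given the mating) ~ {p_exact_pairing:.1e}")  # ~1.4e-14

n_pairings = 2 ** 46
print(f"P(some child or other) = {n_pairings * p_exact_pairing}")  # 1.0
```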
Related: There’s a small literature on what Tversky called “support theory,” which discusses packing and unpacking effects: Tversky & Koehler (1994); Ayton (1997); Rottenstreich & Tversky (1997); Macchi et al. (1999); Fox & Tversky (1998); Brenner & Koehler (1999); Chen et al. (2001); Van Boven & Epley (2003); Brenner et al. (2005); Bilgin & Brenner (2008).
Below are some other messy notes I took:
http://commonsenseatheism.com/wp-content/uploads/2013/10/Fox-Tversky-A-belief-based-account-of-decision-under-uncertainty.pdf Uses support theory to develop account of decision under uncertainty.
http://commonsenseatheism.com/wp-content/uploads/2013/10/Brenner-Koehler-Subjective-probability-of-disjunctive-hypotheses-local-weight-models-for-decomposition-and-evidential-support.pdf Something about local weights; didn’t look at this one much.
http://commonsenseatheism.com/wp-content/uploads/2013/10/Chen-et-al-The-relation-between-probability-and-evidence-judgment-an-extension-of-support-theory.pdf Tweaking math behind support theory to allow for superadditivity.
http://commonsenseatheism.com/wp-content/uploads/2013/10/Brenner-et-al-Modeling-patterns-of-probability-calibration-with-random-support-theory.pdf Introduces notion of random support theory.
http://bear.warrington.ufl.edu/brenner/papers/bilgin-brenner-jesp08.pdf Unpacking effects weaker when dealing with near future as opposed to far future.
Other articles debating how to explain basic support theory results:
http://bcs.siu.edu/facultypages/young/JDMStuff/Sloman%20(2004)%20unpacking.pdf
http://aris.ss.uci.edu/~lnarens/Submitted/problattice11.pdf
http://eclectic.ss.uci.edu/~drwhite/pw/NarensNewfound.pdf