Burdensome Details
Merely corroborative detail, intended to give artistic verisimilitude to an otherwise bald and unconvincing narrative . . .
—Pooh-Bah, in Gilbert and Sullivan’s The Mikado
The conjunction fallacy occurs when humans assign a higher probability to a proposition of the form “A and B” than to one of its conjuncts “A” or “B” in isolation, even though it is a theorem that a conjunction is never likelier than either of its conjuncts. For example, in one experiment, 68% of the subjects ranked it more likely that “Reagan will provide federal support for unwed mothers and cut federal support to local governments” than that “Reagan will provide federal support for unwed mothers.”1
A long series of cleverly designed experiments, which weeded out alternative hypotheses and nailed down the standard interpretation, confirmed that the conjunction fallacy occurs because we “substitute judgment of representativeness for judgment of probability.”2 By adding extra details, you can make an outcome seem more characteristic of the process that generates it. You can make it sound more plausible that Reagan will support unwed mothers by adding the claim that Reagan will also cut support to local governments. The implausibility of one claim is compensated for by the plausibility of the other; they “average out.”
Which is to say: Adding detail can make a scenario sound more plausible, even though the event necessarily becomes less probable.
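The arithmetic behind that theorem can be sketched in a few lines. The probabilities below are invented for illustration, not the subjects’ actual estimates:

```python
# Invented, purely illustrative probabilities.
p_unwed = 0.10      # P(Reagan supports unwed mothers)
p_cut_local = 0.60  # P(Reagan cuts support to local governments)

# Assuming independence, the joint probability is the product,
# and in any case a conjunction can never exceed its smaller conjunct.
p_joint = p_unwed * p_cut_local
assert p_joint <= min(p_unwed, p_cut_local)
print(p_joint)  # 0.06 -- less probable than either claim alone
```

Whatever the true numbers, multiplying two probabilities below 1 can only shrink the result.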
If so, then, hypothetically speaking, we might find futurists spinning unconscionably plausible and detailed future histories, or find people swallowing huge packages of unsupported claims bundled with a few strong-sounding assertions at the center.
If you are presented with the conjunction fallacy in a naked, direct comparison, then you may succeed on that particular problem by consciously correcting yourself. But this is only slapping a band-aid on the problem, not fixing it in general.
In the 1982 experiment where professional forecasters assigned systematically higher probabilities to “Russia invades Poland, followed by suspension of diplomatic relations between the USA and the USSR” than to “Suspension of diplomatic relations between the USA and the USSR,” each experimental group was only presented with one proposition.3 What strategy could these forecasters have followed, as a group, that would have eliminated the conjunction fallacy, when no individual knew directly about the comparison? When no individual even knew that the experiment was about the conjunction fallacy? How could they have done better on their probability judgments?
Patching one gotcha as a special case doesn’t fix the general problem. The gotcha is the symptom, not the disease.
What could the forecasters have done to avoid the conjunction fallacy, without seeing the direct comparison, or even knowing that anyone was going to test them on it? It seems to me that they would need to notice the word “and.” They would need to be wary of it—not just wary, but leap back from it—even without knowing that researchers were afterward going to test them on the conjunction fallacy in particular. They would need to notice the conjunction of two entire details, and be shocked by the audacity of anyone asking them to endorse such an insanely complicated prediction. And they would need to penalize the probability substantially—a factor of four, at least, according to the experimental details.
It might also have helped the forecasters to think about possible reasons why the US and Soviet Union would suspend diplomatic relations. The scenario is not “The US and Soviet Union suddenly suspend diplomatic relations for no reason,” but “The US and Soviet Union suspend diplomatic relations for any reason.”
And the subjects who rated “Reagan will provide federal support for unwed mothers and cut federal support to local governments”? Again, they would need to be shocked by the word “and.” Moreover, they would need to add absurdities—where the absurdity is the log probability, so you can add it—rather than averaging them. They would need to think, “Reagan might or might not cut support to local governments (1 bit), but it seems very unlikely that he will support unwed mothers (4 bits). Total absurdity: 5 bits.” Or maybe, “Reagan won’t support unwed mothers. One strike and it’s out. The other proposition just makes it even worse.”
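Treating absurdity as negative log-base-2 probability, the adding-absurdities arithmetic looks like this (the bit counts are the ones from the text; the exact probabilities are my illustrative choices):

```python
import math

def absurdity_bits(p):
    # Absurdity as negative log base 2 of probability: bits add across
    # independent conjuncts, where the probabilities multiply.
    return -math.log2(p)

p_cut_local = 1 / 2  # "might or might not" -> 1 bit
p_unwed = 1 / 16     # "very unlikely"      -> 4 bits

total = absurdity_bits(p_cut_local) + absurdity_bits(p_unwed)
print(total)        # 5.0 bits of total absurdity
print(2 ** -total)  # joint probability 1/32, assuming independence
```

Adding log probabilities is the same operation as multiplying probabilities, which is why absurdities accumulate rather than average out.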
Similarly, consider Tversky and Kahneman’s (1983) experiment based around a six-sided die with four green faces and two red faces.4 The subjects had to bet on the sequence (1) RGRRR, (2) GRGRRR, or (3) GRRRRR appearing anywhere in twenty rolls of the die. Sixty-five percent of the subjects chose GRGRRR, which is strictly dominated by RGRRR, since any roll record containing GRGRRR also contains RGRRR and therefore also pays off for RGRRR. How could the subjects have done better? By noticing the inclusion? Perhaps; but that is only a band-aid; it does not fix the fundamental problem. By explicitly calculating the probabilities? That would certainly fix the fundamental problem, but you can’t always calculate an exact probability.
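A quick Monte Carlo sketch (my own, not from the paper) makes the dominance visible. Because the same seeded rolls are reused for both patterns, and every record containing GRGRRR also contains RGRRR, the shorter sequence must win at least as often:

```python
import random

def hit_rate(pattern, rolls=20, trials=100_000, seed=0):
    """Estimate the chance that `pattern` appears somewhere in
    `rolls` throws of a die with four green and two red faces."""
    rng = random.Random(seed)  # same seed -> same simulated rolls
    faces = "GGGGRR"
    hits = 0
    for _ in range(trials):
        record = "".join(rng.choice(faces) for _ in range(rolls))
        if pattern in record:
            hits += 1
    return hits / trials

p_short = hit_rate("RGRRR")
p_long = hit_rate("GRGRRR")
print(p_short, p_long)  # the shorter sequence appears more often
```

Each extra roll in the pattern multiplies in another probability below 1, so the longer sequence is necessarily rarer no matter how green it looks.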
The subjects lost heuristically by thinking: “Aha! Sequence 2 has the highest proportion of green to red! I should bet on Sequence 2!” To win heuristically, the subjects would need to think: “Aha! Sequence 1 is short! I should go with Sequence 1!”
They would need to feel a stronger emotional impact from Occam’s Razor—feel every added detail as a burden, even a single extra roll of the dice.
Once upon a time, I was speaking to someone who had been mesmerized by an incautious futurist (one who adds on lots of details that sound neat). I was trying to explain why I was not likewise mesmerized by these amazing, incredible theories. So I explained about the conjunction fallacy, specifically the “suspending relations ± invading Poland” experiment. And he said, “Okay, but what does this have to do with—” And I said, “It is more probable that universes replicate for any reason, than that they replicate via black holes because advanced civilizations manufacture black holes because universes evolve to make them do it.” And he said, “Oh.”
Until then, he had not felt these extra details as extra burdens. Instead they were corroborative detail, lending verisimilitude to the narrative. Someone presents you with a package of strange ideas, one of which is that universes replicate. Then they present support for the assertion that universes replicate. But this is not support for the package, though it is all told as one story.
You have to disentangle the details. You have to hold up every one independently, and ask, “How do we know this detail?” Someone sketches out a picture of humanity’s descent into nanotechnological warfare, where China refuses to abide by an international control agreement, followed by an arms race . . . Wait a minute—how do you know it will be China? Is that a crystal ball in your pocket or are you just happy to be a futurist? Where are all these details coming from? Where did that specific detail come from?
For it is written:
If you can lighten your burden you must do so.
There is no straw that lacks the power to break your back.
1 Amos Tversky and Daniel Kahneman, “Judgments of and by Representativeness: Heuristics and Biases,” in Judgment Under Uncertainty, ed. Daniel Kahneman, Paul Slovic, and Amos Tversky (New York: Cambridge University Press, 1982), 84–98.
2 See Amos Tversky and Daniel Kahneman, “Extensional Versus Intuitive Reasoning: The Conjunction Fallacy in Probability Judgment,” Psychological Review 90, no. 4 (1983): 293–315 and Daniel Kahneman and Shane Frederick, “Representativeness Revisited: Attribute Substitution in Intuitive Judgment,” in Heuristics and Biases: The Psychology of Intuitive Judgment, ed. Thomas Gilovich, Dale Griffin, and Daniel Kahneman (Cambridge University Press, 2002) for more information.
3 Tversky and Kahneman, “Extensional Versus Intuitive Reasoning.”
4 Ibid.
In some situations, part of the problem may simply be that instead of calculating a joint probability, subjects are calculating a conditional probability. The effect of mistakenly calculating a conditional probability is that one event can seem highly probable given that the other event occurred; I believe this could be a mathematical reason for the plausibility effect in some situations.
For example: the probability that USA and USSR suspend diplomatic relations given that Russia invades Poland is probably more likely than the marginal event, USA and USSR suspend diplomatic relations.
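With invented numbers (mine, purely for illustration), the distinction the comment draws looks like this:

```python
# Invented probabilities, for illustration only.
p_invasion = 0.10                # P(A): Russia invades Poland
p_suspend_given_invasion = 0.80  # P(B|A): suspension, given invasion
p_suspend = 0.15                 # P(B): suspension for any reason

# P(A and B) = P(A) * P(B|A)
p_joint = p_invasion * p_suspend_given_invasion

# The conditional can dwarf the marginal...
assert p_suspend_given_invasion > p_suspend
# ...but the joint event can never be more probable than B alone.
assert p_joint <= p_suspend
print(p_joint)  # 0.08
```

Confusing P(B|A) with P(A and B) would make the detailed scenario feel more probable than the bare one.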
Here’s a candidate for a question to illustrate a couple of related biases:
Given the following two dice roll records:
1: HHHHHHHHHHHHHHHHHHHHHHHHHHHHHHH
2: THTTHTHHTHTTHHHTTHTHTTHHTHHTTTH
Which of the following is true:
A) 1 is more probable than 2.
B) 2 is more probable than 1.
C) Both are equally probable.
Now, I predict that there will be at least 1 “normal” person who answers C.
“Unbelievable,” you say?
Stay tuned!
I will make a stronger prediction: If this question were posed to 1000 randomly selected, well-dressed, Nordic-looking people found purposely walking the downtown sidewalks during daytime in a large American city (with luck, eliminating the possibility that I cheat by selecting 1000 people from insane asylums or from people who know no English), I predict that there will be at least 1 person who answers C.
Why? Because it is a well known fact that there exist, in much larger numbers than 1 in 1000, people capable, willing, and even eager to use the “toilet paper tube fallacy”. Any of such people combined with any of those who are susceptible to the “literalist fallacy” will answer C.
Let me make a stronger prediction. Even given a 4th choice, so slyly left out:
D) Beats me.
I predict that, still, at least one person will select C.
Now, list the following in order of probability:
a) That one person is a moron.
b) That one person is a computer programmer.
c) That one person is a card shark.
d) That one person believed that choice B was to be taken literally. That is, that B really (really!) means that the very first coin flip came out tails—NOT HEADS! - tails, the second heads, the third tails, and so on.
e) That one person ignored as much context around the dice roll question as he could. That is, that person pretended he was similar to a computer in seeing the world through what amounts to a toilet paper tube. Just the facts, Ma’am.
f) That one person is a card shark and a computer programmer.
g) b and c
h) d and e and f
i) All of the above.
“h”, anyone? :)
But, a thought on this question: How to avoid the conjunction fallacy?
Perhaps a better way to do so than keying on the word “and”, (which, as we all know, means “OR”, but not “OR and not AND”) is to key on the word “probability”. That is, when you see that word (or sense its meaning) as a goal, hand the question to the modern equivalent of a four-function calculator and let it grind out the numbers. To do so otherwise would be like multiplying 10821 by 11409 in your head, wouldn’t it?
I’m sorry—I suppose I’m probably missing something, but I can’t think of any other possible way to interpret this question. I agree that it is far more probable to see a sequence equally containing both heads and tails than one containing only heads, but it seems like you are asking for the relative probabilities of two highly specific sequences of the same length. Could someone please explain?
A. There is significantly greater than a 1 in 2^31 chance that the coin is significantly biased towards heads. This sequence overwhelms almost all priors of fairness, and thus we can conclude that the coin is almost certainly biased towards heads.
He’s rolling a die. As such, both “possibilities” are overwhelmingly improbable, as I have never seen a die labeled with heads and tails, and I spend a lot of time around dice.
Tabletop RPGs often use the term “roll M N-sided dice”, or “MdN” for short, to mean, “generate M high-quality random numbers between 1 and N”. The dice themselves are merely an implementation detail; they could be physical dice, or some random-number generator built into a collaborative RPG software program, etc. It’s common to refer to coins as “d2”s, because that’s the function that they serve.
Another interesting die roll that comes up quite often is “Md3”; the 3-sided die is usually implemented by taking the more familiar 6-sided die and replacing 4,5,6 with 1,2,3 on its faces.
The percentile die, which is a golf-ball sized polyhedron with 100 faces, is also quite iconic, though rarely used in practice due to being ridiculous. Most people just roll two 10-sided dice, instead.
When I hear “high-quality random numbers” I think “crypto-quality random numbers”—which certainly suffice, but are clearly overkill...
You would be amazed at what tabletop gamers do and do not consider “overkill” :-)
EDIT: In the interests of full disclosure, I am a tabletop gamer, and yet I do consider crypto-quality random numbers to be overkill, but I may be in the minority on this.
Yes, but those are typically the same people who have rituals around their dice. Which, on reflection, seems kinda contradictory...
Yes, but a d2 has the values 1, and 2, not heads and tails.
Okay, Felix, I have read your painfully detailed description of a hypothetical situation. Now I wanna know what your point is.
Ooooo! “Dice roll?” By, God, my good fellow, you mean, “coin flips!”
Most of the time, detailed futuristic scenarios are not presented with the intent to say, “exactly this will happen.” Rather, they are intended to say, “something like this will happen.” Many people have trouble with purely abstract thinking, and benefit from the clarity of specific examples. If you make a general statement about the dangers of reckless scientific experiments (for example), it is likely that many of your listeners either won’t be able to connect that to specifics, or will come up with examples in their minds that are very different from what you meant. If you write a novel about it called “Frankenstein,” those people will get the point very vividly. The novel has approximately zero chance of coming to pass exactly as written, but it is easier to understand. Unfortunately, the detailed approach carries with it the very real danger that some people will take the fictional details to be the substance of the claim, rather than the abstract principle that they illustrate.
The variable-width typeface my browser is using makes option 1 look longer than option 2; I had to copy/paste it into Notepad to see that both sequences were equally long. If I hadn’t double checked, I would have said sequence 2 is more probable than sequence 1 because it is shorter.
Echoing Richard, I can see another good reason (and yes, I read your last post) why a more complicated scenario could be assigned a higher probability. Take the USSR example. Suppose that a USSR invasion of Poland is the only reasonably-likely event that could cause a suspension of diplomatic relations. Suppose also that no one would have thought of that as a possibility until it was suggested. Suppose further that once it was suggested as a possibility, the forecasters would realize quickly that it was actually reasonably likely. (I know I’ve twisted the example beyond all plausibility, but there probably are real situations fitting this form.) Effectively, the forecasters who got the question including Poland have information the others don’t—the realization that there is a probable incident that could cause suspension of diplomatic relations—and can rationally assign a higher probability. The forecasters are still at fault for a massive failure of imagination, but not for anything as simple and stupid as the Conjunction Fallacy.
Felix, are you saying that someone shouldn’t answer C, because they should consider the context and consider a biased coin? If I knew the coin might be biased, I would answer A, but I don’t see what that has to do with any of Eliezer’s examples.
Nick: Kahneman and Tversky actually mention the mechanism you describe as one cause of the conjunction fallacy, in their 1983 paper that Eliezer linked to. I agree that in the case where the people who see “X” and the people who see “X and Y” are different, this makes it rather unfair to call it a fallacy; K&T don’t seem to think so, perhaps because it’s clear that people in this situation are either underestimating the probability of X or overestimating that of X&Y or both, so they’re making a mistake even if the explanation for that mistake is reasonably non-embarrassing.
I think that Felix is mostly making fun of people who try to think mathematically and who try to answer the question they’re asked rather than some different question that they think might make more sense, rather than trying to make a serious point about the nature of biased coins.
Nick: Nice spin! :) Context would be important if Eliezer had not asserted as a given that many, many experiments have been done to preclude any influence of context. My extremely limited experience and knowledge of psychological experiments says that there is a 100% chance that such is not a valid assertion. Imagine a QA engineer trying to skate by with the setups of psych experiments you have run in to. But, personal, anecdotal experience aside, it’s real easy to believe Eliezer’s assertion is true. Most people might have a hard time tuning out context, though, and therefore might have a harder time, both with conjunction fallacy questionnaires and accepting Eliezer’s assertion.
g: Yes, keeping in mind that I would be first in line to answer C, myself!
Choice (B) seems a poster boy for “representation”. So, that a normal person would choose B is yet another example of this, “probability” question not being a question about probability, but about “representation”. Which is the point. Why is it hard to imagine that the word, “probable” does not mean, in such questions’ contexts, or even, perhaps, in normal human communication, “probable” as a gambler or statistician would think of its meaning? Or, put another way, g, “who try to answer the question they’re asked rather...” is an assumptive close. I don’t buy it. They were not asked the question you, me, Eliezer, the logician or the autistic thought. They were asked the question that they understood. And, they have the votes to prove it. :)
So far as people making simple logical errors in computing probabilities, as is implied by the word, “fallacy”, well, yeah. Your computer can beat you in both logic and probabilities. Just as your calculator can multiply better than you.
Anyway, I believe that the functional equivalent of visual illusions are inherent in anything one might call a mind. I’m just not convinced that this conjunction fallacy is such a case. The experiments mentioned seem more to identify and wonderfully clarify an interesting communications issue—one that probably stands out simply because there are, in these times, many people who make a living answering C.
Interesting post!
But: “Similarly, consider the six-sided die with four green faces and one red face.”
I seem to be good at catching trivial mistakes.
I was going to reply that this is not obviously a mistake, since we might just be ignorant of what the other side is. Then I realized the guesses listed after suggest that the die has four red faces and one green face. Nice catch.
Reagan would be unlikely to provide support to unwed mothers, but maybe as part of a deal in which he got what he wanted, a reduction in expenditures.
Irrelevant. If there is any possible explanation where he provides the support without that specific deal, it is automatically less likely that both happen, even if the most likely scenario (90%+) of supporting unwed mothers is given said deal. If it is the only possibility, the scenarios would be equally likely; the conjunction could still not possibly be more likely.
Your comment cleared up quite a bit for me: this was my initial objection (that Reagan would likely only do so as part of a compromise), but the conjunction is at most equally likely. It does bring up for me another question, though: that of the hidden disjunction. For myself, this is the most insidious tripping point: my brain assumes that if Reagan were to compromise, that information would be provided, and so extrapolates the first statement to “Reagan provides support for unwed mothers without taking anything,” and then rules that conjunction as less likely than that he did trade something. I’d be curious to know if anyone else has the same sticking point: it seems to be baked into how I process language (a la Tom Scott’s rules of implicit assumption of utility).
Did you mean to put a (3) in there?
This is a very nice measure (which I’ve seen before) and term for it (which I have not seen).
Eliezer, did you develop this yourself? Should I say to other people ‘Artificial-intelligence researcher Eliezer Yudkowsky defines the absurdity of a proposition to be the opposite of the logarithm of its probability, A = –log P.’ as an introduction before I start to use it? (I threw in a minus sign so that higher probability would be lower absurdity; maybe you were taking the logarithm base 1⁄2 so you didn’t have to do that.)
Thirteen years later I come to point out that this would make the entropy of a distribution its expected absurdity, which actually feels deep somehow.
One possibility is that our intuitive sense of ‘is this statement likely to be true’ is developed to detect lies by other human beings, rather than to simulate the external world.
For example, if someone is trying to convince you of a tribe member’s bad behaviour, the ability to produce extra details (time/location/etc.) makes it more plausible that they are truthful rather than lying. However, in probability terms each extra detail makes the claim less likely (e.g., probability of bad behaviour × probability of doing it in location x, etc.).
Cross-posted in the sequence re-run
Depends on people’s definition of truth, surely?
If your scoring system for a conjunction statement where one part is true and the other is untrue is to score that as half-true, then the probabilities for the Reagan case are wholly reasonable.
(ie for “Reagan will provide federal support for unwed mothers and cut federal support to local governments”, you score 1 for both parts true, 0.5 for one part true and 0 for neither part true, while for “Reagan will provide federal support for unwed mothers” you can only score 1 for true and 0 for false).
If—and it seems reasonable—the intuitive scoring system for a conjunctive statement is similar to this, then the predictions are wholly reasonable.
This means that when there is a real conjunction, we tend to misinterpret it. It seems reasonable then to guess that we don’t have an intuitive approach to a true conjunction. If that’s the case, then the approach to overcoming the bias is to analyse joint statements to see if a partial truth scores any points—if it does, then our intuition can be trusted more than when it does not.
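The partial-credit hypothesis above can be sketched by comparing expected scores, with invented probabilities and independence assumed for simplicity:

```python
# Invented probabilities; independence assumed for simplicity.
p_unwed = 0.1  # Reagan supports unwed mothers
p_cut = 0.6    # Reagan cuts support to local governments

# Conjunction scored 1 if both parts true, 0.5 if exactly one, 0 if neither.
p_both = p_unwed * p_cut
p_exactly_one = p_unwed * (1 - p_cut) + (1 - p_unwed) * p_cut
conjunction_score = 1.0 * p_both + 0.5 * p_exactly_one

# Lone claim scored 1 if true, 0 if false.
single_score = p_unwed

print(conjunction_score, single_score)  # conjunction scores higher here
```

Under this scoring rule the conjunction’s expected score (0.35 with these numbers) beats the lone claim’s (0.1), so ranking the conjunction higher would be a reasonable response to a half-credit interpretation of the question.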
Logically, if conjunctions made something more likely, then disjunctions would make it less likely, which is another, similar fallacy people succumb to. The more times the word “or” appears in an explanation or theory, the more likely an onlooker is to say “Now you’re just guessing” and lose confidence in the claim, even though each added disjunct necessarily makes the claim more probable.
This is confusing for three reasons: (1) one takes the product of probabilities, not the average, to compute conjunct probabilities; (2) the sum of two logs is the log of a product, not an average; and (3) “absurdity” is not defined in this article beyond this inline define-it-as-you-go definition, the briefness of which is incommensurate with the profundity of the concept behind the term.
Did you mean “product” rather than “average”, or am I missing something?
tl;dr: internal validity doesn’t imply external validity
I would be interested to see whether modelling P(A∩B) incorrectly as the average of P(A) and P(B|A) would fit the error well. On that model, any detail B that fits well with a very unlikely primary event A increases its perceived likelihood.
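A quick sketch of that hypothesis, with all probabilities invented for illustration: if people average P(A) and P(B|A) instead of multiplying them, a plausible detail inflates an unlikely event.

```python
# Hypothetical "averaging" model of the conjunction fallacy.
p_a = 0.1          # unlikely primary event, e.g. "Russia invades Poland"
p_b_given_a = 0.8  # plausible detail given A, e.g. relations suspended

p_correct = p_a * p_b_given_a         # ~0.08: product, as probability requires
p_averaged = (p_a + p_b_given_a) / 2  # ~0.45: the hypothesized intuitive blend

# The averaged "probability" exceeds P(A) itself, reproducing the
# forecasters' ranking; the correct product is of course lower.
assert p_correct < p_a < p_averaged
```

Under this model the detailed scenario feels more likely (0.45) than the bare event (0.1), even though the correct conjunction probability (0.08) is lower than both.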
This got me thinking about making complicated plans in life. When you have fifteen steps laid out to get what you want, it sounds like a PLAN! It sounds like you covered your bases—look at all that detail! In reality a fifteen step plan has fifteen chances to go wrong, fifteen turning points where things can change. Every time you add a step the probability of your plan going awry increases. If you try to get 15 friends to the same brunch, some will be late and some won’t show up at all. Obviously complex plans can be necessary in life, but where they’re not, I’d like to avoid them.
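The arithmetic behind that worry is stark. Assuming (purely for illustration) that each step succeeds independently with the same probability, even very dependable steps compound badly:

```python
# Probability that an n-step plan survives, assuming each step
# succeeds independently with a made-up per-step reliability of 0.9.
p_step = 0.9
for n in (1, 5, 15):
    print(n, p_step ** n)
# 1 step   -> 0.9
# 5 steps  -> ~0.59
# 15 steps -> ~0.21
```

A fifteen-step plan of individually 90%-reliable steps succeeds barely one time in five.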
I’m reminded of Eisenhower’s quote “Plans are worthless, but planning is everything.” You can’t convince a million men to row across the channel by just saying “we’re going to kill Hitler.” You have to provide them with a string of proposals that are not likely to succeed by themselves, like jumping out of airplanes, sending thousands of bombers to Germany, splitting the atom in a bomb, in order for the mountainous proposition of invading Germany to end the war seem not just possible but probable.
This seems to me to help explain three closely related, essentially fictional phenomena: futurism, conspiracy theories, and suspension of disbelief in narrative fiction.
Futurism, as mentioned, becomes more palatable as details are added. AI overlords seem dubious, but spin a tale of white-coated researchers, misguided investments, and out-of-control corporations, and it suddenly becomes imaginable.
Conspiracy theories need a rabbit hole to guide victims down. Carefully constructed stories nab readers by beginning with dubious but plausible claims (“this is a first person account of Epstein’s limo driver”), followed by possible but improbable (“I was then transferred to Utah to dispose of bodies for the Romney family’s out of control nephew”), followed by the ludicrous (“I personally witnessed Roger Stone raising bodies from the dead on Epstein’s private island”). The final claim made in isolation would be rejected by anyone still standing at the rabbit hole’s edge. We need to add many improbable details for the final point to stick.
Finally, fiction in general is constructed of unlikely or impossible scenarios that we are somehow able to integrate into our identity or understanding of the outside world. We can think of every line and paragraph as a conjunct that makes the final theme (love always triumphs, good overcomes evil) not only palatable but absorbing, leaving the reader with a feeling of conviction.
One of the reasons I was having trouble with the Reagan example when I was reading this for the first time was that I was interpreting it as
“Reagan will provide federal support for unwed mothers AND cut federal support to local governments” is more probable than “Reagan will provide federal support for unwed mothers AND NOT cut federal support to local governments”.
The fact that one of the sentences was present in one option and absent in the other made me think that its absence implied it would NOT happen, when that wasn’t the case.
I wonder how common is that line of reasoning.
I like your attempt at this. I think what is important is that both Biden and Trump have contributed.
I just don’t think that it’s an either/or question.
I had the thought about “future civilizations create black holes” and thought I was the champion.
Wonderful that this example was included because it allowed me to learn part of the lesson :)
Thank you!
Hang on: I don’t think this structure is proof that the forecasters are irrational. As evidence, I will present you with a statement A, and then with a statement B. I promise you: if you are calibrated/rational, your estimate of P(A) will be less than your estimate of P(A and B).
Statement A: (please read alone and genuinely estimate its odds of being true before reading statement B)
186935342679883078818958499454860247034757454478117212334918063703479497521330502986697282422491143661306557875871389149793570945202572349516663409512462269850873506732176181157479 is composite, and one of its factors has a most significant digit of 2
Statement B:
186935342679883078818958499454860247034757454478117212334918063703479497521330502986697282422491143661306557875871389149793570945202572349516663409512462269850873506732176181157479 is divisible by 243186911309943090615130538345873011365641784159048202540125364916163071949891579992800729
I am not getting the intuition that this example is supposed to evoke. B is just one of the many ways (since I cannot factorise the number at sight) that A might be true, so whyever would I think P(A) < P(A & B), or even P(A) ≤ P(A & B) ?
Of course, P(A) < P(A | B), the latter term being 1. Strictly speaking, that is irrelevant, but perhaps the concept of P(A | B) is in the back of the mind of people who estimate P(A) < P(A & B) ?
Factoring is much harder than dividing, so you can verify A & B in a few seconds with python, but it would take several core-years to verify A on its own. Therefore, if you can perform these verifications, you should put both P(A) and P(A & B) at 1 (minus some epsilon for your tools being wrong). If you can’t perform the verifications, then you should not put P(A) at 1, since there was a significant chance that I would put a false statement in A. (In this case, I rolled a D100 and was going to put a false statement of the same form in A if I rolled a 1.)
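The asymmetry this comment leans on is easy to demonstrate at a smaller scale. The primes below are chosen for illustration; the 180-digit numbers above behave the same way, except that the search loop would take core-years instead of a moment:

```python
# Verifying a claimed factor is a single modulo operation;
# finding a factor from scratch is a search.
p, q = 1000003, 1000033  # two known primes, for illustration
n = p * q                # a semiprime with no small factors

# Cheap: checking the claimed divisor (the analogue of verifying A & B).
assert n % p == 0

# Expensive: recovering a factor by trial division
# (the analogue of verifying A alone, without the hint).
d = 2
while n % d != 0:
    d += 1
assert d == p  # ~a million iterations here; astronomically more at 180 digits
```

Trial division is the crudest method, but even the best known factoring algorithms leave the same gap: checking a hint stays trivial while finding one does not.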
I’m having trouble parsing your second paragraph: inarguably, P(A | B) = P(A & B) = 1, so surely P(A) < P(A | B) implies P(A) < P(A & B)?
You’ve introduced a third piece of information C: observing the result of a computation that B indeed divides A. With that information, I have P(A|C) = P(A & B|C) = 1. I also have P(A) < P(A | C). But without C, I still have P(A) ≥ P(A & B), not the opposite.
I am having trouble with your second paragraph. Certainly, P(A | B&C) = P( A & B | C) = 1. But when I do not know C, P(A | B) = 1 and P(A & B) < 1.
Probability axioms say that P(A)>=P(A and B), not P(A)>P(A and B).
Before reading statement B, did you estimate the odds of A being true as 1?
I rolled a 100-sided die before generating statement A, and was going to claim that the first digit was 3 in A and then just put “I lied, sorry” in B if I rolled a 1. If, before reading B, you estimated the odds of A as genuinely 1 (not .99 or .999 or .9999), then you should either go claim some RSA factoring bounties, or you were dangerously miscalibrated.
I guess this is evidence that probability axioms apply to probabilities, but not necessarily to calibrated estimates of probabilities given finite computational resources. This is why, in my post, I was very careful to talk about calibrated estimates of P(...) and not the probabilities themselves.
The examples seem to assume that “and” and “or” as used in natural language work the same way as their logical counterpart. I think this is not the case and that it could bias the experiment’s results.
As a trivial example the question “Do you want to go to the beach or to the city?” is not just a yes or no question, as boolean logic would have it.
Not everyone learns about boolean logic, and those who do likely learn it long after learning how to talk, so it’s likely that natural language propositions that look somewhat logical are not interpreted as just logic problems.
I think that this is at play in the example about Russia. Say you are on holiday and presented with one of these 2 statements:
1. “Going to the beach then to the city”
2. “Going to the city”
The second statement obviously means you are going only to the city, and not to the beach or anywhere else first.
Now back to Russia:
1. “Russia invades Poland, followed by suspension of diplomatic relations between the USA and the USSR”
2. “Suspension of diplomatic relations between the USA and the USSR”
Taken together, the 2nd proposition strongly implies that Russia did not invade Poland: after all if Russia did invade Poland no one would have written the 2nd proposition because it would be the same as the 1st one.
And it also implies that there is no reason at all for suspending relations: the statements read as though written by an objective know-it-all. A reason is given in the 1st statement, so in that context it is reasonable to assume that if there were a reason for the 2nd statement it would also be given, and the absence of further info means there is no reason.
Even when seeing only the 2nd proposition and not the 1st, it seems to me that humans have a need to attribute specific causes to effects (which might itself be a cognitive bias). Seeing no explanation for the event, it is natural to think “surely, there must be SOME reason; how likely is it that Russia suspends diplomatic relations for no reason?”, but confronted with the fact that no reason is given, the estimated probability of the event is lowered.
It seems that the proposition is not evaluated as pure boolean logic, but perhaps parsed taking into account the broader social context, historical context and so on, which arguably makes more sense in real life.