Absence of Evidence Is Evidence of Absence
From Robyn Dawes’s Rational Choice in an Uncertain World:
In fact, this post-hoc fitting of evidence to hypothesis was involved in a most grievous chapter in United States history: the internment of Japanese-Americans at the beginning of the Second World War. When California governor Earl Warren testified before a congressional hearing in San Francisco on February 21, 1942, a questioner pointed out that there had been no sabotage or any other type of espionage by the Japanese-Americans up to that time. Warren responded, “I take the view that this lack [of subversive activity] is the most ominous sign in our whole situation. It convinces me more than perhaps any other factor that the sabotage we are to get, the Fifth Column activities are to get, are timed just like Pearl Harbor was timed . . . I believe we are just being lulled into a false sense of security.”
Consider Warren’s argument from a Bayesian perspective. When we see evidence, hypotheses that assigned a higher likelihood to that evidence gain probability, at the expense of hypotheses that assigned a lower likelihood to the evidence. This is a phenomenon of relative likelihoods and relative probabilities. You can assign a high likelihood to the evidence and still lose probability mass to some other hypothesis, if that other hypothesis assigns a likelihood that is even higher.
Warren seems to be arguing that, given that we see no sabotage, this confirms that a Fifth Column exists. You could argue that a Fifth Column might delay its sabotage. But the likelihood is still higher that the absence of a Fifth Column would perform an absence of sabotage.
Let E stand for the observation of sabotage, and ¬E for the observation of no sabotage. The symbol H1 stands for the hypothesis of a Japanese-American Fifth Column, and H2 for the hypothesis that no Fifth Column exists. The conditional probability P(E | H), or “E given H,” is how confidently we’d expect to see the evidence E if we assumed the hypothesis H were true.
Whatever the likelihood that a Fifth Column would do no sabotage, the probability P(¬E | H1), it won’t be as large as the likelihood that there’s no sabotage given that there’s no Fifth Column, the probability P(¬E | H2). So observing a lack of sabotage increases the probability that no Fifth Column exists.
A lack of sabotage doesn’t prove that no Fifth Column exists. Absence of proof is not proof of absence. In logic, (A ⇒ B), read “A implies B,” is not equivalent to (¬A ⇒ ¬B), read “not-A implies not-B.”
But in probability theory, absence of evidence is always evidence of absence. If E is a binary event and P(H | E) > P(H), i.e., seeing E increases the probability of H, then P(H | ¬E) < P(H), i.e., failure to observe E decreases the probability of H. The probability P(H) is a weighted mix of P(H | E) and P(H | ¬E), and necessarily lies between the two.1
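This mixture identity is easy to check numerically. Below is a minimal sketch; the prior and likelihoods are made-up illustrative numbers, not historical estimates:

```python
# H = "a Fifth Column exists"; E = "sabotage is observed".
# All numbers below are illustrative assumptions.
p_h = 0.3               # prior probability of a Fifth Column
p_e_given_h = 0.6       # a Fifth Column would probably produce sabotage
p_e_given_not_h = 0.05  # without one, sabotage is rare

# Law of total probability: P(E) = P(E|H)P(H) + P(E|~H)P(~H).
p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)

# Bayes' theorem for each possible observation.
p_h_given_e = p_e_given_h * p_h / p_e                  # ~0.84
p_h_given_not_e = (1 - p_e_given_h) * p_h / (1 - p_e)  # ~0.15

# P(H) is a weighted mix of the two posteriors, so it lies between them:
# seeing sabotage raises P(H), and seeing none must lower it.
assert p_h_given_not_e < p_h < p_h_given_e
```

With these (assumed) numbers, observing no sabotage cuts the probability of a Fifth Column from 0.3 to about 0.15; it cannot raise it, whatever story is told about the sabotage being delayed.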
Under the vast majority of real-life circumstances, a cause may not reliably produce signs of itself, but the absence of the cause is even less likely to produce the signs. The absence of an observation may be strong evidence of absence or very weak evidence of absence, depending on how likely the cause is to produce the observation. The absence of an observation that is only weakly permitted (even if the alternative hypothesis does not allow it at all) is very weak evidence of absence (though it is evidence nonetheless). This is the fallacy of “gaps in the fossil record”—fossils form only rarely; it is futile to trumpet the absence of a weakly permitted observation when many strong positive observations have already been recorded. But if there are no positive observations at all, it is time to worry; hence the Fermi Paradox.
Your strength as a rationalist is your ability to be more confused by fiction than by reality; if you are equally good at explaining any outcome you have zero knowledge. The strength of a model is not what it can explain, but what it can’t, for only prohibitions constrain anticipation. If you don’t notice when your model makes the evidence unlikely, you might as well have no model, and also you might as well have no evidence; no brain and no eyes.
1 If any of this sounds at all confusing, see my discussion of Bayesian updating toward the end of The Machine in the Ghost, the third volume of Rationality: From AI to Zombies.
Perhaps this criticism of the California governor assumes an over-naive probabilistic model, with only two events (“no acts of espionage” ⇒ “Fifth Column exists [or not]”). In reality, there existed some non-public information about an existing Japanese spy network (MAGIC decodes; informants) that is unlikely to have been mentioned in a public hearing.
Perhaps the reasoning was more like this: “We know that they are already here. We know that some fraction of the population sympathizes with the mother nation. If the fifth column did not exist in an organized form, we might have seen some sabotage already. Since there hasn’t been any, maybe they are holding back for a major strike.”
Frank: It is impossible for A and ~A to both be evidence for B. If a lack of sabotage is evidence for a fifth column, then an actual sabotage event must be evidence against a fifth column. Obviously, had there been an actual instance of sabotage, nobody would have thought that way; they would have used the sabotage as more “evidence” for keeping the Japanese locked up. It’s the Salem witch trials, only in a more modern form: if the woman/Japanese has committed crimes, this is obviously evidence for “guilty”; if they are innocent of any wrongdoing, this too is a proof, for criminals like to appear especially virtuous to gain sympathy.
As I understand it, there were at least three hypotheses under consideration: a) No members (or a negligibly small fraction) of the ethnic group in question will make any attempt at sabotage. b) There will be attempts at sabotage by members of the ethnic group in question, but without any particular organization or coordination. c) There is a well-disciplined covert organization which is capable of making strategic decisions about when and where to commit acts of sabotage.
The prior for A was very low, and any attempt by the Japanese government to communicate with saboteurs in the States could be considered evidence against it. Lack of sabotage is evidence for C over B.
BTW, what would you consider evidence for a genuine attempt to lull the government into a false sense of security (in an analogous situation)?
Lack of sabotage is obviously evidence for a fifth column trying to lull the government, given the fifth column exists, since the opposite—sabotage occurring—is very strong evidence against that.
However lack of sabotage is still much stronger evidence towards the fifth column not existing.
The takeaway is that if you are going to argue that X group is dangerous because they will commit Y act, you cannot use a lack of Y as weak evidence that X exists, because then Y would be strong evidence that X does not exist, and Y is what you are afraid X is going to do!
You would be much better off using the fact that no sabotage occurred as weak evidence that the 5th column was preventing sabotage.
If there is other evidence that suggests the 5th column exists and that they are dangerous, that is the evidence that should be used. Making up non-evidence (which is actually counter evidence) is not the way to go about it. There are ways of handling court cases that must remain confidential (though it would certainly make the court look bad, it is the right way to do it).
I think you’re right, but there’s an adjustment (an update, isn’t it called?) warranted in two directions.
The absence of sabotage decreases the likelihood of the fifth column existing at all.
But if there is a fifth column, it could be reasonably predicted that there would be evidence of sabotage unless there was an attempt to keep a low profile. If they were to favor this hypothesis for other reasons, as in the classified data mentioned by Frank, then the lack of apparent sabotage would also increase the probability that if the unlikely fifth column DID exist, it would be one which is keeping a low profile. I grant, of course, at the same time, the decreased probability of there being any kind of fifth column in the first place.
This is not correct.
One explanation (call it A) for why there fails to be sabotage is that the Fifth Column is trying to be sneaky and inflict maximum damage later on when no one expects it. The probability of that is greater than 0, so it is a legitimate potential explanation for the apparent absence of sabotage. But, on further thought, there is this other possible explanation (call it B): the absence of a Fifth Column will produce an absence of sabotage. The probability of this is also greater than 0.
So here we have the event (Fifth Column exists) constituting evidence for (absence of sabotage) (perhaps the probability is low, but not zero). Surely it is fair to take it for granted that ~(Fifth Column exists) also constitutes evidence for (absence of sabotage). So that’s an example where an event and its negation can potentially both be evidence for something.
I think what you really mean to say is that since P(no sabotage) = P(no sabotage | Fifth Column) P(Fifth Column) + P(no sabotage | no Fifth Column) P(no Fifth Column), and since no sabotage has been observed, making P(no sabotage) = 1, this must imply that P(no sabotage | Fifth Column) P(Fifth Column) = 1 - P(no sabotage | no Fifth Column) P(no Fifth Column).
If we then make the (perhaps unwarranted) assumption that the prior probabilities are equal, i.e. P(Fifth Column) = P(no Fifth Column), then when deciding via a maximum a posteriori decision rule which hypothesis to believe, we wind up with P(no sabotage | Fifth Column) = 1 - P(no sabotage | no Fifth Column), and thus we simply select the hypothesis corresponding to whichever conditional probability is larger… and from this, our intuitions about basic logic would say it doesn’t make sense to assign probabilities in such a way that (no Fifth Column) is less likely to cause no sabotage than (Fifth Column), and this is what creates the effect you are noting that some event A and its negation ~A shouldn’t both be evidence for the same thing.
Compactly, it’s only fair to claim that A and ~A cannot both be evidence for B in some very special situations. In general, though, A and ~A definitely can both serve as supporting evidence for B, it’s just that they will corroborate B to different degrees and this may or may not be further offset by the prior distributions of A and ~A.
But it is important not to assert the incorrect generalization that “It is impossible for A and ~A to both be evidence for B.”
Did you take the other replies to Tom McCabe’s comment, which raise the same question you do but offer the opposite answer, into consideration? The appeal to intuition that a fifth column might be refraining from sabotage in order to create more effective sabotage later does not let you take both A and ~A as evidence for B. Any way you verbally justify it, you will still be dutch-bookable and incoherent.
Without losing the generality of the theorems of probability, let me address your particular narrative: If you believe that, if a fifth column exists, it is of the type that will assuredly refrain from sabotage now in order to prepare a more devastating strike later; then observing sabotage (or no sabotage) cannot alter your probability that a fifth column exists.
This is a fancy way of saying that you are assuming the fifth column’s intent is totally independent of the observation of sabotage: P(A | B) = P(A). That is, no evidence can update your position along the lines of Bayes’ theorem.
This is not what I am saying. I am saying that P(A |B) and P(A | ~B) can both be nonzero, and in the Bayesian sense this is what is meant by evidence. Either observing sabotage or failing to observe sabotage can, strictly speaking, corroborate the belief that there is a secret Fifth Column. If you make the further assumption that the actions of the Fifth Column are independent from your observations about sabotage, then yes, everything you said is correct.
My only point is that, in general, you cannot say that it is a rule of probability that A and ~A cannot both be evidence for B. You must be talking about specific assumptions involving independence for that to hold.
It also makes sense to think orthogonally about A and ~A in the following sense: if these are my only two hypotheses, then if there is any best decision, it is because under some decision rule, either A or ~A maximizes the a posteriori probability, but not both. If the posterior was equi-probable (50/50) for the hypotheses, then observing or not observing sabotage would change nothing. This could happen if you make the independence assumption above, but even if you don’t, it could still happen that the priors and conditional probabilities just work out to that particular case, and there would be no optimal belief in the Bayesian sense.
For a concrete example, suppose I flip a coin and if it is Heads, I will eat a tuna sandwich with probability 3⁄4 and a chicken sandwich with probability 1⁄4, and if it is Tails I will eat a turkey sandwich with probability 3⁄4 and a chicken sandwich with probability 1⁄4. Now suppose you only get to see what sandwich I select and then must make your best guess about what the coin showed. If I select a chicken sandwich, then you would believe that either Heads or Tails could serve as evidence for this decision. Neither result would be surprising to you (i.e., neither result would change your model) if you learned of it after I selected a chicken sandwich.
In this case, both A and ~A can serve as evidence for chicken, to the tune of 1⁄4 in each case. A is much stronger evidence for tuna, ~A is much stronger evidence for turkey, but both, to some extent, are evidence of chicken.
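The sandwich example can be worked through directly. A small sketch using exact fractions and the numbers given in the comment above:

```python
from fractions import Fraction

# Likelihoods of each sandwich given the coin, from the example above.
p_sandwich = {
    "Heads": {"tuna": Fraction(3, 4), "chicken": Fraction(1, 4)},
    "Tails": {"turkey": Fraction(3, 4), "chicken": Fraction(1, 4)},
}
prior = {"Heads": Fraction(1, 2), "Tails": Fraction(1, 2)}

def posterior(sandwich):
    """Bayes' theorem: posterior over the coin given the observed sandwich."""
    joint = {coin: prior[coin] * p_sandwich[coin].get(sandwich, Fraction(0))
             for coin in prior}
    total = sum(joint.values())
    return {coin: p / total for coin, p in joint.items()}

# Chicken is equally likely either way, so it moves the posterior nowhere:
assert posterior("chicken")["Heads"] == Fraction(1, 2)
# Tuna pins down Heads; turkey pins down Tails:
assert posterior("tuna")["Heads"] == 1
assert posterior("turkey")["Tails"] == 1
```

Both Heads and Tails are consistent with chicken, but observing chicken is evidence for neither: the posterior stays exactly at the 50/50 prior, which is the distinction at issue in this thread between “consistent with” and “evidence for.”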
I’m not disagreeing with your claim about probability theory at all. I’m just saying that we don’t know that Warren made the assumption that his observations about sabotage were independent from the existence of a Fifth Column. For all we know, it was just that he had such a strong prior belief (which may or may not have been rational in itself) that there was a Fifth Column, that even after observing no sabotage, his decision rule was still in favor of belief in the Fifth Column.
It’s not that he mistakenly thought that the Fifth Column would definitely act in one way or the other. It’s just that both no sabotage and sabotage were, to some degree, compatible with his strong prior that there was a Fifth Column… enough so that after converting it to a posterior it didn’t cause him to change his position.
Uh..
A is evidence for B if P(B|A) > P(B). That is to say, learning A increases your belief in B. It is a fact from probability theory that P(B) = P(B|A)P(A) + P(B|¬A)P(¬A). If P(B|A) > P(B) and P(B|¬A) > P(B) then that says that:
P(B) > P(B)P(A) + P(B)P(¬A)
P(B) > P(B)(P(A) + P(¬A))
P(B) > P(B)
Since A and ¬A are exhaustive and exclusive (so P(A) + P(¬A) = 1), this is a contradiction.
On the other hand, P(B|A) and P(B|¬A) being nonzero just means both A and ¬A are consistent with B—that is, A and ¬A are not disproofs of B.
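The derivation above can be spot-checked with concrete numbers; a minimal sketch (the values are arbitrary illustrations):

```python
# An arbitrary illustrative distribution over A and B.
p_a = 0.4
p_b_given_a = 0.9
p_b_given_not_a = 0.2

# Law of total probability: P(B) = P(B|A)P(A) + P(B|~A)P(~A).
p_b = p_b_given_a * p_a + p_b_given_not_a * (1 - p_a)  # 0.48

# A is evidence for B (raises its probability), so ~A must be evidence
# against it; the marginal always lies between the two conditionals.
assert p_b_given_a > p_b > p_b_given_not_a
assert min(p_b_given_a, p_b_given_not_a) <= p_b <= max(p_b_given_a, p_b_given_not_a)
```

Both conditionals being nonzero (mere consistency) is compatible with this; both exceeding P(B) is not, since P(B) would then exceed itself.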
Your definitions do not match mine, which come from here:
The evidence for the hypothesis M is Pr(D | M), regardless of whether or not Pr(D) > Pr(D | M), at least according to that page and this statistics book sitting here at my desk (pages 184-186), and perhaps other sources.
If it’s just a war over definitions, then it’s not worth arguing. My point is that it’s misleading to act like that attribute you call ‘consistency’ doesn’t play a role in what could fuel reasoning like Warren’s above. It’s not about independence assumptions or mistakes about what can be evidence (do you really think Warren cared about the technical, Bayesian definition of evidence in his thinking?). It’s about understanding a person’s formation of prior probabilities in addition to the method by which they convert them to posteriors.
Ah!
You’ve used “evidence” to refer to the probability P(D | M). We’re talking about the colloquial use of “evidence for the hypothesis” meaning an observation that increases the probability of the hypothesis. This is the sense in which we’ve been using “evidence” in the OP.
If you draw 5 balls from an urn, and they’re all red, that’s evidence for the hypothesis that the next ball will be red, and so you conclude that the next one could be red, with a bit more certainty than you had before. If you draw 5 balls from an urn, and they’re blue, that’s evidence against the hypothesis that the next one will be red, so you conclude that the next one is less likely to be red than you thought before.
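The urn example leaves the model unspecified; under one simple assumption (a uniform prior over the urn’s red fraction, draws with replacement) the update is Laplace’s rule of succession:

```python
from fractions import Fraction

def p_next_red(reds_seen, draws):
    """Laplace's rule of succession under a uniform prior on the red fraction."""
    return Fraction(reds_seen + 1, draws + 2)

prior = p_next_red(0, 0)        # 1/2 before any draws
after_red = p_next_red(5, 5)    # five red draws -> 6/7
after_blue = p_next_red(0, 5)   # five blue draws -> 1/7

# Red draws raise the probability that the next ball is red; blue draws lower it.
assert after_blue < prior < after_red
```

Any coherent model has this shape: some sequences of draws must lower your belief that the next ball is red, not raise it.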
However, your thought processes are wrong, by the Bayesian proof, if every sequence of 5 balls leads you to increase your belief that the next one will be red.
This is essentially what Warren did. If he observed sabotage he would have increased his belief in the existence of a fifth column, and yet, observing no sabotage he also increased his belief in the existence of a fifth column. Clearly, somewhere he’s made a mistake.
I see your point and I think we mostly agree about everything. My only slight extra point is to suggest that perhaps Warren was trying to use his prior beliefs to predict an explanation for absence of sabotage, rather than trying to use absence of sabotage to intensify his prior beliefs. In retrospect, it’s likely that you’re right about Warren and the quote makes it seem that he did, in fact, think that absence of sabotage increased likelihood of Fifth Column. But in general, though, I think a lot of people make a mistake that has more to do with starting out with an unreasonable prior, or making assumptions that their prior belief is independent of observations, than it has to do with a logical fallacy about letting conditioning on both A and ~A increase the probability of B.
The reply that it “is impossible for A and ~A to both be evidence for B” is to ignore what Frank said in favor of insisting on the very overgeneralization I think he was trying to point out. It’s not impossible at all when we are being imprecise enough about the prior expectations involved, such as when we lump all moments in a sustained effort together.
Here’s an example to illustrate what I’m saying: Say you are a parent of a 10 year old boy who generally wants to stay up past his bedtime. His protests vary from occasional temper tantrums to the usual slumped-shoulders expression of disappointment that bedtime has finally arrived. Under normal circumstances, the expectation is that he would give at least some evidence of wanting to stay up later. We’ll call this resistance “A,” and A is evidence for “B”: his desire and motivation to stay up later. What shall we say when ~A happens? That is, what shall we say when the boy one day suddenly goes enthusiastically to bed? That he has given up his desire? That it is impossible for this to be evidence of his continued desire and motivation? Of course not. It is exactly what we would expect a motivated and reasonably intelligent person to do: try different and probably more effective strategies. If we generalize the ongoing experience of the little boy’s quest to stay up later, A and ~A are both evidence of B. “It is impossible for A and ~A to both be evidence for B” is simply not narrow enough to be a true statement, and using it in that way can easily amount to a bad counterargument.
Rather, we need to be specific about each situation. What I think we should pay attention to here is the prior expectation of B. With a high enough prior, A and ~A could either (but not both) be evidence of B. But if we are not being specific to each precise situation, the generalization “it is impossible for A and ~A to both be evidence of B” can be a very subtle straw man, because the person being argued against may not be relying on the assumption that A and ~A are equal evidence for B at the same time and in the same situation.
Returning to the Japanese Fifth Column argument, unlike the little boy in my example the Japanese (and, in general, descendants from countries that go to war with their current country of citizenship) do not have a consistent track record of wartime sabotage. Also, there isn’t any reason to think they would not generally be more loyal to their country of citizenship than the country of their parents, grandparents, or even of their own childhood. So there is no particularly strong expectation that they would commit sabotage… and thus no such expectation that some mysterious lack of sabotage is itself a sign of a new strategic attempt as part of a sustained effort. The argument should come down to the prior expectation of Japanese sabotage. That seems to be the crux of it to me.
It seems to me the weakness in Frank’s argument also lies in the basic premise that we should expect the Japanese to commit sabotage. And I believe the governor would need to rely on that premise, or a similar one, in order to sustain his argument beyond what Eliezer presented.
But. The inside information premise seems nearly undefeatable to me. We can’t comment on information we don’t have. I think that is always a possibility with controversial official responses that most people would prefer to deny. If the person whose claims you are evaluating has secret but pertinent information you don’t have access to, then it can be very difficult to offer a fair analysis. For one, you will have very different yet subjectively valid prior expectations.
You seem to be using “evidence of X” to mean something along the lines of “consistent with X”. That’s not what it means in this context.
An event is evidence for or against a scenario insofar as it changes your subjective probability estimate for that scenario. Your example child going enthusiastically to bed is in fact evidence that he’s changed his mind about staying up past his bedtime: it makes that scenario subjectively more plausible, even though it’s still probably a long-shot option given what you know. It might simultaneously be evidence for some new bedtime-avoidance scheme, but that’s entirely consistent with it also pointing to a possible change of heart: the increased probability of both is made up for in the reduced probability of him continuing with his old behavior.
Subjective probabilities for either/or scenarios have to sum to unity, and so evidence for one such option has to be balanced out by evidence against one or more of the others. A and ~A cannot both be evidence for a given scenario; at best they can both leave it unaffected.
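The unity constraint can be checked numerically. Here is a minimal sketch, with invented numbers: if A and ~A both "supported" B with the same posterior, the law of total probability forces the prior to already equal that posterior, so neither observation was evidence at all.

```python
p_A = 0.3             # invented prior probability of observing A
p_B_given_A = 0.8     # posterior for B if A is observed
p_B_given_notA = 0.8  # suppose ~A "also" pointed to B this strongly

# Law of total probability: the prior is the weighted average of the
# posteriors, so it is pinned between them.
p_B = p_B_given_A * p_A + p_B_given_notA * (1 - p_A)
print(round(p_B, 10))  # 0.8 -- the "evidence" never moved anything
```

For both posteriors to exceed the prior, the weighted average of the posteriors would have to exceed itself, which is impossible.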
I think I understand that a little better now. So thank you for taking the time to explain that to me.
Even so, it seems all I must do is add to my counterexample a prior track record of the little boy changing strategies while pretending to go along with authority. Reconsidering my little boy example with that in mind, does that change your reply?
Also, I fail to see how your response ameliorates my objection to the claim “it is impossible for A and ~A to both be evidence for B.” By your own explanation, they are both evidence, albeit offering unequal relative probabilities (forgive me if I’m getting the password wrong there, but I think you can surmise what I’m getting at). Maybe if we say “it is impossible for A and ~A to both shift the probability of B in the same direction at the same time, concerning the same situation and the same subjective view of the facts,” we have something that doesn’t lead us to misstate someone else’s argument. That is what happened in the case above: the accusation was that the argument depends on A and ~A at the same time and in the same way, when the precise claim is actually that A can be evidence for B in one situation, and that, based on expectations set by subsequent observations, ~A could also end up being evidence for B at some later date. I’m not sure if I’ve explained that clearly, but I’ll keep trying until either I get what I’m missing, or I manage to express clearly what may well be coming out as gibberish. Either way, I get a little slice of the self-improvement I’m looking for.
Thanks again, and I hope you can forgive my wet ears on this and bear with me. The benefits of our exchanges here will probably be pretty one sided; I have almost nothing to offer a more experienced rationalist here, and lots to gain… and I realize that, so bear with me, and please know I am grateful for the feedback.
Here’s a contradiction with A and ~A both being evidence for the same thing. You could tell your spouse “Go up and check if little Timmy went to bed”. Before ze comes back you already have an estimate of how likely Timmy is to go to bed on time (your prior belief). But then your spouse, who was too tired to climb the stairs, comes back and tells you “Little Timmy may or may not have gone to bed”. Now, if both of those possibilities would be evidence of Timmy’s staying up late then you should update your belief accordingly. But how can you do that without receiving any new information?
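This is conservation of expected evidence, and it can be verified with a short sketch (the report probabilities below are invented): the probability-weighted average of the possible posteriors always equals the prior, so a report of “may or may not” licenses no update.

```python
prior = 0.7              # P(Timmy went to bed on time), invented
p_rep_given_bed = 0.9    # P(spouse reports "in bed" | he is in bed)
p_rep_given_up = 0.2     # P(spouse reports "in bed" | he is still up)

# Probability of each possible report, and the posterior after each.
p_rep = prior * p_rep_given_bed + (1 - prior) * p_rep_given_up
post_if_rep = prior * p_rep_given_bed / p_rep
post_if_not = prior * (1 - p_rep_given_bed) / (1 - p_rep)

# Average the two posteriors, weighted by how likely each report is.
expected_posterior = p_rep * post_if_rep + (1 - p_rep) * post_if_not
print(round(expected_posterior, 10))  # 0.7 -- exactly the prior
```

Each report would move your estimate (up to about 0.91, or down to about 0.23), but before hearing which one arrives, your expectation of where you will end up is exactly where you already are.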
Yes. I get that. We cannot use A and ~A to update our estimates in the same way at the same time. That’s not the same as saying that it is impossible for A and ~A to be evidence of the same thing. One could work on Tuesday, and the other could work on Friday, depending on the situation. That was my only point: can’t generalize a timeline but need to operate at specific points on that timeline. That goes back to the justification for interning Japanese citizens. If we say ~A just can’t ever be evidence of B because at some previous time A was evidence for B, then we are making a mistake. At some later date, ~A could end up being better evidence, depending on the situation. My point was that a better counterargument to the governor’s justification is to point out that the prospect of naturalized citizens turning against their home country in favor of their country of ancestry presents a very low prior, because the Japanese (and other groups that polyglot nations have gone to war with) have not usually behaved that way in the past. I could be wrong, but it doesn’t have anything to do with updating estimates with a variable and its negation to reach the same probability at the same time. I pretty much agree with what you said, just not the implication that it conflicts in some way with what I said.
I think the brief answer to this point is that it is very important to define the hypothesis precisely, to avoid being confused in the way you describe.
Applying that lesson to Earl Warren, we can say that he failed to distinguish between motive for big-sabotage and motive for little-sabotage. Lack of little-sabotage events is evidence in favor of motive-for-big-sabotage (and for no-motive-to-sabotage: with the benefit of hindsight, we know this was the true state of the world). But unclear phrasing by Warren made it sound like he believed absence of little-sabotage was evidence of motive-to-little-sabotage, which is a nonsensical position.
Given the low probability of big-sabotage (even after incorporating the evidence Warren puts forth), it’s pretty clear that the argument for Warren’s suggested policy (Japanese-American internment) depended pretty heavily on the confused thinking created by this equivocation.
A and ~A are not each evidence for B, if B is “there is a fifth column active”. In some ways, as I said, they already knew B—it was true. There were questions of degree (how organized? how ready? how many?) about which A and ~A each provide some hints.
Earl Warren tumbled headlong into the standard conspiracy theory attractor with, I might add, no deleterious effect on his career. This man was later the 14th Chief Justice of the US Supreme Court and has probably had more lasting effect on US society than any single figure of the 20th century. Thanks for the post.
Not sure if this reasoning applies to human factors. People can intentionally deceive, so reasoning that applies to natural phenomena cannot simply be carried over to social or political interactions. Game theory is probably a better framework for social or political interactions.
http://en.wikipedia.org/wiki/Japanese_American_internment#Was_the_internment_justified_by_military_necessity.3F
But that’s not the point. The point is that Earl Warren’s reasoning was invalid. It didn’t matter what other evidence he had (Warren certainly did not know about the ultra-classified MAGIC decodes). The particular observation of no sabotage was evidence against, and could not legitimately be worked into evidence for.
Can we be sure that he did not just assign a very strong prior distribution to the existence of Fifth Column? In that case, if we model Warren’s decision as binary hypothesis testing with a MAP rule, say, then maybe it occurred to Warren that the raw conditional probabilities satisfied this inequality P(no sabotage | imminent Fifth Column threat) < P(no sabotage | no imminent Fifth Column threat).
But perhaps, for Warren, P(imminent Fifth Column threat) >> P( no imminent Fifth Column threat).
In this scenario, he reasoned that it was so likely that there was a Fifth Column threat that it outweighed the ease with which (absence of Fifth Column) can account for (absence of sabotage), and led him to choose the hypothesis that a Fifth Column was a better explanation for lack of sabotage.
In that case, the issue becomes the strength in the prior belief. Similar reasoning can be applied to McCarthy, or to those suggesting we’re due for another terrorist attack.
I guess what I am saying is like this: maybe someone just believes we’re due for another terrorist attack very strongly (perhaps for irrational reasons, but reasons that have nothing to do with a witnessed lack of terrorist attacks). Then you present them with the evidence that no terrorist activity has been witnessed, say. Instead of this updating their prior to a better posterior that assigns less belief to imminence of terrorist attacks, they actually feel capable of explaining the absence of terrorist activities in light of their strong prior.
I do agree that it would then be nonsensical to take that conclusion and treat it like a new observation. As if: Fifth Column → they absolutely must exist and be planning something → invent a reason why strength of belief in prior is justified → Fifth Column’s existence explains absence of sabotage → further absence of sabotage now feeds back as ever-more-salient corroborating evidence of original prior.
Perhaps more focus should be placed on the role of the prior in all of this, rather than outright misinterpretations of evidence.
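The role of a strong prior can be made concrete with a short sketch. All numbers below are invented for illustration: even though “no sabotage” is likelier under “no fifth column,” a sufficiently strong prior leaves the fifth-column hypothesis as the MAP choice after updating.

```python
# Invented numbers illustrating MAP selection under a strong prior.
p_fifth = 0.95               # strong prior belief in a fifth column
p_nosab_given_fifth = 0.5    # a fifth column might bide its time
p_nosab_given_none = 0.99    # no fifth column -> almost surely no sabotage

# Note the likelihoods satisfy the inequality above:
# P(no sabotage | fifth column) < P(no sabotage | no fifth column).
joint_fifth = p_fifth * p_nosab_given_fifth
joint_none = (1 - p_fifth) * p_nosab_given_none
posterior_fifth = joint_fifth / (joint_fifth + joint_none)

print(round(posterior_fifth, 3))  # 0.906: below the 0.95 prior, but
                                  # still the leading hypothesis
```

Note that the posterior still falls: the absence of sabotage is evidence against the fifth column even here. The strong prior merely keeps the fifth-column hypothesis on top, which is quite different from claiming the absence of sabotage as confirmation.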
I suspect a part of the appeal of this saying comes from a mental unease with conflicting evidence. It is easier to think of the absence of evidence as not evidence at all, rather than as evidence against where the evidence in favor just happens to be much stronger. Perhaps it is a specific case of a general distaste for very small distinctions, especially those close to 0?
Ad hominem argumentation is another example of evidence which is usually weak, but is still evidence.
I am quite sure you’re onto something here. A similar effect occurs when people try to argue that a given intervention has no downsides at all; none at all? Really? It will be absolutely free and have beneficial effects on everyone in the world? Why aren’t we doing it already then?
People aren’t used to thinking in terms of cost-benefit analysis, where you say “Yes, it has downsides A, B, C; but it also has upsides W, X, Y, Z, and on balance it’s a good idea.” They think that merely by admitting that the downsides exist you have given up the game. (Politics is the mind-killer?)
We aren’t doing it already because the Bad People have power and if we did it, it would frustrate their Evil (or at least Morally Suspect) Purposes. Frustrating such purposes doesn’t count as a downside. Depending on who you are talking to, taking money from rich people, allowing people who make stupid choices to die, or preventing foreigners who want to from entering your country isn’t just a bad thing that’s outweighed by its good consequences; it simply doesn’t warrant an entry in the “costs” column at all.
An aside, but I always hated this argument. It proves way too much. (Cryonics, for example.)
The particular observation of no sabotage was evidence against, and could not legitimately be worked into evidence for.
You are assuming that there are only two types of evidence, sabotage v. no sabotage, but there can be much more differentiation in the actual facts.
Given Frank’s claim, there is a reasoning model for which your claim is inaccurate. Whether this is the model Earl Warren had in his head is an entirely different question, but here it is:
We have some weak independent evidence that some fifth column exists, giving us a prior probability of >50%. We have good evidence that some Japanese Americans are disaffected, with a prior of 90%+. We believe that an organized fifth column will attempt a significant coordinated sabotage event, possibly holding off on any and all sabotage until said event. We also believe that, if there is no fifth column, the disaffected who are here would engage in small acts of sabotage on their own with high probability.
Therefore, if there are small acts of sabotage that show no large scale organization, this is weak evidence of a lack of a fifth column. If there is a significant sabotage event, this is strong evidence of a fifth column. If there is no sabotage at all, this is weak evidence of a fifth column. Not all sabotage is alike, it’s not a binary question.
Now, this is a nice rationalization after the fact. The question is: if there had been rare small acts of sabotage, what is the likelihood that Warren and others in power would have taken this as evidence that there was no fifth column? I submit that it is very unlikely, and your criticism of their actual logic would thus be correct. But we can’t know for certain, since they were never presented with that particular problem. And in fact, I wish that you, or someone like you, had been on hand at the hearing to ask the key question: “Precisely what would you consider to be evidence that the fifth column does not exist?”
Of course, whether widespread internment was a reasonable policy, even if the logic they were using were not flawed, is a completely separate question, on which I’d argue that very strong evidence should be required to adopt such a severe policy (if we are willing to consider it at all), not merely of a fifth column, but of widespread support for it. It is hard to come up with a plausible set of priors where “no sabotage” could possibly imply a high probability of that situation.
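The three-outcome model sketched in this comment can be written out directly. The likelihoods below are invented, chosen only to match its qualitative assumptions (an organized fifth column avoids small sabotage while preparing a large strike; scattered disaffection produces small sabotage):

```python
# Invented priors and likelihoods matching the comment's assumptions.
prior = {"fifth_column": 0.5, "no_fifth_column": 0.5}
likelihood = {
    "fifth_column":    {"none": 0.6, "small": 0.1, "large": 0.3},
    "no_fifth_column": {"none": 0.4, "small": 0.55, "large": 0.05},
}

def posterior(observation):
    """Bayes update over the two hypotheses for one observed outcome."""
    joint = {h: prior[h] * likelihood[h][observation] for h in prior}
    total = sum(joint.values())
    return {h: joint[h] / total for h in joint}

print(round(posterior("none")["fifth_column"], 3))   # 0.6   weak evidence for
print(round(posterior("small")["fifth_column"], 3))  # 0.154 evidence against
print(round(posterior("large")["fifth_column"], 3))  # 0.857 strong evidence for
```

As the comment says, the evidence is not binary: the same “sabotage” variable pushes in different directions depending on which kind is observed.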
I would agree that the lack of sabotage cannot be argued as support for accepting an increase in the probability of the existence of a fifth column. But it may not be sufficient to lower the probability that there is a fifth column, and certainly may not be sufficient to lower a prior of greater than 50% to below 50%, even assuming that one is a Bayesian.
If sabotage increases the probability, lack of sabotage necessarily decreases the probability.
What’s special about 50%?
When you hear someone say “X is not evidence…”, remember that the Bayesian concept of evidence is not the only concept attached to that word. I know my understanding of the word “evidence” changed as I adopted the Bayesian worldview. My recollection of my prior use of the word is a bit hazy, but it was probably influenced a good deal by beliefs about what a court would admit as evidence. (This is a comment on the title of the post, not on Earl Warren’s rationalization.)
That’s a good point. And clearly court standards for evidence are not the same as Bayesian standards; in court lots of things don’t count that should (like base rate probabilities), and some things count more than they should (like eyewitness testimony).
I think what is useful about this tendentious bit of logic is that if your prior is sufficiently strong, no evidence necessarily implies you are wrong. Brilliant!
If sabotage increases the probability, lack of sabotage necessarily decreases the probability.
That’s true on average, but different types of sabotage evidence may have different effects on the probability, some negative, some positive. It’s conceivable, though unlikely, for observed sabotage to decrease the probability on average.
This is all fine and good, but it does not address what “evidence” is. I cannot gather evidence of extrasolar planets (either for or against their existence) with my naked eyes. So in this experiment, even though I see no “evidence” of extrasolar planets by looking up into the sky, I still do not have evidence of absence, because in fact I have no evidence at all.
Evidence, from the standpoint of probability theory, is only meaningful when the experiment is able to differentiate between existence and absence.
Then the real question becomes: do we have evidence that our experiment is able to yield evidence? And the only way to settle this in the affirmative is to find something. You cannot know your experiment is designed correctly.
The fact that you can’t see them when you look outside is evidence against their presence, it’s just extremely weak evidence. See also the Raven paradox.
To be fair, it’s amazing how people will interpret “evidence” as “strong evidence”.
But it’s completely unamazing how many people will interpret “evidence” as “strong enough evidence to be worth taking notice of”, because that is how the word is actually used outside circumscribed mathematical contexts.
Right, and in fact the very idea of “extremely weak evidence” is really only worth paying attention to because it resolves various seeming paradoxes of evidence, such as the extrasolar planets and raven problems above.
Yup. To be honest, it’s not actually that amazing that it’s interpreted as “strong evidence”, or “this thing is probably true”, because arguments are soldiers and all that.
It’s not about arguments being soldiers, but basic Gricean maxims. In everyday talk you don’t call something “evidence” unless it actually matters that it is evidence, and it only matters if it is strong enough to be worth attending to.
Just because there is this other, mathematically defined concept called “evidence”, according to which every purple M&M is evidence for the blackness and whiteness of crows, you don’t get to say that everyone else is wrong for not using the word the way you redefined it. Instead, you must recognise that this is a different concept, called by the same name, and take care to distinguish the two meanings.
What next, insisting that black paint isn’t black?
It’s always better to rename overloaded terms, or at least to make clear which meaning (the colloquial or the technical) one defaults to. Quibbling over what to name which doesn’t solve any issues and is mostly just kicking the can down the road, but allow me to say that if there’s one place on which I always default to the technical definition, it’s LW. Where else if not here?
I understand that the LW fraction which aims to prioritize accessibility and strives to avoid jargon, may also strive to avoid counter-intuitive technical definitions for the sake of commonly used interpretations. I just don’t subscribe to their methods.
Sorry, I wasn’t clear. I agree that it’s reasonable, except when discussing prob. math, to assume “evidence” means “evidence worth mentioning”. I noted that, while not “reasonable” exactly, it’s even natural that it tends to be interpreted as “this is my side, I offer evidence in tribute”, from an evopsych perspective :/
There is another way: Look really really hard with tools that would be expected to work. If you find something? Yay, your hypothesis is confirmed. If you don’t? You’d better start doubting your hypothesis.
You already do this in many situations I’m sure. If someone said, “You have a million dollars!” and you looked in your pockets, your bank accounts, your stock accounts (if any), etc. and didn’t find a million dollars in them (or collectively in all of them put together), you would be pretty well convinced that the million dollars you allegedly have doesn’t exist. (In fact, depending on your current economic status you might have a very low prior in the first place; I know I would.)
Really, the issue here is whether evidence has to increase probability (of existence or nonexistence) by a positive amount or a non-negative amount. The difference between those two sets is the very important “zero.”
You are interested in the question: “Are there extra-solar planets?”, with possibilities “Yes” and “No”. You wonder how to answer the question, and decide to try the experiment “look with my naked eyes.” You sensibly decide that if you can see any extra-solar planets, then it’s not less likely that there are extra-solar planets, and if you can’t see extra-solar planets, then it’s not more likely that there are extra-solar planets. The strength of those effects is determined by the quality of the experiment; in this case, that strength is 0.
The specific fallacy in question is saying that all outcomes of an experiment make a claim more likely- that is inconsistent with how probability works. Similarly, one can argue that you should have a good estimate of the quality of an experiment before you get the results. That estimate doesn’t have to be perfect- you can look at the results and say “I’m going to doublecheck to make sure I didn’t screw up the experiment”- but changing your bet after you lose should not be allowed.
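In Bayesian terms, an experiment of “strength 0” is one whose likelihood ratio is 1: both hypotheses predict the result equally well, so the posterior equals the prior. A minimal sketch (the prior here is invented):

```python
# Looking for extrasolar planets with the naked eye: you see nothing
# regardless of whether they exist, so the likelihoods are equal.
prior_planets = 0.9
p_nothing_if_planets = 1.0   # can't resolve them anyway
p_nothing_if_none = 1.0      # nothing there to see

posterior = (prior_planets * p_nothing_if_planets) / (
    prior_planets * p_nothing_if_planets
    + (1 - prior_planets) * p_nothing_if_none
)
print(posterior)  # 0.9 -- a likelihood ratio of 1 moves nothing
```

A telescope survey would make the likelihoods unequal, and only then would “seeing nothing” start to count as (weak) evidence of absence.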
If all you have is some generic crime data, then more crime in a region can indicate that the Mafia is strong. On the other hand, Mafias keep their own neighborhoods, and the Mafia sometimes can suppress police activity through corruption, so a very low crime rate can indicate that the Mafia is strong.
Of course, background details would suggest which of these is indicated by the evidence.
The crime rate has gone up. This means that everything is getting worse and the police are ineffective.
The crime rate has gone up. This means that the police are getting better at catching formerly clandestine criminal behavior.
It would be possible to distinguish between those hypotheses by looking at the ratio of crimes reported to crimes successfully prosecuted.
Seems reasonable to me; if there’s the expected amount of crime in an area, then it’s not too worthy of special attention. If there’s a higher than usual amount of crime, then it’s clearly worthy of special attention.
However, if there’s a lower than usual amount of crime, then it’s also worthy of special attention, because that indicates that something odd is happening there (or, it indicates that something has genuinely reduced the amount of crime and not just the metric, which is worth investigating and hopefully replicating).
OK, make a lot of fun of this. Let’s take it in context. 1) It is amazing that anyone would couch their argument in a logical manner at this point in civilization at all, even if the logic is wrong, so kudos. 2) The internment was not a logical action. It is a complicated human action which I imagine has a lot to do with the lack of trust between Japanese Americans and other Americans at the time. Evidently there were no individuals on either side who could broker a mutual trust at that time, which is sad. 3) The determinants of whatever limited trust did exist, if known, might be available to logical analysis, but I would think they are a very complex set of statements. These statements probably reflect all the (unknown) possibilities mentioned on this thread. One logical result might have been internment. We, far in the future, believe that internment was if not wrong then at least unnecessary. Such is hindsight.
Just to complicate the story a little, the Japanese Americans in Hawaii weren’t interned; there were so many of them it was considered to be impractical.
In the absence of evidence for their innocence, to say that the absence of evidence for their guilt is evidence of their innocence may not be applied. If you do not have evidence for their innocence, then you DO have evidence of their guilt (by virtue of the absence of evidence being evidence of absence) and therefore you do not have the required absence of evidence for their guilt that was used to draw the initial conclusion regarding innocence.
The converse is also true.
In the absence of evidence for their guilt, to say that the absence of evidence for their innocence is evidence of their guilt may not be applied. If you do not have evidence for their guilt, then you DO have evidence of their innocence (by virtue of the absence of evidence being evidence of absence) and therefore you do not have the required absence of evidence for their innocence that was used to draw the initial conclusion regarding guilt.
You may only say that, given the presence of evidence of innocence, a lack of evidence of guilt would provide further evidence of innocence. But if you have neither evidence of guilt nor evidence of innocence, the rule may not be applied.
In that case, the two cancel each other out, and both types of evidence are worthless.
You can run it through Bayes’ Theorem.
This is like a test that predicts an outcome 70% of the time but has a 70% false positive rate. The end result is a coin toss weighted 70% to one side; it doesn’t help you find the truth in any way.
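That analogy checks out numerically: a 70% hit rate paired with a 70% false-positive rate is a likelihood ratio of 1. A small sketch (the prior is arbitrary):

```python
prior = 0.4              # arbitrary prior; any value gives the same lesson
p_pos_given_true = 0.7   # the test fires 70% of the time when true
p_pos_given_false = 0.7  # ...and 70% of the time when false

# Bayes update after a positive result.
posterior = (prior * p_pos_given_true) / (
    prior * p_pos_given_true + (1 - prior) * p_pos_given_false
)
print(round(posterior, 6))  # 0.4 -- the positive result taught us nothing
```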
Can you create an example in which you would draw a different conclusion by your procedure than by the procedure outlined in the post?
Hi Eliezer, that’s another great post; I very much enjoyed reading it, even though there are gaps in my understanding. I’m new here, so I have lots to learn. I wonder if you could kindly explain what you mean by: “Your strength as a rationalist is your ability to be more confused by fiction than by reality; if you are equally good at explaining any outcome you have zero knowledge.” Thanks, Lou
Welcome to Less Wrong!
A belief is useless unless it makes predictions. Making Beliefs Pay Rent gives a couple examples of “beliefs” that are useless because they don’t make any predictions.
A belief from which you can derive any prediction is just as useless. Your Strength as a Rationalist and the beginning of A Technical Explanation of Technical Explanation give examples of people trying too hard to make their beliefs explain their observations; they fail to discover that their beliefs are incorrect.
This article makes a very good point very well. If E would be evidence for a hypothesis H, then ~E has to be evidence for ~H.
Unfortunately, I think that it is unfair to read Warren as violating this principle. (I say “Unfortunately” because it would be nice to have such an evocative real example of this fallacy.)
I think that Warren’s reasoning is more like the following: Based on theoretical considerations, there is a very high probability P(H) that there is a fifth column. The theoretical considerations have to do with the nature of the Japanese–American conflict and the opportunities available to the Japanese. Basically, the mere fact that the Japanese have both means and motive is enough to push P(H) up to a high value.
Sure, the lack of observed sabotage (~E) makes P(H|~E) < P(H). So the probability of a fifth column goes down a bit. But P(H) started out so high that H is still the only contingency that we should really worry about. The only important question left is, Given that there is a fifth column, is it competent or incompetent? Does the observation of ~E mean that we are in more danger or less danger? That is, letting C = “The fifth column is competent”, do we have that P(C | ~E & H) > P(C | H)?
Warren is arguing that ~E should lead us to anticipate a more dangerous fifth column. He is saying that an incompetent fifth column would probably have performed minor sabotage, which would have left evidence. A competent fifth column, on the other hand, would probably still be marshaling its forces to strike a major blow, which would be inconsistent with E. Hence, P(C | ~E & H) > P(C | H). That is why ~E is a greater cause for concern than E would have been.
Whether all of these prior probabilities are reasonable is another matter. But Warren’s remarks are consistent with correct Bayesian reasoning from those priors.
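Such a prior is easy to exhibit as a toy model. Every number below is invented, chosen only to match the qualitative story above: the fifth-column hypothesis H dominates, and the open question within H is whether the column is competent (C).

```python
# Invented prior over three states of the world.
prior = {
    ("H", "competent"): 0.45,
    ("H", "incompetent"): 0.45,
    ("none", None): 0.10,
}
# Likelihood of observing *no* sabotage (~E) in each state.
p_noE = {
    ("H", "competent"): 0.9,    # marshaling forces, holding fire
    ("H", "incompetent"): 0.3,  # would probably have botched minor sabotage
    ("none", None): 0.95,
}

joint = {s: prior[s] * p_noE[s] for s in prior}
total = sum(joint.values())
post = {s: joint[s] / total for s in joint}

p_H_prior = prior[("H", "competent")] + prior[("H", "incompetent")]
p_C_given_H_prior = prior[("H", "competent")] / p_H_prior
p_H_post = post[("H", "competent")] + post[("H", "incompetent")]
p_C_given_H_post = post[("H", "competent")] / p_H_post

print(round(p_C_given_H_prior, 3), round(p_C_given_H_post, 3))  # 0.5 0.75
```

Updating on ~E lowers P(H) a little, exactly as the comment says it should, yet raises P(C | H) from 0.5 to 0.75: the surviving fifth-column probability mass shifts toward the competent, more dangerous variety.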
While I think your reading is consistent with a very generous application of the principle of charity, I’m not certain it’s appropriate in this case to so apply. Do you have any evidence that Warren was reasoning in this way rather than the less-charitable version, and if so, why didn’t he say so explicitly?
It really seems like the simpler explanation is fear plus poor thinking.
Sorry for taking so long to reply to this.
I think that a close and strict reading supports my interpretation. I don’t see the need for an unduly charitable reading.
First, I assume the following context for the quote: Warren had argued for (or maybe only claimed) a high probability for the proposition that there is a Japanese fifth column within the US. Let R be that proposition. Then Warren has argued that p(R) >> 0.
Given that context, here is how I parse the quote, line-by-line:
I take the questioner to be asserting that there has been no observed sabotage or any other type of espionage by Japanese-Americans up to that time. Let E be this proposition.
Warren responds:
I take Warren to be saying that the expected cost of not interning Japanese-Americans is significantly higher after we update on E than it was before we updated on E. Letting D be the “default” action in which we don’t intern Japanese-Americans, Warren is asserting that EU(D | E) << EU(D).
The above assertion is the conclusion of Warren’s reasoning. If we can show that this conclusion follows from correct Bayesian reasoning from a psychologically realistic prior, plus whatever evidence he explicitly adduces, then the quote cannot serve as an example of the fallacy that Eliezer describes in this post.
Now, we may think that that “psychologically realistic prior” is very probably based in turn on “fear plus poor thinking”. But Warren doesn’t explicitly show us where his prior came from, so the quote in and of itself is not an example of an explicit error in Bayesian reasoning. Whatever fallacious reasoning occurred, it happened “behind the scenes”, prior to the reasoning on display in the quote.
Continuing with my parsing, Warren goes on to say:
Let Q be the proposition that there is a Japanese fifth column in America, and it will perform a timed attack, but right now it is lulling us into a false sense of security.
I take Warren to be claiming that p(Q | E) >> p(Q), and that p(Q | E) is sufficiently large to justify saying “I believe Q”.
It remains only to give a psychologically realistic prior distribution p such that the claims above follow. That is, we need that
p(R) is sufficiently large to justify saying “R has high probability”,
p(E) = 1 - epsilon,
EU(D | E) << EU(D),
p(Q | E) >> p(Q),
p(Q | E) is sufficiently large to justify saying “I believe Q”.
This will suffice to invalidate the Warren quote as an example for this post.
It is a mathematical fact that such priors exist in an abstract sense. Do you think it unlikely that such a prior is psychologically realistic for someone in Warren’s position? I think that selection effects and standard human biases make it very plausible that someone in his position would have such a prior.
If you’re still skeptical, we can discuss which priors are psychologically realistic for someone in Warren’s position.
Warren stated in the quote that the lack of any subversive activity was the most convincing factor of all the evidence he has that the 5th Column would soon commit subversive activity.
The problem here should be pretty obvious.
As soon as any subversive activity occurs, the evidence that the 5th Column is going to commit subversive activity clearly just went down! And since the lack of evidence was the strongest evidence for this fact, the fact that the “lack of evidence” is now 0 (either evidence exists or it does not; there are no degrees for this type of evidence) makes it impossible for the 5th Column to have committed the subversive activity!
The absurdity of this reasoning should be obvious, and it should be thrown out immediately. The lack of subversive activity was clearly not evidence that the 5th Column was planning something. It could not be. You might think the 5th Column was planning something based on other evidence, and that is perfectly fine, but your reasoning for the risk of a subversive activity cannot be based on the lack of any subversive activity. It must be based on other evidence or it invalidates itself.
(Emphasis added.)
I just don’t see that in the quote. Here is the Warren quote from the OP:
His claim isn’t that subversive activity will start soon. The claim is that subversive activity will be “timed just like Pearl Harbor was timed”. I read this to mean that he anticipates a centrally-orchestrated, synchronized, large-scale attack, of the sort that could only be pulled off by a disciplined, highly-competent fifth column.
If they had seen small, piece-meal efforts at sabotage, then that would have been evidence against a competent fifth column. That is, P(there is a competent fifth column | there has been piece-meal sabotage) < P(there is a competent fifth column).
Therefore, not seeing such efforts is evidence for a competent fifth column: P(there is a competent fifth column | there has been no piece-meal sabotage) > P(there is a competent fifth column). This is a direct algebraic consequence of Bayes’s formula.
Of course, seeing no piece-meal sabotage is also evidence for there being no fifth column at all. But if your prior for “no fifth column” is sufficiently low, it still makes sense to spend most of your effort on interpreting what the no-sabotage evidence says about the nature of the fifth column, given that it exists. And what it says, given that there is a fifth column, is that the fifth column is probably marshaling its forces to strike a major blow. (Or at least, that’s what the no-sabotage evidence says under the right prior.)
Scattered and piecemeal acts of sabotage would show that the fifth column is incompetent. So such activity would make “our situation” less “ominous”. This is consistent with Warren’s view. Such sabotage wouldn’t make the probability of subversive activity go down, but Warren doesn’t say that it would. But such sabotage would make the probability of sabotage comparable to Pearl Harbor go down. That is Warren’s claim.
This is where we disagree. It’s not a matter of “sabotage” vs. “no sabotage”. Incompetent sabotage is different from competent sabotage. Warren has a prior that assigns a high prior probability to the existence of a fifth column. His priors about how fifth-columns work, as a function of their competence, are evidently such that our significantly-probable states, in increasing order of ominousness, are
having seen incompetent sabotage,
having seen no sabotage yet,
having seen competent sabotage.
Warren believed that we were in the middle state.
In my previous comment, I gave a Bayesian explanation of how the lack of subversive activity could be evidence that we are in a more dangerous situation than we would have been if we had seen evidence of subversion. That is, given the right priors, the lack of subversive activity could be “ominous”. Can you point to an error in my reasoning?
That makes no sense at all. How can a fact be both for and against the same thing? You can’t split the evidence. You can’t say 20% of the time it means there isn’t a 5th Column and 80% of the time it means there is.
What you can say is that 80% of the time when no sabotages occur it is because the 5th Column is biding its time. The other 20% of the time when no sabotages occur it is because the 5th Column is not biding its time. To make that claim, though, you need strong evidence that the 5th Column is planning sabotage at some point. There was none.
Let’s go back to Pearl Harbor, which he references. Was the fact that Japan had never attacked the US strong evidence that Japan was going to commit a surprise attack at some point?
OF COURSE NOT!
The fact is we had very little evidence at all that Japan might want to attack—we had no reason to suspect them.
In the same way, the lack of sabotage could not possibly be evidence that the 5th Column was going to attack. It simply makes no sense.
There might be other things, like documents that Japan was planning war against the US, or perhaps a recon mission showed Japanese fleets crossing the Pacific toward Hawaii, but the fact that Japan had not attacked was not, and still is not, evidence that they would attack.
To take it further, is the fact that Japan hasn’t attacked us in the last 70 years evidence that Japan is planning a major strike against the US? Is it evidence enough to convince you that we should overthrow Japan and install a puppet government, just in case?
I hope not. This is Warren’s exact argument applied to an even more relevant case than he was making, and it is still completely absurd.
Again, the absurdity of this line of reasoning should be obvious.
It is possible to have two mutually exclusive propositions, Q and R, such that some observation E is both evidence for Q and evidence for R. That is, it is possible to have both p(Q|E) > p(Q) and p(R|E) > p(R), even though Q implies not-R.
Do you disagree?
In my argument above,
Q is “there is a competent fifth column”,
R is “there is no fifth column”, and
E is “there has been no piece-meal sabotage”.
Before moving on to the other issues, does this part make sense now?
ETA: It’s important to note that Q and R, while mutually exclusive, are not exhaustive. A third possibility is that there is fifth column, but it’s incompetent. Q is not equivalent to not-R.
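The claim that one observation can raise the probability of two mutually exclusive hypotheses at once is easy to check numerically. A sketch with hypothetical priors and likelihoods (my numbers, not anything from the discussion):

```python
# Hypothetical numbers only. Three exhaustive hypotheses:
#   Q: competent fifth column, M: incompetent fifth column, R: no fifth column.
# E: "no piece-meal sabotage has been seen".
prior = {"Q": 0.3, "M": 0.4, "R": 0.3}
lik = {"Q": 0.9, "M": 0.2, "R": 0.9}  # P(E | hypothesis)

p_e = sum(prior[h] * lik[h] for h in prior)
posterior = {h: prior[h] * lik[h] / p_e for h in prior}

assert posterior["Q"] > prior["Q"]  # E is evidence for Q...
assert posterior["R"] > prior["R"]  # ...and for R, though Q and R exclude each other
assert posterior["M"] < prior["M"]  # all the lost probability mass drains from M
```

This works precisely because Q and R are not exhaustive: the probability mass that Q and R gain comes out of the third hypothesis M.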
Warren did not say anything about piece-meal sabotage. He called it a “lack of subversive activity”, and he didn’t put limits on what type of subversion.
The way you described E is also counter-intuitive and confusing, and is not the way Warren described it. E must be that there has been subversive activity; that’s what Warren described. He said the lack of E (~E) was evidence of Q, i.e., p(Q|~E) > p(Q). That’s fine, as long as E is then evidence against Q, i.e., p(~Q|E) > p(~Q).
Since ~E is weak evidence for Q, E is strong evidence for ~Q. The next subversive activity of any kind will be the first instance of E, and it will invalidate his entire reasoning for detaining the Japanese-American citizens.
Okay, replace my earlier definition of E with
E is “we have seen no subversive activity”.
Do you agree that, under some priors, you could have p(Q|E) > p(Q) and p(R|E) > p(R), even though Q implies not-R?
Set aside the question of whether these are reasonable priors. My point was only this; Warren didn’t make the simple mistake with the probability calculus that Eliezer thought Warren made. He wasn’t simultaneously asserting p(H|E) > p(H) and p(H|~E) > p(H). That would be wrong under any prior, no matter how bizarre. But it’s not what Warren was doing.
What Warren said is consistent with coherent Bayesian updating, even if he was updating on a bizarre prior. It might have been wrong to put a high prior probability on subversive activity, but the probability calculus doesn’t tell you how to pick your prior. All I am saying is that the Warren quote, in and of itself, does not constitute a violation of the rules of the probability calculus.
Maybe Warren committed such a violation earlier on. Maybe that’s how he arrived at such a high prior for the existence of subversive activity. But those earlier steps in his reasoning aren’t laid out before us here, so we can’t point to any specific misapplication of Bayes’s rule, as Eliezer tried to do.
I don’t like the way you describe that. It is confusing. The evidence is subversive activity. You cannot go out and look for no subversive activity, that makes no sense. You have to look for subversive activity. I’m not sure why you’re fighting so hard for this point, since not finding something suggests just as much as finding something does. The only reason I suggest a change is for clarity. I don’t want to think about no subversive activity and not no subversive activity, I want to think about subversive activity or no subversive activity. There is no difference, the second is simply less confusing.
E is subversive activity, and Warren’s position is p(Q|~E).
Absolutely. I said so two posts up. The question is not about Q and R, though, it’s about Q and ~Q.
But the whole argument is about the priors. The reason Warren’s position is nonsensical is not because he believes a lack of subversion suggests some fact, it’s that he argues that a lack of subversion suggests a fact, and then behaves in a manner counter to his argument. I’ve been arguing the fact that Warren argues p(Q|~E), but the reason he is locking up the Japanese-Americans is because he expects p(E|Q). The only way p(E|Q) makes any sense is if p(Q|E) is also true.
Warren’s fundamental fear is based on p(E|Q) - that is, the 5th Column is plotting and scheming, and this will lead to subversion. The argument he uses to support this, however, is that p(Q|~E). The two positions are inversely related. If p(E|Q) is strong, then p(Q|~E) must be weak.
In other words, if p(E|Q) is strong, and p(Q) is high, then p(E) should be very high (because Q implies E), and p(~E) should be very small. Yet a very high p(~E) is used as evidence for Q. That makes no sense. If p(E|Q) is high, then ~E can exist in spite of Q, but it cannot exist because of Q.
The only way this is at all tenable is if p(E|Q) and p(~E|Q) are both true. In which case, neither E nor ~E is evidence of Q.
That’s the whole point.
The whole point of this discussion is that his reasoning does not coincide with his actions. Thus one or the other is wrong.
Doesn’t his position make sense if he believes that:
if there’s no organized fifth column, we should see some intermittent, disorganized sabotage, and
if there is an organized fifth column, we should see NO sabotage before some date, at which there is a devastating attack
?
Of course, I agree that it’s likely he would have made a different argument if he had seen evidence of sabotage—but as presented it seems his position is at least potentially coherent.
But what is the date? Is it 2 months? 6 months? A year? 5 years? What if it never happens? If nothing happens in 200 years, does that mean we must be absolutely certain that a fifth column is planning an attack?
It’s not evidence, it’s a lack of evidence. That’s the point, and that’s the problem.
Warren states it is his most ominous evidence that they are planning something. What evidence? It’s not there, it doesn’t exist. His whole position is based on the idea that the lack of evidence indicates they are planning something, yet he has nothing to suggest that such a lack of evidence indicates anything. The only thing the fact that they haven’t attacked yet is evidence for is that they haven’t attacked yet. Nothing more, nothing less, unless you have a pattern of behavior to base that on. There was no such pattern for the fifth column.
That’s a different discussion. As you said,
I was simply arguing that your characterization of his argument as inherently self-contradictory was incorrect. Yeah, his supposed priors are probably wrong, but that’s a different issue.
Okay, say it’s 6 months. Does that make his argument non-contradictory?
If I predict it’s going to rain soon because of a long dry spell, when it rains that doesn’t prove me wrong.
Of course not, you have a pattern of weather to base that on, in which dry spells were consistently followed by rain.
Where is the basis for a lack of subversion? Historically, a lack of subversion has meant no subversion was ever planned, on what basis is this different for the 5th Column?
Yes, because now your evidence is that, if there is a 5th column, major subversion occurs every 6 months. This is testable.
His classification of a lack of subversion as evidence that the 5th Column is planning a major strike flies in the face of history—he has a small handful of anomalies to rely on. That’s all.
I’ll point to Eliezer’s example of mammograms in his “Intuitive Explanation of Bayes’ Theorem” to help describe what I mean, particularly since it’s pretty easy to find a very in-depth Bayesian analysis of this particular problem by Eliezer himself. In the example, 1% of women get breast cancer. 80% of the time a mammogram will test positive if a woman has breast cancer, 20% of the time it will test negative. 9.6% of the time a mammogram will test positive for someone who doesn’t have breast cancer. This works out to a 7.8% likelihood that a woman has cancer if she gets a positive result on a mammogram. Conversely, getting a negative result on a mammogram results in a 0.22% likelihood that a woman has cancer.
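Plugging the numbers from Eliezer's essay (1% base rate, 80% sensitivity, 9.6% false-positive rate) through Bayes' theorem reproduces both posterior figures:

```python
# Numbers from Eliezer's "Intuitive Explanation of Bayes' Theorem".
p_cancer = 0.01              # base rate
p_pos_given_cancer = 0.80    # sensitivity (true-positive rate)
p_pos_given_healthy = 0.096  # false-positive rate

p_pos = p_cancer * p_pos_given_cancer + (1 - p_cancer) * p_pos_given_healthy
p_cancer_given_pos = p_cancer * p_pos_given_cancer / p_pos            # ~7.8%
p_cancer_given_neg = p_cancer * (1 - p_pos_given_cancer) / (1 - p_pos)  # ~0.22%

assert abs(p_cancer_given_pos - 0.078) < 0.001
assert abs(p_cancer_given_neg - 0.0022) < 0.0001
```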
In the Warren scenario, the 5th Column planning an attack is like the 1% breast cancer rate, and finding evidence of subversion is the mammogram. Not finding any evidence of subversion is the exact same as getting a negative on a mammogram in the breast cancer scenario. It has happened, sure, but it is extremely rare and in the vast majority of cases no subversion means no planned subversion. The problem is you don’t have a history of major subversion without evidence of subversion. Throughout history it has been the exact opposite, therefore a lack of subversion must have a very low probability for preceding a major subversive attack.
Warren’s position is like saying he believes there is a high risk of breast cancer because the mammogram came up negative. The only reasonable response to that is WTF? Yes, it’s possible that the fifth column is planning something, but you cannot assume that because the evidence says otherwise, that’s not reasonable at all. You can come to the conclusion through other evidence, but not with that evidence.
What Warren managed to do is take evidence that did not support his fear and claim that it did. It doesn’t make any sense, it is an unreasonable position to take.
Now, if Warren had said “There is a very low likelihood that the 5th Column is planning a surprise attack, but I am not willing to take that risk” then it’s an entirely different situation, and that is a completely reasonable response. If breast cancer means being forced to fight through Dante’s 9 levels of hell, then it might be worth a double-mastectomy in spite of the 1 in 500 chance that it would happen.
I was wrong when I said that a single case of subversion falsifies his position. Obviously surprise attacks exist, so that was clearly incorrect, and I think it led to a lot of the disagreement in the discussion. I was looking at the problem too narrowly. However, the reason surprise attacks are a surprise is because they are very rare, so the fact that nothing has happened must still overwhelmingly support the idea that nothing will happen. In other words, it is overwhelming evidence against an attack, not for it. That’s the only reason surprise attacks work at all: because you have no evidence to suggest they are coming (and that they haven’t attacked is not such evidence).
Hopefully I’ve explained myself adequately now.
Our evidence is always only what we have observed. Maybe it is strange to say that you “looked for no subversive activity”. But you certainly can look for subversive activity and fail to find it. Not seeing subversive activity when you looked for it is Bayesian evidence. But it would be an error to condition on there being no subversive activity at all, even hidden activity. That would be going beyond your observations. You can only condition on what you saw or didn’t see when you looked.
Okay, I think that we’re homing in on the nub of the disagreement.
The propositions in question are
“There is a fifth column that will coordinate a Pearl-Harbor type attack”, and
“There is no fifth column”.
These are Q and R, respectively. They are not negations of each other. Do you agree?
Again, I cannot see how you can observe nothing and call it evidence. It is semantics, really, since it makes no difference for the equations, but it makes ~E a positive observation of something and E a negative observation, which is, to me, silly.
Yes. Though, again, I’d rather R be “There is a 5th column” to keep it from being confusing.
With Q there was no evidence that the fifth column was coordinating a timed attack, yet Warren’s strongest evidence for it was that there was no evidence for it.
Pearl Harbor types of evidence are black swans. You can’t just pull them out of the air and add them to your reasoning when you have no solid justification for it. There are a billion other black swans he could have used—what if the Japanese are actually all vampires and had designs on draining the Americans dry? You’ve got no evidence they aren’t, so clearly they are just biding their time!
The former is slightly more reasonable, since something similar had happened recently (though in an entirely different context), but it is no more justified as evidence than the evidence in the vampire scenario.
You must look for other evidence that suggests the 5th column was planning an attack, the fact that you have not been attacked yet is not in any way evidence that they are planning an attack. It is only really evidence that, if they were planning something, they hadn’t done it yet. That’s all you can get from that—just a guess.
To that end, Warren had no evidence that an American chapter of the 5th Column even existed. There was secret evidence to that effect, but he was not privy to it. He was making the whole thing up because he was afraid.
It was completely unjustified.
Besides, it doesn’t make sense. Timed attacks are designed to catch you off guard. After Pearl Harbor, people were always on guard. It wouldn’t have had the same effect; a much more effective strategy would have been smaller, guerrilla-type sabotages from within, which they also had zero evidence of.
He didn’t observe “nothing.” He observed factories and shipyards and so forth, continuing to operate without apparent sabotage.
Think of ~E as meaning “We observed something, but that ‘something’ was something other than subversive activity. That is, what we observed was a member of the class of all things that aren’t subversive activity.”
Is this still “silly”?
This wouldn’t change what Warren is saying. It would only change the symbols that we use to restate what he is saying. We would now write ~R to mean “There is no fifth column”. So Warren’s claim, on my reading, would be
p(Q|E) > P(Q) and p(~R|E) > p(~R).
That is, I would just replace “R” everywhere with “~R”. Why is this less confusing? Not observing subversive activity is evidence for there being no fifth column. But it is also evidence for there being a fifth column that is marshaling its resources for a Pearl-Harbor type attack.
Maybe all of these double-negatives are confusing, but that is what the propositional calculus is for: it makes it easy to juggle the negatives just like negation signs in algebra.
My biggest problem with calling a lack of evidence evidence is that it is unnecessary in the first place, which makes it confusing when it comes to discussing it.
Also, I’m not arguing for or against the existence of the fifth column. I think I was unclear about that earlier, and I think we probably got a signal or two crossed. The fifth column was a fact, it existed in Japan, and it is the reason they were afraid of a fifth column in America.
Warren also never argued their existence, only their activity, so I don’t see why you have a Q and an R at all. Re-read the statement, he took the 5th column’s existence as a given.
What I’m arguing is the idea that a lack of evidence of subversive activity can be strong evidence that a plan similar to Pearl Harbor is being hatched.
To that end, I went ahead and made some calculations.
These are my assumptions, and I feel they are historically reasonable (I didn’t cite studies, so I can’t exactly call them accurate):
1% of all subversive plots are surprise plots (à la Pearl Harbor). I call this p(subversion).
Evidence for such plots I call p(evidence).
90% of the time when there is such a plot, there is evidence of it before the fact. I call this p(evidence|subversion).
This is the critical part of Warren’s statement—he is essentially assuming the opposite of what I say here, and I assert this is not reasonable given what we know of such plots. There was even evidence of the Pearl Harbor plot beforehand. An attack was expected and planned for; it was really only the location (and the lack of a prior declaration) and precise timing that was a surprise militarily. I’ve frankly never heard of a case of a surprise attack with absolutely no evidence that it would occur, so I believe I am being extremely generous with this number. I would not accept lowering this number much further than this.
Last, I assert that 5% of the time when evidence is found for subversion, no subversion actually occurs. Again, I think this is a reasonable number, and probably too low. I wouldn’t have a problem adjusting this number down as low as 1%.
Everything else is calculated based on these three assumptions.
p(subversion) = 1% p(~subversion) = 99%
p(evidence|subversion) = 90% p(~evidence|subversion) = 10% p(evidence|~subversion) = 4.95% p(~evidence|~subversion) = 95.05%
p(evidence) = 5.80% p(~evidence) = 94.20%
p(subversion|evidence) = 15.52% p(~subversion|evidence) = 84.48% p(subversion|~evidence) = 0.11% p(~subversion|~evidence) = 99.89%
So my conclusion on the question of how likely a lack of evidence implies a plot for subversion is drawn from the last two figures. Given my assumptions, which I believe are consistent with history, 99.89% of the time when there is no evidence of a plot for a surprise attack, there is no plot for a surprise attack. This means 0.11% of the time when there is no evidence of a plot, there actually is a plot.
Thus, the likelihood of a Pearl Harbor-style plot when there is no evidence to that effect is 0.11%.
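The arithmetic above can be checked by running the stated assumptions through Bayes' theorem directly (using the derived 4.95% false-positive rate). A couple of the intermediate percentages come out slightly different from the comment's, but the bottom-line 0.11% figure checks out:

```python
# The three assumptions from the comment above.
p_sub = 0.01                 # P(subversion): a surprise plot exists
p_ev_given_sub = 0.90        # P(evidence | subversion)
p_ev_given_not_sub = 0.0495  # the derived false-positive rate

p_ev = p_sub * p_ev_given_sub + (1 - p_sub) * p_ev_given_not_sub
p_sub_given_ev = p_sub * p_ev_given_sub / p_ev
p_sub_given_no_ev = p_sub * (1 - p_ev_given_sub) / (1 - p_ev)

assert abs(p_ev - 0.0580) < 0.0005               # p(evidence): ~5.8%
assert abs(p_sub_given_ev - 0.1552) < 0.001      # p(subversion|evidence)
assert abs(p_sub_given_no_ev - 0.0011) < 0.0001  # the 0.11% bottom line
```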
It looks like our views have converged. What you wrote above seems to be in agreement with what I wrote here:
The priors that you use in your calculations look approximately right to me. Warren evidently arrived at different numbers prior to the reasoning that Eliezer quoted, so I agree that he probably made some kind of Bayesian error to get to that point. But I would be hard pressed to say exactly why your numbers seem right to me, so I can’t point to exactly where Warren made his mistake. Whatever his mistake was, it was made prior to the reasoning that Eliezer quoted.
The upshot is that we do not have this nice real-life single-paragraph encapsulation of mathematically fallacious Bayesian reasoning.
Doesn’t his position make sense if he believes that:
if there’s no organized fifth column, we should see some intermittent, disorganized sabotage, and
if there is an organized fifth column, we should see NO sabotage before some date, at which there is a devastating attack
?
Of course, I agree that it’s likely he would have made a different argument if he had seen evidence of sabotage—but as presented it seems his position is at least potentially coherent.
I have to think that there is another question to be considered: What are the odds that Japanese-Americans would commit sabotage we could detect as sabotage? If the odds are very high that detectable sabotage would occur, then the absence of sabotage would be evidence in favor of something preventing sabotage. A conspiracy which collaborates with potential saboteurs and encourages them to wait for the proper time to strike then becomes a reasonable hypothesis, if such a conspiracy would believe that an initial act of temporally focused sabotage would be effective enough to have greater utility than all the acts of sabotage which would otherwise occur before the time of the sabotage spree.
That is a good question, but it doesn’t help Warren’s reasoning.
His reasoning was not that there was a high probability that they had committed acts of subversion that were undetectable. His reasoning was that because there was no evidence of subversion, this was evidence of future subversion.
This line of reasoning invalidates itself as soon as the first evidence of subversion is discovered, since the reason subversion was imminent was because there was no evidence of subversion.
In its most simple form, Warren was saying: “Because there is no evidence that the ball is blue, the ball is blue.”
I don’t make any claims about undetected sabotage, I believe it to be statistically meaningless for these purposes. The detection clause was intended to make my statements more precise. Undetectable sabotage only modifies the odds of detectable sabotage, because it’s clearly preferable to strike unnoticed. The conditional statement “If the odds are very high...” eliminates all scenarios where those odds are not very high, which brings this down to Warren assuming an ordering factor in the absence of random events. If you’d like to include undetected sabotage, then you also need to consider the odds that untrained saboteurs would be capable of undetectable sabotage.
Warren wasn’t saying “Because there is no evidence that the ball is blue, the ball is blue.” He was saying “The sun should be in the sky. I cannot see the sun. Therefore, it has been eaten by a dragon.” He was wrong, as it turned out, the eclipse was caused by the moon, and the dragon he feared never existed. But if the dragon he predicted did exist, the world might look much like it did at the time of the predictions.
The problem with this scenario, as presented, is that it assumes that “sabotage” is a binary variable. If that were the case, the pool of possibilities would consist of: (1) Fifth Column exists & sabotage occurs, (2) Fifth Column exists & sabotage does not occur, and (3) Fifth Column does not exist & sabotage does not occur (presuming that sabotage, as defined in the scenario, could only be accomplished by Fifth Column). In that case, necessarily, lack of sabotage could only reduce the probability of (1), and therefore could only reduce the probability of the existence of Fifth Column.
However, sabotage is not a binary variable. Presumably, there is some intermediate level of sabotage (call it “amateur sabotage”) that one might expect to see in the absence of a well-organized Fifth Column. In this case, we need to add 2 possibilities to the pool: (4) Fifth Column exists & amateur sabotage occurs and (5) Fifth Column does not exist & amateur sabotage occurs.
Given the above, the complete absence of sabotage would reduce the probability not only of (1), but also of (4) and (5). Depending on the prior probability of (5) occurring relative to the probabilities of (1) and (4), Warren’s argument may be perfectly reasonable.
If absence of proof is not proof of absence, but absence of evidence is evidence of absence, what makes proof different from evidence?
Example: we currently have no evidence supporting the existence of planets orbiting stars in other galaxies, because our telescopes are not powerful enough to observe them. Should we take this as evidence that no galaxy except ours has planets around its stars?
Another example: before the invention of the microscope, there was no evidence supporting the existence of bacteria because there were no means to observe them. Should this fact alone have been interpreted as evidence of absence of bacteria (even though bacteria did exist before microscopes were invented)?
Hi DevilMaster, welcome to LessWrong!
Generally, the answer to your question is Bayes’ Theorem. This theorem is essentially the mathematical formulation of how evidence ought to be weighed when testing ideas. If the Wikipedia article doesn’t help you much, Eliezer has written an in-depth explanation of what it is and why it works.
The specific answer to your question can be revealed by plugging into this equation, and defining “proof”. We say that nothing is ever “proven” to 100% certainty, because if it were (again, according to Bayes’ Theorem), no amount of new evidence against it could ever refute it. So “proof” should be interpreted as “really, really likely”. You can pick a number like “99.9% certain” if you like. But your best bet is to scrap the notion of absolute “proof” and start thinking in likelihoods.
You’ll notice that an integral part of Bayes’ Theorem is the idea of how strongly we would expect to see a certain piece of evidence. If the Hypothesis A is true, how likely is it that we’ll see Evidence B? And additionally, how likely would it be to see Evidence B regardless of Hypothesis A?
For a piece of evidence to be strong, it has to be something that we would expect to see with much greater probability if a hypothesis is true than if it is false. Otherwise there’s a good chance it’s a fluke. Furthermore, if that evidence is something that we wouldn’t expect to see much either way, then it’s not very informative when we don’t see it.
So you see how this bears on your examples. I’m not especially familiar with astronomy, so I don’t know whether it’s true that we haven’t seen other galaxies with planets, or how powerful our telescopes are. But let’s assume that what you’ve said is all true.
If we know our telescopes aren’t powerful enough to see other planets, then the fact that they don’t see any is virtually zero evidence. The probability of us seeing other planets is basically the same whether they’re out there or not (because we won’t see them either way), so our inability to see them doesn’t count as evidence at all. This test doesn’t actually tell us anything because we already know that it will tell us the same thing either way. It’s like counting how many fingers you have to determine if the stock market will go up or down. You’re gonna get “ten” no matter what, and this tells you nothing about the market.
The same reasoning applies to the bacteria example. If we’re not more likely to see them given that they’re real than we are given that they’re not real, then our inability to see them is not evidence in either direction. The test is a bad one because it fails to distinguish one possibility from the other.
But all this isn’t to say that it would be valid to reject these notions based on the absence of these evidences alone. There may be other tests we can run that would be more likely to come out one way or the other based on whether the hypothesis is true. So no, it wouldn’t make sense to reject the existence of planets or bacteria, because in both of your examples people are using tests that are known to be useless.
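The "useless test" point can be made concrete: when the likelihoods under both hypotheses are equal, the posterior equals the prior and the observation carries no information. A small sketch (illustrative numbers only, not from the comment):

```python
def posterior(prior, p_e_given_h, p_e_given_not_h):
    """P(H|E) by Bayes' theorem."""
    p_e = prior * p_e_given_h + (1 - prior) * p_e_given_not_h
    return prior * p_e_given_h / p_e

# Useless test: the telescope sees nothing whether or not the planets exist,
# so the likelihoods are equal and the observation changes nothing.
assert abs(posterior(0.9, 1.0, 1.0) - 0.9) < 1e-12

# Informative test: absence of expected evidence lowers the probability.
# Say P(no sabotage | fifth column) = 0.3 and P(no sabotage | none) = 0.9.
assert posterior(0.5, 0.3, 0.9) < 0.5
```

The strength of the update is governed entirely by the ratio of the two likelihoods; a ratio of 1 means the test cannot distinguish the hypotheses.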
If we’re not more likely to see them given that they’re real than we are given that they’re not real, then our inability to see them is not evidence in either direction. The test is a bad one because it fails to distinguish one possibility from the other
Thank you. That’s what I did not understand.
For a sense of scale: the most distant extrasolar planet is 21,500 ± 3,300 light years away, and rather hypothetical—look at the size of the error bar on that distance.
The nearest dwarf satellite galaxy is 25,000 light years away, so I suppose we’ve got a chance of seeing planets there.
The nearest actual galaxy is Andromeda, at 2.5 million light years.
Yes we do. We have evidence about how physics (i.e., gravity) works and about the formation phases of the universe. That Earth and the other planets here exist is evidence. We just didn’t happen to have one particular kind of evidence (seeing them). And no, until we developed (recently) the ability to see evidence of them ourselves, you would not have been entitled to that piece of evidence either. Because we should not have expected to see them. Seeing planets with tech that should not see them would have been evidence that something else was wrong.
Proof is absolute, evidence is probabilistic.
No, absence of evidence is not evidence of absence if evidence is impossible, but it is evidence of absence if evidence is possible but absent.
(try saying that quickly 3 times :)
Benelliot and others have explained this well, but note that we do have direct evidence for planets in other galaxies. We’ve had it for about two years.
The simple answer is that absence of proof towards a possibility is not proof that the possibility cannot exist, merely that there is no actual proof either way. However, in this specific case, the absence of evidence pointing towards the existence of a fifth column that is engaging in sabotage is evidence that indicates that the fifth column does not exist. I agree that the specific terminology is a bit confusing, but that is the simple explanation as to your question.
Proof means “extremely strong evidence”. Absence of proof and absence of evidence are both evidence of absence. Their strength is determined by the probability with which we’d expect to see them, conditional on the thing existing and not existing.
There is more discussion of this post here as part of the Rerunning the Sequences series.
A quick proof: http://blog.sigfpe.com/2005/08/absence-of-evidence-is-evidence-of.html
Another proof & discussion: http://kim.oyhus.no/AbsenceOfEvidence.html
I’m pretty sure you just used this as a rhetorical tool, but by Bayesian theory, isn’t it impossible to construct a hypothesis which allocates a probability of zero to an event? But don’t you say exactly that in your text?
I mean, allocating a probability of zero to an event implies that it doesn’t matter what evidence is presented to you; the probability of that particular event will never become anything other than zero. And as it is impossible to disprove something in the same way it is impossible to prove something, a hypothesis which allocates a probability of zero to an event cannot be true and is therefore of no use as a hypothesis in Bayesian math. Someone please correct me if I’m wrong...
A simple counterexample (hopefully shorter and clearer than the other, more in-depth criticism by Michael Sullivan) is the scenario where Warren had exactly equal priors for an organized fifth column, an unorganized fifth column, and no fifth column.
p(organized) = .33
p(unorganized) = .33
p(none) = .33
If he was practically certain that an organized fifth column would wait to make a large attack, and that an unorganized fifth column would make small attacks, then upon seeing no small attacks his new probabilities would be approximately:
p(organized) = .5
p(none) = .5
So he would be correct in his statement of concern (assuming an organized fifth column would be very bad), even though the probability of no fifth column was also increased.
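The three-hypothesis update above can be checked numerically. The likelihoods here are assumed stand-ins for "practically certain": an organized fifth column almost certainly waits, an unorganized one almost certainly makes small attacks, and no fifth column never attacks.

```python
# Priors: equal over the three hypotheses.
priors = {"organized": 1/3, "unorganized": 1/3, "none": 1/3}

# P(no small attacks | hypothesis), per the commenter's stipulations.
likelihoods = {"organized": 0.99, "unorganized": 0.01, "none": 1.0}

unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
total = sum(unnormalized.values())
posteriors = {h: unnormalized[h] / total for h in priors}

for h, p in posteriors.items():
    print(f"{h}: {p:.2f}")
```

Seeing no small attacks pushes "unorganized" to nearly zero and splits the mass roughly evenly between "organized" and "none", matching the 0.5/0.5 figures above.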
The video game Star Ocean: Till the End of Time has a model of interstellar society that tries to resolve Fermi’s conundrum. Planets capable of interstellar travel form an accord that treats less advanced civilizations as nature preserves and agree not to contact or help them. This model does have several problems, such as that their communication wavelengths should still be visible to us (unless they have some undiscovered form of communication?), and that sufficiently advanced societies should face an ethical dilemma in allowing intelligent species to go through dark ages and protracted suffering for the sake of “not interfering in the development of their unique culture.” Most rationalists will likely agree that we would trade slightly more homogenized art and culture for cures for disease and death. An absence of evidence is still stronger evidence of absence, with the only alternative being a series of suspiciously convenient excuses.
Sounds similar to the Federation’s Prime Directive in Star Trek.
I’m uncomfortable with the resemblance of this to an argument by definition. It also ignores the more reasonable view that “slightly more homogenized art and culture” isn’t usually the worst consequence of a more powerful (I won’t say “advanced”) society trying to “help” a less powerful one. (Not that that increases the prior probability of space aliens.)
Glad you called me out. There are much worse possible outcomes for encountering advanced intelligence, or at least more varied possibilities, and I note that I need to work on adjusting my expectations down. Still, I suppose what I should have stated is, if there are benevolent aliens out there that are aware of us I’d sure like them to make with the “we come in peace” already and just cross my fingers that their first contact doesn’t play out like The Day The Earth Stood Still. But then I have to follow my own advice from the beginning of this comment and be more pessimistic, so it would be exactly like TDTESS, except that Gort would just follow through and blow us up. Hmm… Okay, the Prime Directive (or Underdeveloped Planet Preservation Pact as in my original example) makes much more sense to me now. Thank you for helping me notice my confusion, Document!
I disagree with the article for the following reason: if I have two hypotheses that both explain an “absence of evidence” occurrence equally well, then that occurrence does not give me reason to favor either hypothesis and is not “evidence of absence.”
Example: Vibrams is a brand of toe-shoes whose maker recently settled a big suit because it couldn’t justify its claims of health benefits. We have two hypotheses: (1) Vibrams work, (2) Vibrams don’t work. Now, if a well-executed experiment had been done and failed to show an effect, that would be evidence against a significant benefit from Vibrams. However, if the effect were small or nobody had completed a well-executed experiment, I see no reason that (2) would fit the evidence better than (1), so we are justified in saying this absence of evidence is not evidence of absence.
Although the original saying, I think, was meant in the absolute sense (evidence meaning proof), it is still fitting in the probabilistic sense. Absence of evidence is only evidence of absence when combined with one hypothesis explaining an occurrence better than the other, so the saying holds.
In the situation you describe, the settlement is weak evidence for the product not working. Weak evidence is still evidence. The flaw in “Absence of evidence is evidence of absence,” is that the saying omits the detailed description of how to correctly weight the evidence, but this omission does not make the simple statement untrue.
This statement is technically true, but not in the way you’re using it.
Suppose Vibrams had been around for a thousand years. For a thousand years, people had been challenging their claims to health benefits in court. For a thousand years, time and again, Vibrams had been unable to credibly defend their claims. Would that make you any more skeptical of the claims in question, at least a little bit? If the answer is “yes”, you are agreeing that some very large number of such events constitutes evidence against Vibrams. I don’t see any way around concluding, from there, that at least one individual instance provides some nonzero amount of evidence—perhaps very small, but not zero.
“Vibrams work, but the effect is small and/or the experiment was shoddy” and “Vibrams don’t work” explain the outcome nearly equally well. They cannot explain it precisely equally well: the first hypothesis would assign a higher P(claims defended) than the second, because even small effects are sometimes correctly detected, and even shoddy experiments sometimes aren’t fatally flawed. So the second necessarily has a higher P(~claims defended) than the first. This difference is precisely the thing that makes (~claims defended) evidence for the second hypothesis.
Evidence is not proof. Depending on the ratios involved, it may constitute very weak evidence, sometimes weak enough that it’s not even worth tracking for mere humans: a .0001% shift is lost in the noise when people aren’t even calibrated to the nearest 10%.
If you have two hypotheses that both explain an “absence of evidence” precisely equally well, then you’re looking at something completely uncorrelated: trying to deduce the existence of a Fifth Column from the result of a coin flip. And if they explain it only nearly, but not exactly equally well, then you have evidence of absence—although maybe not very much, and maybe not enough to actually push you into the other camp.
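The "nearly but not exactly equal" point above can be made concrete with a likelihood ratio. The two probabilities below are assumed for illustration (they do not come from the thread): suppose the health claims would survive scrutiny with probability 0.10 if Vibrams work (small effect, shoddy experiments) and 0.05 if they don’t.

```python
import math

p_defend_given_work = 0.10      # assumed: P(claims defended | Vibrams work)
p_defend_given_not_work = 0.05  # assumed: P(claims defended | Vibrams don't work)

# The evidence actually observed: the claims were NOT defended.
likelihood_ratio = (1 - p_defend_given_not_work) / (1 - p_defend_given_work)
shift_in_bits = math.log2(likelihood_ratio)

print(f"likelihood ratio favoring 'don't work': {likelihood_ratio:.4f}")
print(f"log-odds shift: {shift_in_bits:.4f} bits")
```

The ratio comes out barely above 1, so the failed defense is genuine but very weak evidence against the product working: nonzero, yet small enough to be lost in human calibration noise, exactly as the comment says.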
Alternately, you might have alternative hypotheses that explain the absence equally well, but with a much higher complexity cost.
Warren’s full speech is available at archive.org: “Unfortunately, however, many of our people and some of our authorities and, I am afraid, many of our people in other parts of the country are of the opinion that because we have had no sabotage and no fifth column activities in this State since the beginning of the war, that means that none have been planned for us. But I take the view that that is the most ominous sign in our whole situation. It convinces me more than perhaps any other factor that the sabotage that we are to get, the fifth column activities that we are to get, are timed just like Pearl Harbor was timed and just like the invasion of France, and of Denmark, and of Norway, and all those other countries.” Hon. Earl Warren, pg 11011-11012, San Francisco Hearings, February 21 and 23, 1942, part 29, National Defense Migration https://archive.org/details/nationaldefensem29unit
So is there ever a time when you can use absence of evidence alone to disprove a theory, or do you always need other evidence as well? In some cases absence of evidence clearly does not disprove a theory: when quantum physics was first being discovered, there was not a lot of evidence for it. But can the inverse ever be true, where lack of evidence alone proves a theory false?
The idea of Bayesianism is that you think in terms of probability instead of true and false.
From the OP:
So, yes, absence of evidence can convincingly disprove a theory in some cases (although, as ChristianKI points out, Bayesians typically do not assign probabilities of 0 or 1 to any theory).
Didn’t you mean “the observation of no sabotage”?
Absence of evidence actually does NOT mean evidence of absence.
If we consider the simple correlation between the Fifth Column and acts of sabotage, then of course the lack of those acts decreases the probability that any hidden power exists. And it was quite foolhardy of the governor to build the cause-and-effect relationship he made up on this one fact alone.
But in the wider picture: can we be sure that acts of sabotage are the ONLY traits of a Fifth Column? In other words, we have to be more concerned about the information we don’t obtain. I really don’t know what traits a Fifth Column has, but in times of war it would be wise to play it safe.
More accurately, “absence of evidence you would expect to see if the statement is true” is evidence of absence.
If there’s no evidence you’d expect if the statement is true, absence of evidence is not evidence of absence.
For example, if I tell you I’ve eaten cornflakes for breakfast, no matter whether or not the statement is true, you won’t have any evidence in either direction (except for the statement itself) unless you’re willing to investigate the matter (like, asking my roommates). In this case, absence of evidence is not evidence of absence.
Now, suppose we meet in person and I tell you I’ve eaten garlic just an hour before. You’d expect evidence if that statement is true (bad breath), in this case, absence of evidence is evidence of absence.
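Both cases above can be run through the same update rule. All the probabilities here are assumed for illustration: bad breath is likely (0.9) given garlic and rare (0.1) otherwise, while the cornflakes claim produces no observable trace either way.

```python
# Posterior that a claim is true after observing (or not observing) a trace.
def posterior_true(prior, p_obs_if_true, p_obs_if_false, observed):
    p_t = p_obs_if_true if observed else 1 - p_obs_if_true
    p_f = p_obs_if_false if observed else 1 - p_obs_if_false
    return prior * p_t / (prior * p_t + (1 - prior) * p_f)

# Garlic: evidence (bad breath) is expected if true, so its absence
# pushes the probability of "ate garlic" well below the 0.5 prior.
garlic = posterior_true(0.5, 0.9, 0.1, observed=False)

# Cornflakes: no trace is expected whether or not the claim is true,
# so observing no trace leaves the prior untouched.
cornflakes = posterior_true(0.5, 0.0, 0.0, observed=False)

print(garlic, cornflakes)
```

With these numbers the garlic posterior drops to 0.1 while the cornflakes posterior stays exactly at 0.5: absence of evidence is evidence of absence only to the degree that the evidence was expected.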
So observing a lack of sabotage increases the probability that no Fifth Column exists.
No. Observing nothing carries no information. You can’t update the probability of the existence of a Fifth Column (one that doesn’t sabotage, or hasn’t sabotaged yet) based on the lack of sabotage. Sure, you can observe nothing and update your belief, reducing the likelihood of the existence of a Fifth Column that would (with some probability) have sabotaged by now.
It’s a mistake to say that lack of evidence is evidence of anything (the governor’s position).
It’s a mistake to say that lack of evidence is evidence of anything (your position).
I find what you’re writing very self-contradictory. If “observing nothing carries no information,” then you should not be able to use it to update belief. Any belief must be updated based on new information, and I would say observing nothing carries the information that the action (sabotage) which your belief predicted did not happen during the observation interval.
“If ‘observing nothing carries no information,’ then you should not be able to use it to update belief.” I agree.
“Any belief must be updated based on new information.” I agree. Observing nothing carries no information, so you don’t use it to update belief.
“I would say observing nothing carries the information that the action (sabotage) which your belief predicted did not happen during the observation interval.”
Yes, so if you observe no sabotage, then you do update about the existence of a fifth column that would have, with some probability, sabotaged by now (infinitely many possibilities). But you don’t update about the existence of a fifth column that doesn’t sabotage, or wouldn’t have sabotaged YET, which are also infinitely many possibilities. Possibilities aren’t probabilities, and you have no probability distribution over what kind of fifth column you’re dealing with, so you can’t do any Bayesian reasoning.
I guess it’s a general failure of Bayesian reasoning: you can’t update beliefs held with confidence 1, you can’t update beliefs held with confidence 0, and you can’t update undefined beliefs. So, for example, you can’t Bayesian-reason about most of the important things in the universe, like whether the sun will rise tomorrow, because you have no idea what causes that’s based on. You have a pretty good model of what might cause the sun to rise tomorrow, but no idea, complete uncertainty (not 0 with certainty, nor 1 with certainty, nor 50/50 uncertainty, just completely undefined certainty) about what would make the sun NOT rise tomorrow, so you can’t (rationally) Bayesian-reason about it. You can bet on it, but you can’t rationally believe about it.
Sorry, I don’t feel like completely understanding your POV is worth the time, but I did read your reply 2-3 times. I’ll respond in roughly the same order as your writing.
I’m not sure why infinity matters here; many things have infinitely many possibilities (like any continuous random variable), and you can still apply a rough estimate of the probability distribution.
I think this is an argument similar to the infinite recursion of “where do priors come from?” But a Bayesian update usually produces a better estimate than your prior (and would always produce a better one if you could do perfect updates, though that’s impossible), and you can use many methods to guesstimate a prior distribution.
Unknown unknowns are indeed a thing. You can’t completely rationally Bayesian-reason about them, but that doesn’t mean you can’t try. Eliezer never said you can become a perfect Bayesian reasoner either; he said you can attempt to reason better and strive to approach Bayesian reasoning.
No, you cannot. For things you have no idea about, there is no way to (rationally) estimate their probabilities.
No. There are many, many things that have “priors” of 1, 0, or undefined. These are undefined. You can’t know anything about their “distribution” because they aren’t distributions. Everything is either true or false, 1 or 0. Probabilities only make sense when talking about human (or, more generally, agent) expectations/uncertainties.
That’s not what I mean, and it’s not even what I wrote. I’m not saying “completely”. I said you can’t Bayesian reason about it. I mean you are completely irrational when you even try to Bayesian reason about undefined, 1, or 0 things. What would trying to Bayesian reason about an undefined thing even look like to you?
Do you admit that you have no idea (probability/certainty/confidence-wise) about what might cause the sun not to rise tomorrow? Like, is that a good example to you of a completely undefined thing, for which there is no “prior”? It’s one of the best to me, because the sun rising tomorrow is such a cornerstone example for introducing Bayesian reasoning.
But to me it’s a perfect example of why Bayesianism is utterly insane. You’re not getting more certain that anything will happen again just because something like it happened before. You can never prove a hypothesis/theory/belief right (because you can’t prove a negative); we can only disprove hypotheses/theories/beliefs. So, with the sun, we have no idea what might cause it to not rise tomorrow, so we can’t Bayesian ourselves into any sort of “confidence” or “certainty” or “probability” about it. A Bayesian alive but isolated from things that have died would believe itself immortal. This is not rational. Rationality is to just fail to disprove the null hypothesis, not to believe ever more strongly in the null just because disconfirming evidence hasn’t yet been encountered.
Back to the blog post: there are cases in which absence of evidence is evidence of absence, but this isn’t one of them. If you look for something a theory/hypothesis/belief predicts and you fail to find it, that is evidence against it. But “the Fifth Column exists” doesn’t predict anything (different from what “the Fifth Column doesn’t exist” predicts), so “the Fifth Column hasn’t attacked (yet)” isn’t evidence against it.
The philosophy Stack Exchange agrees.
Hang on, the Japanese example is flawed. There WAS an intelligence branch of the Japanese army; this would be well understood by any tactician. Seeing no evidence of its actions and inferring that this is due to its skill is not an irrational assumption.