What is Evidence?
The sentence “snow is white” is true if and only if snow is white.
—Alfred Tarski
To say of what is, that it is, or of what is not, that it is not, is true.
—Aristotle, Metaphysics IV
Walking along the street, your shoelaces come untied. Shortly thereafter, for some odd reason, you start believing your shoelaces are untied. Light leaves the Sun and strikes your shoelaces and bounces off; some photons enter the pupils of your eyes and strike your retina; the energy of the photons triggers neural impulses; the neural impulses are transmitted to the visual-processing areas of the brain; and there the optical information is processed and reconstructed into a 3D model that is recognized as an untied shoelace. There is a sequence of events, a chain of cause and effect, within the world and your brain, by which you end up believing what you believe. The final outcome of the process is a state of mind which mirrors the state of your actual shoelaces.
What is evidence? It is an event entangled, by links of cause and effect, with whatever you want to know about. If the target of your inquiry is your shoelaces, for example, then the light entering your pupils is evidence entangled with your shoelaces. This should not be confused with the technical sense of “entanglement” used in physics—here I’m just talking about “entanglement” in the sense of two things that end up in correlated states because of the links of cause and effect between them.
Not every influence creates the kind of “entanglement” required for evidence. It’s no help to have a machine that beeps when you enter winning lottery numbers, if the machine also beeps when you enter losing lottery numbers. The light reflected from your shoes would not be useful evidence about your shoelaces, if the photons ended up in the same physical state whether your shoelaces were tied or untied.
To say it abstractly: For an event to be evidence about a target of inquiry, it has to happen differently in a way that’s entangled with the different possible states of the target. (To say it technically: There has to be Shannon mutual information between the evidential event and the target of inquiry, relative to your current state of uncertainty about both of them.)
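For readers who want the technical sentence cashed out, here is a minimal sketch in Python with made-up numbers: a joint distribution over the state of the shoelaces and the state of the photons reaching your retina, and the Shannon mutual information between the two. The specific probabilities are illustrative assumptions, not anything the essay commits to.

```python
from math import log2

# Made-up joint distribution over (state of laces, state of photons).
p = {
    ("untied", "looks untied"): 0.45,
    ("untied", "looks tied"):   0.05,
    ("tied",   "looks untied"): 0.05,
    ("tied",   "looks tied"):   0.45,
}

def marginal(axis):
    m = {}
    for outcome, pr in p.items():
        m[outcome[axis]] = m.get(outcome[axis], 0.0) + pr
    return m

p_lace, p_photons = marginal(0), marginal(1)

# Shannon mutual information I(laces; photons), in bits.
mi = sum(pr * log2(pr / (p_lace[lace] * p_photons[photons]))
         for (lace, photons), pr in p.items() if pr > 0)
print(round(mi, 3))  # ~0.531 bits: the photons carry information about the laces

# If the photons ended up in the same state whether the laces were tied or
# untied (the beeping-machine case), the joint distribution would factor
# exactly into its marginals and the mutual information would be 0 bits.
```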
Entanglement can be contagious when processed correctly, which is why you need eyes and a brain. If photons reflect off your shoelaces and hit a rock, the rock won’t change much. The rock won’t reflect the shoelaces in any helpful way; it won’t be detectably different depending on whether your shoelaces were tied or untied. This is why rocks are not useful witnesses in court. A photographic film will contract shoelace-entanglement from the incoming photons, so that the photo can itself act as evidence. If your eyes and brain work correctly, you will become tangled up with your own shoelaces.
This is why rationalists put such a heavy premium on the paradoxical-seeming claim that a belief is only really worthwhile if you could, in principle, be persuaded to believe otherwise. If your retina ended up in the same state regardless of what light entered it, you would be blind. Some belief systems, in a rather obvious trick to reinforce themselves, say that certain beliefs are only really worthwhile if you believe them unconditionally—no matter what you see, no matter what you think. Your brain is supposed to end up in the same state regardless. Hence the phrase, “blind faith.” If what you believe doesn’t depend on what you see, you’ve been blinded as effectively as by poking out your eyeballs.
If your eyes and brain work correctly, your beliefs will end up entangled with the facts. Rational thought produces beliefs which are themselves evidence.
If your tongue speaks truly, your rational beliefs, which are themselves evidence, can act as evidence for someone else. Entanglement can be transmitted through chains of cause and effect—and if you speak, and another hears, that too is cause and effect. When you say “My shoelaces are untied” over a cellphone, you’re sharing your entanglement with your shoelaces with a friend.
Therefore rational beliefs are contagious, among honest folk who believe each other to be honest. And it’s why a claim that your beliefs are not contagious—that you believe for private reasons which are not transmissible—is so suspicious. If your beliefs are entangled with reality, they should be contagious among honest folk.
If your model of reality suggests that the outputs of your thought processes should not be contagious to others, then your model says that your beliefs are not themselves evidence, meaning they are not entangled with reality. You should apply a reflective correction, and stop believing.
Indeed, if you feel, on a gut level, what this all means, you will automatically stop believing. Because “my belief is not entangled with reality” means “my belief is not accurate.” As soon as you stop believing “ ‘snow is white’ is true,” you should (automatically!) stop believing “snow is white,” or something is very wrong.
So try to explain why the kind of thought processes you use systematically produce beliefs that mirror reality. Explain why you think you’re rational. Why you think that, using thought processes like the ones you use, minds will end up believing “snow is white” if and only if snow is white. If you don’t believe that the outputs of your thought processes are entangled with reality, why believe the outputs of your thought processes? It’s the same thing, or it should be.
Why not just say e is evidence for X if P(X) is not equal to P(X|e)?
Incidentally, I don’t really see the difference between probabilistic dependence (as above) and entanglement. Entanglement is dependence in the quantum setting.
“This should not be confused with the technical sense of “entanglement” used in physics—here I’m just talking about “entanglement” in the sense of two things that end up in correlated states because of the links of cause and effect between them.”
That’s literally in the third paragraph.
I think you mean: if P(x) < P(x|e), then e is evidence for x. That is a good definition of evidence, but it doesn’t function on the same level as Yudkowsky’s above. Yudkowsky is explaining not just what function evidence has in truth-finding; he is also explaining how evidence is built into a physical system, e.g., a camera, a human, or some other entanglement device. The Bayesian definition of evidence you gave tells us what evidence is, but it doesn’t tell us how evidence works, which Yudkowsky’s does.
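For what it’s worth, the two framings can be put side by side. The following is just a restatement of the standard Bayesian condition, with the equivalence holding whenever 0 < P(X) < 1 and P(e) > 0:

$$
e \text{ is evidence for } X
\;\iff\; P(X \mid e) > P(X)
\;\iff\; P(e \mid X) > P(e \mid \lnot X).
$$

The right-hand form is the lottery-machine intuition from the post: a detector with P(beep|win) = P(beep|lose) moves nothing.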
X: presence of flower A in a certain area. e: there are bees in that area. Then you would plausibly have P(X) < P(X|e), given that bees help with pollination. Should we then phrase it as “the probability of having flower A in an area is greater if we have bees, therefore e is evidence for X (bees are evidence for flower A)”? And what if X is “having presents brought by Santa Claus” and e is “we are in the USA instead of Cambodia” (which increases the probability of there being presents, because that date is more commonly celebrated with presents in the USA)?
Trivially, because P(X|e) could be less than P(X).
Note to self: do not post when tired, which leads to asking embarrassingly trivial questions.
Quantum wave amplitudes behave in some ways like probabilities and in other ways unlike probabilities. Because of this, some concepts have analogues, while others don’t.
But no concepts are exactly equivalent. For example, evidence isn’t integrally linked to complex numbers, while entanglement is.
Nonetheless, it is instructive (imho) to consider how (assigned) probability is a property of the observer, and not an inherent property of the system. If a qubit is (|0> + |1>)/sqrt(2), and I measure it and observe 0, then I’m entangled with it so relative to me it’s now |0>. But what’s really happened is that I became (|observed 0> + |observed 1>)/sqrt(2), or rather, that the whole system became (|0,observed 0> + |1,observed 1>)/sqrt(2). This is closely analogous to the Law of Conservation of Probability; if you take Expectations conditional on the observation, then take Expectation of the whole thing, you get the original expectation back. This is because observing the system doesn’t change the system, it just changes you. This is obvious in Bayesian probability in the classical-mechanics world; the only reason it doesn’t seem obvious in the quantum realm is that we’ve been told over and over that “observing a quantum system changes it”.
Quite honestly, I don’t see how a Bayesian can possibly be a Copenhagenist. Quantum probability is Bayesian probability, because quantum entanglement is just the territory updating itself on an observation, in the same way that Bayesian ‘evidence entanglement’ is updating one’s map on an observation.
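The “Law of Conservation of Probability” invoked above is, in the classical setting, just the law of total expectation applied to the posterior; as a restatement rather than a new claim:

$$
\mathbb{E}_e\!\left[\,P(H \mid e)\,\right] \;=\; \sum_e P(e)\,P(H \mid e) \;=\; P(H).
$$

Observing e moves you to one of the conditional distributions, but before you look, your expected posterior equals your prior.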
Classical probability preserves amplitude, quantum preserves |amplitude|^2.
They’re different things, and they could, potentially, be even more different.
Um, but isn’t that just a convention? Why should we treat the “amplitude” of a classical probability as being the probability?
Does the problem have something to do with the extra directionality quantum probabilities have by virtue of the amplitude being in C? (so that |0> and (-1*|0>) can cancel each other out)
Classical probability transformations preserve amplitude and quantum ones preserve |amplitude|^2. That’s not a whole reason, but it’s part of one.
Yes, that’s part of the difference. Quantum transformations are linear in a two-dimensional wave amplitude but preserve a 1-dimensional |amplitude|^2. Classical transformations are linear in one-dimensional probability and preserve 1-dimensional probability.
Ah, I get it now, thanks!
(Copenhagen is still wrong though ;)
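A minimal numerical sketch of the distinction in this subthread, using two illustrative matrices that are not drawn from any particular physical system: a stochastic matrix preserves the sum of the probabilities it acts on, while a unitary matrix preserves the sum of squared amplitude magnitudes.

```python
import numpy as np

# Classical: a column-stochastic matrix acts linearly on a probability vector
# and preserves the sum of the probabilities.
S = np.array([[0.9, 0.2],
              [0.1, 0.8]])
p = np.array([0.3, 0.7])
print(S @ p, (S @ p).sum())  # e.g. [0.41 0.59], sum stays 1.0

# Quantum: a unitary matrix acts linearly on complex amplitudes and preserves
# the sum of |amplitude|^2, not the sum of the amplitudes themselves.
U = (1 / np.sqrt(2)) * np.array([[1, 1],
                                 [1, -1]], dtype=complex)  # Hadamard gate
psi = np.array([1, 0], dtype=complex)
out = U @ psi
print(np.abs(out) ** 2, (np.abs(out) ** 2).sum())  # [0.5 0.5], norm-squared stays 1.0
```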
That definition does not always coincide with what is described in the article; something can be evidence even if P(X|e) = P(X).
Imagine that two cards from a shuffled deck are placed face-down on a table, one on the left and one on the right. Omega has promised to put a monument on the moon iff they are the same color.
Omega looks at the left card, and then the right, and then disappears in a puff of smoke.
What he does when he’s out of sight is entangled with the identity of the card on the right. Change the card to one of a different color and, all else being equal, Omega’s action changes.
But, if you flip over the card on the right and see that it’s red, that doesn’t change the degree to which you expect to see the monument when you look through your telescope. P(monument|right card is red) = P(monument) = 25⁄51
It does change your conditional beliefs, though, such as what the world would be like if the left card turned out to also be red: P(monument|left is red & right is red) > P(monument|left is red)
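If it helps, those numbers can be checked by brute enumeration; the sketch below just counts ordered pairs of cards (the helper names are made up for illustration):

```python
from fractions import Fraction
from itertools import permutations

# Cards 0..25 are red, 26..51 are black; (left, right) are ordered draws.
pairs = list(permutations(range(52), 2))

def red(card):
    return card < 26

def prob(event, given=lambda l, r: True):
    kept = [(l, r) for l, r in pairs if given(l, r)]
    hits = [(l, r) for l, r in kept if event(l, r)]
    return Fraction(len(hits), len(kept))

monument = lambda l, r: red(l) == red(r)  # Omega builds the monument iff same color

print(prob(monument))                                  # 25/51
print(prob(monument, lambda l, r: red(r)))             # 25/51: right card alone shifts nothing
print(prob(monument, lambda l, r: red(l)))             # 25/51
print(prob(monument, lambda l, r: red(l) and red(r)))  # 1: jointly, the colors matter
```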
Of course e can be evidence even if P(X|e)=P(X) -- it just cannot be evidence for X. It can be evidence for Y if P(Y|e)>P(Y), and this is exactly the case you describe. If Y is “there is a monument and left is red or there is no monument and left is black”, then e is (infinite, if Omega is truthful with probability 1) evidence for Y, even though it is 0 evidence for X.
Similarly, you seeing that your own shoelace is untied is zero evidence about my shoelaces…
Hi Eliezer,
I like the word entanglement, because it’s a messy concept. Reality, whatever else it might be, is messy. That’s why statements like the preceding sentence can’t ever be completely true. The messiness makes it hard to talk about anything real in any absolutely definitive sort of way.
I can be definitive about artificial constructs in an artificial world, yes. Hence, mathematics. But when you or I try to capture the real world with that comforting clarity, we are doomed. Well, mostly doomed. 85.27% doomed, plus or minus an unknown set of unknowns.
That’s the problem I have with your otherwise (as usual) thought provoking post: YES, our perceptions are entangled with the state of the world and that often influences our beliefs which then may entangle our utterances and therefore eventually entangle other people’s beliefs. BUT what is the nature of that entanglement? You can’t know for sure. What specifically are the beliefs that you intend to refer to? You can’t know for sure.
The factor I expected to see in your essay, but did not, is interpretation based on mental models. There are many models I might have in my mind that could influence what counts as evidence.
You wrote: “For an event to be evidence about a target of inquiry, it has to happen differently in a way that’s entangled with the different possible states of the target.”
If we put the missing material about interpretation in there this might read:
“For me to consider an event to be evidence about a target of inquiry, I must first possess or construct a model of that event and that target, and also a model of the world that contains and relates the event and target to each other. Then, for the event to be evidence CORROBORATING a particular theory about the target, I must imagine plausible alternative events that would CONTRADICT that theory.”
Unfortunately, our models can be wrong, and are often wrong in interesting ways. So, we can satisfy your version of the statement, or my version, and still be counting as evidence things that may be no evidence at all. Example: “I was about to go for a car ride and a black cat crossed my path, which I interpret as a portent of evil, so I went back into my house. The black cat was evidence of evil in that particular situation because a black cat crossing my path is a rare event; it is possible for the cat not to have crossed my path; and in my culture, which is the collective product of successful experience staying alive and procreating, it is considered a portent of evil for a black cat to cross one’s path. Had a black cat not crossed my path, I would consider that evidence (weak evidence) that I was not about to experience misfortune.”
Seems to me that you can in principle rationally believe (1) that your beliefs are entangled with reality but (2) that you don’t have any more effective way of persuading others than to say “see, I believe this”. Specifically, imagine that every now and then you find yourself acquiring a belief in a particular, weird, internal way (say, you have the strong impression that God speaks to you, accompanied by a mysterious smell of apricots), and that several times this has happened and you’ve checked out the belief and it’s turned out to be true. (And you’ve never checked it and found it to be false, and the instances you checked were surprising, etc.)
I think you’d be entitled, in this situation, to believe that your weirdly acquired beliefs are entangled with reality; but I can’t see any way you could be very convincing to someone who didn’t know the history (barring further such episodes in the future, of which there is no guarantee); and even in the best-possible case where whenever this thing happens to you you immediately tell someone else of the belief you’ve acquired and get them to check it, it could be very difficult for them to rule out hoaxing well enough to make them trust you.
Now, the standard case of incommunicably grounded beliefs—which I suspect Eliezer had in mind here—is of some sorts of religious belief; and they share at least some features with my semi-silly example. They generally lack the really important one (namely, repeated testing), and that’s a big strike against them; but the big strike is the poor quality of the evidence, not its incommunicability as such.
So yes, incommunicability is suspicious, and a warning sign, but I think Eliezer goes too far when he says that a model that says your beliefs aren’t evidence for others is ipso facto saying that you don’t yourself have reason to believe. Unless he really means literally absolutely no evidence at all for others, but I don’t think anyone really believes that.
You can tell them that your impressions have previously always been correct and surprising. To the extent that they trust you, the evidence will be just as good for them as it was for you.
The extent to which they trust you may not be very great, especially given that what you’re telling them is that sometimes God speaks to you with an aura of apricots and reveals surprising but mundane truths. In any case, telling them this doesn’t make your evidence any less incommunicable, except in so far as it makes all evidence communicable.
(Note: old “g” = newer “gjm”.)
In this case, they’ll trust you less than if you told them that your shoelaces were untied, but it’s not fundamentally different. Your shoelaces being untied is only communicable in the sense that you can tell someone, unless you count telling them to look at your shoes, but that doesn’t seem to be what this is talking about.
Unless I misunderstood Eliezer, he seemed to be saying that all evidence is communicable in exactly this way.
I don’t know if it is just semantics, but it seems to me that you are conflating evidence and our perception of that evidence, given that you define evidence as an event entangled, by links of cause and effect, with whatever you want to know about.
Take the following thought experiment. Suppose Alan has untied shoelaces that he can see. Suppose also that Alan’s shoelaces produce a barely audible sound when they are untied, and suppose that Barbara can and does hear this sound, while Alan can’t and doesn’t.
Now if I interpret you correctly, your definition of evidence amounts to saying that Barbara and Alan have different evidence with regard to Alan’s untied shoelaces. However, it seems more intuitive to say that there is a single state of things, Alan’s untied shoelaces, which constitutes the only evidence, and that this evidence is perceived differently by Barbara and Alan.
You also think that evidence is a type of event—of course, this would be true if evidence really were someone’s perception of some state of affairs that led them to form true beliefs. But I believe that there are many types of evidence that simply are not events. What about mathematical evidence for some belief? Gödel’s incompleteness theorem is conclusive evidence for the fact that you can’t derive all the true theorems of mathematics from a formal system. (Please don’t boil me too much if I am like not totally correct.) Nevertheless, that theorem is not an event in time—it doesn’t cause anything. Metaphorically, we might say a certain mathematical theorem might “cause” another one—or one theorem might be the immediate “consequence” of the other—but mathematical entailment relations are different from natural causation and all this talk is just metaphorical.
Lastly, you write that beliefs which are supposed to be held no matter what you see amount to “blind faith.”
However, I can think of some instances in which perhaps “blind faith” is warranted. For instance, I can not conceive of a situation that would make 2+2 = 4 false. Perhaps for that reason, my belief in 2+2=4 is unconditional.
Yes, it is conditional. For example, if you had put two stones next to another two, then counted and found that there were five stones in total, that would be proof that 2+2 does not equal 4. This is how your belief that “2+2=4” could be falsified.
I know this is Eliezer’s line, but it still looks like nonsense to me. This experience would be evidence that stones have a tendency to spontaneously appear when four stones are put next to each other.
I have a simpler reason why the belief that 2+2 = 4 is not blind. When he says he has blind faith because “I can not conceive of a situation that would make 2+2 = 4 false,” it is not blind, because he is trying to find an alternative rather than entirely avoiding questioning his belief.
Joke counterargument:
Two cups (of sugar) + two cups (of water) = 2 cups (of sugar water)
Therefore, 2 + 2 = 2. ;)
To be very anal and nit-picky with your joke (cuz I feel like it):
You’re mixing equal volumes with inconsistent densities (and thus masses) and trying to compute a final volume. Either way you’d get more than 2 cups.
Back on topic:
I have a very simple definition of evidence.
Anything that modifies my mental probabilities about certain beliefs I hold to be true or false is considered evidence by me.
Whether the evidence is weak, strong, or even reliable in the first place is irrelevant if we’re trying to define what evidence is.
I disagree with evidence being an event. It is rather an attribute; the event is the observation of the evidence. The event (the observation: hearing, seeing, smelling, whatever) is only useful for determining whether the evidence (the attribute) is reliable (true).
The evidence itself does not change. It is a static thing. If you see different evidence next time, that’s different evidence (a different static thing).
I DO agree with the entanglement, though. Evidence is entangled with both your map and (hopefully) the territory; after all, the whole point of evidence is to modify your map to better fit the territory. The nature of its entanglement is simple, though: as stated above, it simply shifts your probabilities (confidences in beliefs).
First time poster, noob in rationality so have some mercy folks ;)
Is there any decent literature on the extent to which the fact of my knowing that my shoelaces are untied is a real property of the universe? Clearly it has measurable consequences: it will result in a predictable action taking place with high probability. Saying “I predict that someone will tie his shoelaces when he sees they’re undone” is based not on the shoelaces being untied, nor on the photons bouncing, but on this abstract concept of them knowing. Is there a mathematical basis for stating that the universe has measurably changed in a nonrandom way once those photons’ effects are analysed? I’d love to read more on this.
Also (closely related question), I know that overall entropy would increase in the whole system, but does this entanglement represent a small local increase in order?
“belief is only really worthwhile if you could, in principle, be persuaded to believe otherwise”
Such a great way to put it! I wish I had read this page a few years ago, when I was arguing with my dad about religion. I wasn’t able to put this thought coherently, though in retrospect I believe I was trying to get there. I ended up posing a hypothetical in which advanced aliens visit and tell him that his beliefs are wrong, and explain why. He disappointed me with his answer: that he would like to believe he is strong enough in his faith to ignore the aliens. This is when I realized it would be fruitless to attempt to persuade him away from religion.
“If you don’t believe that the outputs of your thought processes are entangled with reality, why do you believe the outputs of your thought processes? ”
I don’t. Well not like Believe. Some few of them I will give 40 or even 60 deciBels.
But I’m clear that my brain lies to me. Even my visual processor lies. (Have you ever been looking for your keys, looked right at them, and gone on looking?)
I hold my beliefs loosely. I’m coachable. Maybe even gullible. You can get me to believe some untruth, but I’ll let go of that easily when evidence appears.
I think I would nominate this as the most important post on LessWrong. I keep referring people to it.
Great article, and it helps me explain to my friend that my faith is not, in fact, blind.
One problem: communication via the spirit from God to an individual is an epiphenomenon. So it can’t be proven externally? That’s one instance of a rational belief that isn’t contagious, though I suppose that’s why there are people who doubt the existence of epiphenomena altogether.
Communication is a physical process. Unless you can put forward a coherent, testable model for non-physical communication, then talking about communication from a non-physical entity to a physical entity has no semantic meaning. If no experiment can be performed to distinguish two hypotheses (e.g. that there is or is not such a thing as an epiphenomenon) then that thing is irrelevant given that human minds are purely physical objects and human thought, as far as all evidence is concerned, obeys our best models of computation (Church-Turing thesis, etc.).
Epiphenomenal hypotheses are still required to pass Occam’s razor. If there is a simpler explanation (e.g. purely physical) that accounts for the evidence, then intellectual integrity demands you take that view. Positing epiphenomena is no different than positing unicorns, unless you have quantifiable evidence for the phenomena and hence they would not be epiphenomenal.
Wikipedia describes an epiphenomenon as a secondary phenomenon that occurs alongside a primary phenomenon but has no effects of its own.
I’m not sure what to call non-physical things changing the physical world, but it seems the communication you describe, if possible, would be non-physical to physical, right?
You are, of course, correct. :3 I used the wrong term.
Is that because there isn’t a right term? I don’t know it if there is.
Perhaps there ought to be. Let’s invent one!
Great article, I have only this one comment:
“If your beliefs are entangled with reality, they should be contagious among honest folk.”
Haven’t true and false beliefs both proven to be contagious among honest folk? Just as we should not use a machine that beeps for all numbers as evidence for winning lottery numbers, we should not use whether or not a belief is contagious as evidence of its truth.
I don’t think that Eliezer suggested using a belief’s contagiousness as strong evidence of its truth. Rather, a belief’s lack of contagiousness is strong evidence of its untruth.
It depends on how likely the respective explanations are.
I think it depends on that, and only that, and should be completely disconnected from any social criteria such as “being contagious.”
Also, Eliezer writes, “If your model of reality suggests that the outputs of your thought processes should not be contagious to others, then your model says that your beliefs are not themselves evidence, meaning they are not entangled with reality.”
This seems false. Should LW thinkers take it as a problem that our methods are usually completely lost on, for example, fundamentalist Scientologists? In fact, I don’t think it’s a stretch to claim that most people do not subscribe to LW methods; does that suggest a problem with LW methods? Do LW methods fail the test of being contagious and therefore fail the test of being reliable methods for acquiring evidence?
I think this should be more like “then your model offers weak evidence that your beliefs are not themselves evidence”.
If you’re Galileo and find yourself incapable of convincing the church about heliocentrism, this doesn’t mean you’re wrong.
Edit: g addresses this nicely.
“Should LW thinkers take it as a problem...”
Yes to all of that. There are many problems with LW methods and beliefs, and those problems impede other people from seeing the parts that are right.
Scientologists believe that any method that wasn’t invented or used by Ron Hubbard is bad. As such, they are not open to evaluating a method on its merits, and failure to convince them isn’t a sign that a method for acquiring evidence has failed.
Sure. Scientologists are not close to being the only ones who disagree with LW’s mistakes.
I think “should” here means “justified,” not necessarily “likely.”
Your (rational) beliefs should be considered evidence by the irrational, even though they likely won’t be.
No, correct beliefs should only be contagious among honest folk who believe each other to be rational and honest. If I make the claim that the FSM is dictating these words to me, you would probably think me a lunatic or a liar. But if I truly can correctly recognize when I have been Touched by His Noodly Appendage, then my beliefs are entangled with reality but, understandably, not contagious. Furthermore, it would be perfectly rational for me to believe this revelation and at the same time not to consider it evidence for others. The point is that some beliefs, certainly the more extraordinary of them, should not be contagious, except through evidence as raw and unprocessed as possible.
Also, entanglement is necessary but not sufficient for correct beliefs. The fact that my beliefs contain information about the world is not enough for them to be correct. For example, if I misread the photon pattern, I could think that my shoelaces are tied when they are not, and untied when they are tied. This still has the same amount of entanglement, the same amount of information, yet the beliefs are incorrect.
I’m not sure that this terminology about entanglement and so forth actually helps understanding. Reading this post is unlikely to cause me to win more bets (make better predictions).
I’m a newcomer working through the sequences for the first time, so I apologize if this has been more fully discussed or explained elsewhere, but I’ve hit a sticking point here. I was in agreement up until the paragraph claiming that rational beliefs are contagious among honest folk, and that a claim that one’s beliefs are not transmissible is suspicious.
This works very well for claims like ‘snow is white’ but not so well for abstract concepts. In order for the evidence-based belief to transmit well, the listener must have definitions of ‘snow’ and ‘white’ that are compatible enough with the speaker’s definitions for the belief to fit logically into their frame of reference—their map of the territory, if you will. Take out ‘snow’ and ‘white’ and plug in some more abstract concepts there and you’ll see how quickly divergence can occur.
Two people may observe the same objective evidence and use it to reach different conclusions because their frames of reference, definitions, and prior understandings differ. Therefore, the section above doesn’t seem to hold true for any beliefs bar the most simplistic and concrete.
That is, of course, unless the operative word in the quoted paragraph is claim, since anyone who outright states their beliefs are intransmissible is probably engaging in self-deception at one level or another. That seems something of an overly literal interpretation of the piece, though. Am I missing something?
It’s definitely harder to reconcile two sets of conflicting beliefs when you’re dealing with abstractions—maybe even intractable in some cases—but I don’t think it’s impossible in principle. In order for an abstraction to be meaningful, it has to say something about the sensory world; that is, it has to be part of a network of beliefs grounded in sensory evidence. That has straightforward consequences when you’re dealing with physical evidence for an abstraction; when dealing with abstract evidence, though, you need to reconstruct what that evidence means in terms of experience in order to fit it into a new set of conceptual priors. We do similar things all the time, although we might not realize we’re doing them: knowing that several languages conflate parts of the color space that English describes with “green” and “blue”, for example, might help you deal with a machine translation saying that grass is blue.
This only becomes problematic when dealing with conceptually isolated abstractions. Smells are a good example: it’s next to impossible to describe a scent well enough for it to then be recognizable without prior experience of it. Similarly, descriptions of high-level meditation often include experiences which aren’t easily transmissible to non-practitioners—not because of some ill-defined privileges attached to personal gnosis, but because they’re grounded in very unusual mental states.
Thank you for your reply! It’s certainly helped to clarify the matter. I wonder now if a language used in a hypothetical culture where people placed a much higher value on sense of smell or meditative states might have a far broader and more detailed vocabulary to describe them, resolving the problems with reconstructing the evidence. It’s almost Sapir-Whorf—regardless of whether or not language influences thought, it certainly influences the transmission of thought.
I think on reflection that most of my other objections relate to cases where the evidence isn’t in dispute but the conclusions drawn from it are (see: much of politics!). Those could, in principle, be resolved with a proper discussion of priors and a focus on the actual objective evidence, as opposed to simply the parts of it that fit with one’s chosen argument. That people in most cases don’t (and don’t want to) reconcile the beliefs and view the situation as more complex than ‘cheering for the right team’ is a fault in their thinking, not in the principle itself.
Um… “There has to be Shannon mutual information between the evidential event and the target of inquiry”?
So, cause-and-effect chains would be pretty useful, I would think. A you-must-think-through-every-step kind of problem solver would benefit greatly, for example.
If aliens with no concept of human math landed on Earth, then “2+2” would only equal three separate images, i.e. “3”.
If your eyes and brain work correctly, your beliefs will end up entangled with the facts.
I remember spending hours agonizing over this idea. How do I know if my eyes and brain are working correctly? Any thought process that might lead me to a conclusion would be taking place in my brain. The same brain that I want to prove works correctly. The best I could come up with was that if my brain works correctly, I stand to gain by operating under the assumption that it does, and I stand to lose by operating under the assumption that it doesn’t. If my brain does not work correctly, then I have no basis for any conclusion so it makes no difference what my operating assumptions are.
I don’t get this inference. It seems like the belief itself is the evidence, and you entangle your friend with the object of your belief just by telling them your belief, regardless of whether you can explain the reasons? (Private beliefs seem to me suspicious on other grounds.)
If your friend trusts that you arrived at your belief through rational means, you are correct. But often when someone can’t give a reason, it’s because there is no good reason. Hence “suspicious”.
I struggle with comprehending this sentence:
To say it abstractly: For an event to be evidence about a target of inquiry, it has to happen differently in a way that’s entangled with the different possible states of the target.
That means if I want to show evidence that water changes its solid form by melting (target of inquiry), there must be evidence that I can freeze water (different possible state)? And on top of that, there must be evidence that gas can condense to a fluid and the fluid can evaporate into gas?
Is my rewritten interpretation correct?
I’m very sorry I have a hard time wrapping my head around this concept.
It seems to me you are confused by an overlap in the meaning of the word “state”.
In this context, it is the “state of the target of inquiry”: water either changes its solid form by melting or it does not. So “state” refers to the difference between “yes, water changes its solid form by melting” and “no, water does not change its solid form by melting”. Those are your two possible states, and the fact that water itself has an unrelated set of states (solid/liquid/gas) to be in is just a coincidence.
Thank you, your explanation of state made it easier for me to understand the meaning.
Your example still seems confused to me. Maybe try something simpler, like “Will it rain tomorrow?”, because you want to pack for a trip. There are lots of things you can check to figure out whether rain is likely. For example, if it’s cloudy now, that probably has some bearing on whether it will rain. You can look up past weather records for your region. More recently, we have detailed models informing forecasts that you can access through the internet to inform you about the weather tomorrow. All of these are evidence.
There are also lots of observations you can make that are, for all you know, uncorrelated with whether it will rain tomorrow, like the outcome of a die you throw. These do not constitute evidence toward your question, or at least not very informative evidence.
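To make that concrete, here is a tiny Bayes-rule sketch; the prior and likelihood numbers are invented for illustration, not real weather statistics:

```python
# Illustrative numbers only: the prior and likelihoods below are made up.
p_rain = 0.30              # prior: P(rain tomorrow)
p_cloudy_if_rain = 0.80    # assumed: P(cloudy now | rain tomorrow)
p_cloudy_if_dry = 0.40     # assumed: P(cloudy now | no rain tomorrow)

p_cloudy = p_cloudy_if_rain * p_rain + p_cloudy_if_dry * (1 - p_rain)
p_rain_given_cloudy = p_cloudy_if_rain * p_rain / p_cloudy
print(round(p_rain_given_cloudy, 3))  # ~0.462: the clouds are (modest) evidence for rain

# A fair die is independent of tomorrow's weather, so conditioning on its
# outcome leaves the probability where it was: P(rain | die shows 6) = 0.30.
```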
Thank you for your reply; it helped me a lot.
Popper, Yudkowsky, and virtually all scientists, certainly all who endorse the doctrine of physicalism, particularly the false metaphysical assumption of physical closure, fall into the merciless jaws of fatal logical contradiction: reification and infinite regress.
What they all miss is natural a-priori, specifically, natural a-priori axioms.
If you read Yudkowsky’s statement carefully, I believe you will notice several errors, at least once I point them out for you.
See my (Al Link) Substack posts for an expanded discussion:
Less Wrong platform and author Yudkowsky, on Rationality and Justified Knowledge Certainty
https://allink.substack.com/p/justified-knowledge-certainty-and-f4b
Natural a-priori Axioms
https://allink.substack.com/p/justified-knowledge-certainty-and
There’s a point to be made here about why “unconditional love” is unsatisfying to the extent that the description “unconditional” is accurate.