Your Strength as a Rationalist
The following happened to me in an IRC chatroom, long enough ago that I was still hanging around in IRC chatrooms. Time has fuzzed the memory and my report may be imprecise.
So there I was, in an IRC chatroom, when someone reports that a friend of his needs medical advice. His friend says that he’s been having sudden chest pains, so he called an ambulance, and the ambulance showed up, but the paramedics told him it was nothing, and left, and now the chest pains are getting worse. What should his friend do?
I was confused by this story. I remembered reading about homeless people in New York who would call ambulances just to be taken someplace warm, and how the paramedics always had to take them to the emergency room, even on the 27th iteration. Because if they didn’t, the ambulance company could be sued for lots and lots of money. Likewise, emergency rooms are legally obligated to treat anyone, regardless of ability to pay.1 So I didn’t quite understand how the described events could have happened. Anyone reporting sudden chest pains should have been hauled off by an ambulance instantly.
And this is where I fell down as a rationalist. I remembered several occasions where my doctor would completely fail to panic at the report of symptoms that seemed, to me, very alarming. And the Medical Establishment was always right. Every single time. I had chest pains myself, at one point, and the doctor patiently explained to me that I was describing chest muscle pain, not a heart attack. So I said into the IRC channel, “Well, if the paramedics told your friend it was nothing, it must really be nothing—they’d have hauled him off if there was the tiniest chance of serious trouble.”
Thus I managed to explain the story within my existing model, though the fit still felt a little forced . . .
Later on, the fellow comes back into the IRC chatroom and says his friend made the whole thing up. Evidently this was not one of his more reliable friends.
I should have realized, perhaps, that an unknown acquaintance of an acquaintance in an IRC channel might be less reliable than a published journal article. Alas, belief is easier than disbelief; we believe instinctively, but disbelief requires a conscious effort.2
So instead, by dint of mighty straining, I forced my model of reality to explain an anomaly that never actually happened. And I knew how embarrassing this was. I knew that the usefulness of a model is not what it can explain, but what it can’t. A hypothesis that forbids nothing, permits everything, and thereby fails to constrain anticipation.
Your strength as a rationalist is your ability to be more confused by fiction than by reality. If you are equally good at explaining any outcome, you have zero knowledge.
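One standard way to cash out that last sentence in probabilities, offered here only as a rough sketch: a model’s probability mass over possible outcomes has to sum to one, so a model that is “equally good at explaining” each of N mutually exclusive, exhaustive outcomes can assign each of them no more than

$$P(D_i \mid H) = \frac{1}{N}, \qquad \text{since} \quad \sum_{i=1}^{N} P(D_i \mid H) = 1.$$

That is exactly the prediction of total ignorance: whatever actually happens, such a model scores no better than a uniform guess.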
We are all weak, from time to time; the sad part is that I could have been stronger. I had all the information I needed to arrive at the correct answer, I even noticed the problem, and then I ignored it. My feeling of confusion was a Clue, and I threw my Clue away.
I should have paid more attention to that sensation of still feels a little forced. It’s one of the most important feelings a truthseeker can have, a part of your strength as a rationalist. It is a design flaw in human cognition that this sensation manifests as a quiet strain in the back of your mind, instead of a wailing alarm siren and a glowing neon sign reading:
Either Your Model Is False Or This Story Is Wrong.
1 And the hospital absorbs the costs, which are enormous, so hospitals are closing their emergency rooms . . . It makes you wonder what’s the point of having economists if we’re just going to ignore them.
2 From McCluskey (2007), “Truth Bias”: “[P]eople are more likely to correctly judge that a truthful statement is true than that a lie is false. This appears to be a fairly robust result that is not just a function of truth being the correct guess where the evidence is weak—it shows up in controlled experiments where subjects have good reason not to assume truth[.]” http://www.overcomingbias.com/2007/08/truth-bias.html .
And from Gilbert et al. (1993), “You Can’t Not Believe Everything You Read”: “Can people comprehend assertions without believing them? [...] Three experiments support the hypothesis that comprehension includes an initial belief in the information comprehended.”
It’s strange to hear a rationalist saying that he should have listened to his instincts. A true rationalist should be able to examine all the evidence without having to rely on feelings to make a judgment, or else would truly understand the source of his feelings, in which case it’s more than just a feeling. The unfortunate thing is that people are more likely to remember the cases when they ignored a feeling that turned out to be correct than all the times when the feeling was simply wrong.
The “quiet strain in the back of your mind” is what drives some people to always expect the worst to happen, and every so often they are right which reinforces their confidence in their intuitions more than their confidence diminishes each time they are wrong.
In some cases, it might be possible for someone to have a rational response to a stimulus only to think that it is intuition because they don’t quite understand or aren’t able to fully rationalize the source of the feeling. From my own experiences, it seems that some people don’t make a hard enough effort to search for the source… they either don’t seem to think that there is a rational source, or don’t care to take the effort.… as long as they are able to ascertain what their feelings suggest they do, they really don’t seem to care whether or not the source is rational or irrational.
A true rationalist would be able to determine the source and rationality of the feeling. The interesting question is: if he fails to rationally explain the feeling, should he ignore it, chalking it up to his own imperfection as a rationalist?
Since we are all human and cannot be perfectly rational, shouldn’t a rationalist decide that a seemingly irrational feeling is just that, irrational? Is it not more rational to believe that a seemingly irrational feeling is the result of our own imperfection as humans?
A rationalist should acknowledge their irrationality; to do otherwise would be irrational.
Sorry, since when does “quiet strain in the back of your mind” automatically translate to “irrational”? This particular quiet voice is usually _right_; surely that makes it rational?
To my mind, this question relates to the accuracy of intuitions and the problems that arise when relying on them.
In the original post, my take is that the “quiet strain in the back of your mind” refers to the observation that people whose opinion you value in a chatroom are discarding your opinion, which is based on a single piece of “anec-data”; something which a rationalist in his right mind, taking a step back on a good day, would automatically discard as the sole model through which reality ought to be interpreted.
While this answers your question, my broader take is that untrained intuition is just a mashup feeling of what feels right or wrong according to a situation, and feelings are not to be confounded with reality. Unless… in the rare occurrence that these feelings have been thoroughly trained to be right, and by that I mean conditioning of the mind through repetition to the point that, for example, a veteran mathematician would “feel” or “intuit” something is wrong with a mathematical proof with just a glance and without going through the details.
Yet there ought to be human limits on relying on such trained intuition and feeling; thus, by default, relying on them must be a last resort or a matter of physical survival (which, to me, is what intuition is best used for), rather than something extrapolated into a proxy for rationality.
Anon, see Why Truth?:
When people think of “emotion” and “rationality” as opposed, I suspect that they are really thinking of System 1 and System 2 - fast perceptual judgments versus slow deliberative judgments. Deliberative judgments aren’t always true, and perceptual judgments aren’t always false; so it is very important to distinguish that dichotomy from “rationality”. Both systems can serve the goal of truth, or defeat it, according to how they are used.
“I should have paid more attention to that sensation of still feels a little forced.”
The force that you would have had to counter was the impetus to be polite. In order to boldly follow your models, you would have had to tell the person on the other end of the chat that you didn’t believe his friend. You could have less boldly held your tongue, but that wouldn’t have satisfied your drive to understand what was going on. Perhaps a compromise action would have been to point out the unlikelihood, (which you did: “they’d have hauled him off if there was the tiniest chance of serious trouble”), and ask for a report on the eventual outcome.
Given the constraints of politeness, I don’t know how you can do better. If you were talking to people who knew you better, and understood your viewpoint on rationality, you might expect to be forgiven for giving your bald assessment of the unlikeliness of the report.
Not necessarily.
You can assume the paramedics did not follow the proper procedure, and that his friend ought to go to the emergency room himself to verify that he is OK. People do make mistakes.
The paramedics are potentially unreliable as well, though given the litigious nature of our society I would fully expect the paramedics to be extremely reliable in taking people to the emergency room, which would still cast doubt on the friend.
Still, if you want to be polite, just say “if you are concerned, you should go to the emergency room anyway” and keep your doubts about the man’s veracity to yourself. No doubt the truth would have come out at that point as well.
I saw someone on FB reposting this post today.
Makes an interesting point about not doubting your own models in certain circumstances I guess, but the original post leaves out relevant issues of trust and pragmatism.
Sure people probably gullibly believe untrue stories more often than they should, but biases also often cause us to discount anecdotes that are actually representative of real, lived experiences (such as the subtle experiences of those who suffer from racism and sexism). - http://ntrsctn.com/science-tech/2015/12/tech-guys-allies/
Just because a bug is unusual or difficult to locally replicate/experience doesn’t mean you should discount the bug reports.
Also (obviously), faith in even medical experts/institutions shouldn’t be absolute.
Finally there’s nothing wrong with offering someone good advice even if you think they may have lied to you/are trolling… there’s still a chance they were not trolling, and arming them with good information might be good for them in the short term or long term.
That article is written as though “are you sure that was sexism” literally means “you had better prove it is sexism with 100% certainty, or I won’t believe you”.
That is not what it means. It’s not a demand for 100% certainty, it’s a demand for better evidence. You don’t have to be treating the world like a computer in order to think that you should try to rule out innocent explanations before proclaiming someone guilty.
Also, while the author claims that the standard he quotes makes it impossible to prove sexism, his own standard has the opposite problem: according to it, it’s impossible to prove anyone innocent of sexism. People don’t favor uncertainty over assumption because they’re computer geeks; people favor uncertainty over assumption because there are such things as false positives, and they have enough of a cost that avoiding them is worthwhile.
Reminds me of a family dinner where the topic of the credit union my grandparents had started came up.
According to my grandmother, the state auditor was a horribly sexist fellow. He came and audited their books every single month, telling everyone who would listen that it was because he “didn’t think a woman could be a successful credit union manager.”
This, of course, got my new-agey aunts and cousins all up-in-arms about how horrible it was that that kind of sexism was allowed back in the 60s and 70s. They really wanted to make sure everyone knew they didn’t approve, so the conversation dragged on and on...
And about the time everyone was all thoroughly riled up and angry from the stories of the mean, vindictive things this auditor had done because the credit union was run by a woman, my grandfather decided to get in on the ruckus and told his story about the auditor...
Seems like the very first time the auditor had come through, the auditor spent several hours going over the books and couldn’t make it all balance correctly. He was all-fired sure this brand new credit union was up to something shady. Finally, my grandfather (who was the credit union accountant) leaned over his shoulder and pointed out the rookie math mistake the auditor had been making… repeatedly… until an hour past closing time and “could we please go home now?”
The auditor was horribly embarrassed, and stormed out in a huff. And then proceeded to come back every single month for over twenty years trying to catch them in a mistake somewhere.
I don’t know if my cousins learned anything from that story. My grandfather’s a quiet fellow. They might not even have heard his side of it. But I sure did. See, in the 60s and 70s, the auditor coming out and saying, “I’m harassing you because you humiliated me and I want revenge” would have been totally unacceptable and likely would have gotten him dismissed. But saying it was because he didn’t trust a female manager? That was a lie, but it was a socially acceptable reason for doing what he wanted to do for personal reasons anyway.
Makes me wonder just how much historic racism and sexism was simply people looking for a socially acceptable excuse to be jerks. And since I don’t think people’s overall level of desire to be spiteful has changed much, I wonder what the excuses are today now that the “traditional” ones are no longer acceptable.
In its strongest form, not believing System 1 amounts to not believing perceptions, hence not believing in empiricism. This is possibly the oldest of philosophical mistakes, made by Plato, possibly Siddhartha, and probably others even earlier.
There is always the empirical observation of prior situations that really didn’t match the appropriate System 1 response. To always believe that System 1 is infallible is perhaps contradictory of the system itself.
Sounds like good old cognitive dissonance. Your mental model was not matching the information being presented.
That feeling of cognitive dissonance is a piece of information to be considered in arriving at your decision. If something doesn’t feel right, usually either the model or the facts are wrong or incomplete.
“And this is where I fell down as a rationalist. I remembered several occasions where my doctor would completely fail to panic at the report of symptoms that seemed, to me, very alarming. And the Medical Establishment was always right. Every single time. I had chest pains myself, at one point, and the doctor patiently explained to me that I was describing chest muscle pain, not a heart attack. So I said into the IRC channel, ‘Well, if the paramedics told your friend it was nothing, it must really be nothing—they’d have hauled him off if there was the tiniest chance of serious trouble.’”
My own “hold on a second” detector is pinging mildly at that particular bit. Specifically, isn’t there a touch of an observer selection effect there? If the docs had been wrong and you ended up dying as a result, you wouldn’t have been around to make that deduction, so you’re (Well, anyone is) effectively biased to retroactively observe outcomes in which if the doctor did say you’re not in a life threatening situation, you’re genuinely not?
Or am I way off here?
I seem to recall a link on another page entitled “hindsight bias”.
Huh? What about hindsight bias?
If you read his other posts, I think you’ll find he wasn’t offering any sort of constructive contribution. He was probably laboring under some confusion, if not outright trolling.
A valid point, Psy-Kosh, but I’ve seen this happen to a friend too. She was walking along the streets one night when a strange blur appeared across her vision, with bright floating objects. Then she was struck by a massive headache. I had her write down what the blur looked like, and she put down strange half-circles missing their left sides.
That point was when I really started to get worried, because it looked like lateral neglect—something that I’d heard a lot about, in my studies of neurology, as a symptom of lateralized brain damage from strokes.
The funny thing was, nobody in the medical profession seemed to think this was a problem. The medical advice line from her health insurance said it was a “yellow light” for which she should see a doctor in the next day or two. Yellow light?! With a stroke, you have to get the right medication within the first three hours to prevent permanent brain damage! So we went to the emergency room—reluctantly, because California has enormously overloaded emergency rooms—and the nurse who signed us in certainly didn’t seem to think those symptoms were very alarming.
The thing is, of course, that non-doctors are legally prohibited from making diagnoses. So neither the nurse on the advice line nor the nurse who signed us into the emergency room was allowed to say: “It’s a migraine headache, you idiots.”
You see, I’d heard the phrase “migraine headache”, but I’d had no idea of what the symptoms of a “migraine headache” were. My studies in neurology told me about strokes and lateral brain damage, because those are very important to the study of functional neuroanatomy. So I knew about these super dangerous and rare killer events that seemed sort of like the symptoms we were encountering, but I didn’t know about the common events that a doctor sees every day.
When you see symptoms, you think of lethal zebras, because those are what you read about in the newspapers. The doctor thinks of much less exciting horses. This is why the Medical Establishment has always been right, in my experience, every single time I’m alarmed and they’re not.
But in answer to your question about selection effects, Psy-Kosh, I think I’d have noticed if my friend had actually had a stroke. In fact, it would have been much more likely to have been reported and repeated than the reverse case.
I had a similar experience with my girlfriend, except the symptoms were significantly more alarming. She was, among other things, unable to remember many common nouns. I would point and say “What is that swinging room separator?” and she would be unable to figure out “door”.
I was aware from the start that the symptoms might have been due to a migraine aura, having looked up the symptoms on Wikipedia, but was advised by 811 to take her to the hospital immediately. The symptoms were gone before we arrived. Five hours later (a strong hint that at least the triage people thought it wasn’t an emergency), a doctor had diagnosed it as a silent migraine.
Okie, and yeah, I imagine you would have noticed.
Also, of course, docs that habitually misdiagnose would presumably be sued (or worse) into oblivion by friends and family of the deceased. I was just unsure about the actual strength of that one thing I mentioned.
I think one would be closest to the truth by replying: “I don’t quite believe that your story is true, but if it is, you should… etc.” because there is no way for you to surely know whether he was bluffing or not. You have to admit both cases are possible even if one of them is highly improbable.
Doesn’t any model contain the possibility, however slight, of seeing the unexpected? Sure this didn’t fit with your model perfectly — and as I read the story and placed myself in your supposed mental state while trying to understand the situation, I felt a great deal of similar surprise — but jumping to the conclusion that someone was just totally fabricating is something that deserves to be weighed against other explanations for this deviation from your model.
Your model states that pretty much under all circumstances an ambulance is going to pick up a patient. This is true to my knowledge as well, but what happens if the friend didn’t report to you that, once the ambulance arrived, he called it off and refused to be transported? Or perhaps, at the same time his chest pains were being judged as not-so-severe, the ambulance got another call that a massive car pileup required their immediate presence.
Your strength as a rationalist must not be the rejection of things unlikely in your model but instead the act of providing appropriate levels of concern. Perhaps the best response is something along the lines of “Sounds like a pretty strange occurrence. Are you sure your friend told you everything?” Now we’re starting to judge our level of confidence in the new information being valid.
Which is honestly a pretty difficult model to shake as well. So much of every bit of information you build your world with comes from other people that I think it pretty decent to trust with some amount of abandon.
See antiprediction.
That’s certainly sensible, and in But There’s Still a Chance Eliezer gives examples where this seems strong. In the above example, it depends a whole lot on how much belief you have in people (or, rather, lines of IRC chat).
I think then that your strength as a rationalist comes in balancing that uncertainty against your prior trust in people. At which point, instead of predicting the negative, I’d seek more information.
The level of “trust” you have in a person should be inversely proportional to the sensationalism of the claim that he’s making.
If a person tells you he was abducted by a UFO, you demand evidence.
If a person tells you that on the way to work he slipped and fell down, and you have no concrete reason to doubt the story in particular or the person in general, you take that at face value. It is a reasonable assumption that a perfect stranger in all likelihood will NOT be delusional or a compulsive liar.
DP
That makes sense if you’re only evaluating complete strangers. In other words, your uncertainty about the population-inferred trustworthiness of a person is pretty high and so instead the mere (Occam Factor style) complexity of their statement is the overruling component of your decision.
In the stated case, this isn’t a totally random stranger. I feel quite justified in having a less-than-uninformative prior about trusting IRC ghosts. In this case, my rationally acquired prejudice overrules any inference about the truth of even somewhat ordinary tales.
The author did not mention anything about an exceptionally high percentage of liars in IRC relative to the general population (which would be quite relevant to his statement); therefore, there’s no reason to believe that such had been HIS experience in the past.
Given that, there is no reason for HIM to presume that the percentage of compulsive liars in IRC would be different from the general population. YOUR experiences may, of course, be drastically different, but they are not the subject of discussion here.
DP
And there’s always the prisoner’s dilemma to consider.
I don’t see that you did anything at all irrational. You’re talking to a complete stranger on the internet. He doesn’t know you, and cannot have any possible interest in deceiving you. He tells you a fairly detailed story and asks for your advice. For him to make the whole thing up just for kicks is an example of highly irrational and fairly unlikely behavior.
Conversely, a person’s panicking over chest pains and calling the ambulance is a comparatively frequent occurrence. Your having read somewhere something about ambulance policies does not amount to having concrete, irrefutable knowledge that an ambulance crew cannot make an on-site determination that there’s no need to take a person to the hospital. To a person without extensive medical knowledge there is nothing particularly unlikely about the story you were told.
Therefore, the situation is this—you are told by a complete stranger that has no reason to lie to you a perfectly believable story. You have no concrete reason (“read something somewhere” does not qualify) to doubt either the story or the man’s sanity. Thus there is nothing illogical about taking the story at face value. You did the perfectly rational thing.
Since there was no irrationality in your initial behavior, the conclusions that you arrive at further in your post are unfounded.
DP
You’re talking to a complete stranger on the internet. He doesn’t know you, and cannot have any possible interest in deceiving you.
There’s plenty of evidence that some people (a smallish minority, I think) will deceive strangers for the fun of it.
Which, as I said later on in the same paragraph, is irrational and unlikely behavior. Therefore, when lacking any factual evidence, the reasonable presumption is that that’s not the case.
DP
I think many of us have actually encountered liars on the Internet. I’m not sure what you mean when you say “lacking any factual evidence”.
I presume that you have encountered liars in the real world as well. Do you, on that basis, habitually assume that a random stranger engaging in casual conversation with you is a liar?
My point is that pathological liars are a small minority. So if you’re dealing with a person that you know absolutely nothing about, and who does not have any conceivable reason to lie to you, there is nothing unreasonable in assuming that he’s telling you the truth, unless you have factual evidence (i.e. you have accurate, verifiable knowledge of ambulance policies) that contradicts what he’s saying.
DP
I think at this point the questions have become (a) “how many bits of evidence does it take to raise ‘someone is lying’ to prominence as a hypothesis?” and (b) “how many bits of evidence can I assign to ‘someone is lying’ after evaluating the probability of this story based on what I know?”
I believe your argument is that a > b (specifically, that a is large and b is small), where the post asserts that a < b. I’m not going to say that’s unreasonable, given that all we know is what Eliezer Yudkowsky wrote, but often actual experience has much more detail than any feasible summary—I’m willing to grant him the benefit of the doubt, given that his tiny note of discord got the right answer in this instance.
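To put rough, purely illustrative numbers on what a “bit” means here: evidence is counted as the base-2 logarithm of a likelihood ratio, and posterior odds are prior odds times that ratio,

$$\text{posterior odds} = \frac{P(\text{story} \mid \text{lying})}{P(\text{story} \mid \text{honest})} \times \text{prior odds}, \qquad \text{bits} = \log_2 \frac{P(\text{story} \mid \text{lying})}{P(\text{story} \mid \text{honest})}.$$

So if, say, your prior odds that a given chatroom stranger is fabricating a medical story were 1:32 against (5 bits), the story’s details would have to be about 32 times more probable under “lying” than under “honest” (5 bits of evidence) just to bring the two hypotheses to even odds.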
My argument is what I stated, nothing more. Namely that there is nothing unreasonable about assuming that a perfect stranger that you’re having a casual conversation with is not trying to deceive you. I already laid out my reasoning for it. I’m not sure what more I can add.
DP
“Do you, on that basis, habitually assume that a random stranger engaging in casual conversation with you is a liar?”
Yes. Absolutely. Almost /everyone/ lies to complete strangers sometimes. Who among us has never given an enhanced and glamourified story about who they are to a stranger they struck up a conversation with on a train?
Never? Really? Not even /once/?
If everyone regularly talked to strangers on trains, and exactly once lied to such a stranger, it would still be pretty safe to assume that any given train-stranger is being honest with you.
Actually, yes, you’re entirely right.
In conversations I’ve had about this with friends—good grief, there’s a giant flashing anecdata alert if ever I did see one, but it’s the best we’ve got to go off here—I would suspect that people do it often enough that it’s a reasonable thing to consider in a situation like the one being discussed here, though.
Not that I think it’s a bad thing that the person in question didn’t, mind you. It would be a very easy option not to consider.
Yes, they deceive strangers in particular ways that have the potential to bring enjoyment to the deceiver. The story here doesn’t strike me as one of those cases—would it bring the deceiver any mirth to hear people’s medical advice about chest pains? Probably not. That would be more likely if the story were something like, “um, I’ve got these strange warts on my...”
(And I say this as someone who’s trolled IRC with similar requests for advice.)
Wait, why not?
I read somewhere that if I spin about and click my heels 3 times I will be transported to the land of Oz. Does that qualify as a concrete reason to believe that such a land does indeed exist?
DP
That indeed serves as evidence for that fact, though we have much stronger evidence to the contrary.
N.B. You do not need to sign your comments; your username appears above every one.
And not just because clicking the heels three times is more canonically (and more often) said to be the way to return to Kansas from Oz, and not to Oz.
So the fact that something was written somewhere is sufficient to meet your criteria for considering it evidence? I take it you have actually tried clicking your heels to check whether or not you would be teleported to Oz then?
Also, does my signing my comments offend you?
DP
It hurts aesthetically by disrupting uniformity of standard style.
Fair enough. It’s a habit of mine that I’m not married to. If members of this board take issue with it, I can stop.
Yes. It’s really sucky evidence.
This doesn’t remotely follow and is far weaker evidence than other available sources. For a start, everyone knows that you get to Oz with tornadoes and concussions.
It makes you look like an outsider who isn’t able to follow simple social conventions and may have a tendency towards obstinacy. (Since you asked...)
“This doesn’t remotely follow and is far weaker evidence than other available sources. For a start, everyone knows that you get to Oz with tornadoes and concussions.”
Let’s not get bogged down in the specific procedure of getting to Oz. My point was that if you truly adopt merely seeing something written somewhere as your standard for evidence, you commit yourself to analyzing and weighing the merits of EVERYTHING you read about EVERYWHERE. Do you mean to tell me that when you read a fairy tale you truly consider whether or not what’s written there is true? That you don’t just dismiss it offhand without giving it a second thought?
“It makes you look like an outsider who isn’t able to follow simple social conventions and may have a tendency towards obstinacy. (Since you asked...)”
Like I said above to Vladimir, it’s not a big deal, but you’re reading quite a bit into a simple habit.
No, you can acknowledge that something is evidence while also believing that it’s arbitrarily weak. Let’s not confuse the practical question of how strong evidence has to be before it becomes worth the effort to use it (“standard of evidence”) with the epistemic question of what things are evidence at all. Something being written down, even in a fairy tale, is evidence for its truth; it’s just many orders of magnitude short of the evidential strength necessary for us to consider it likely.
Vladimir, Cyan, and jimrandomh, since you essentially said the same thing, consider this reply to be addressed to all three of you.
Answer me honestly, when reading a fairy tale, do you really stop to consider what’s written there, qualify its worth as evidence, and compare it to everything else you know that might contradict it, before making the decision that the probability of the fairy tale being true is extremely low? Do you really not just dismiss it offhand as not true without a second thought?
No, but only because that would be cognitively burdensome. We’re boundedly rational.
When I pick up a work of fiction, I do not spend time assessing its veracity. If I read a book of equally fantastic claims which purports to be true, I do spend a little time. You might want to peruse bounded rationality for an overview.
So you would then agree that merely the fact that something is written SOMEWHERE, does not automatically qualify it as evidence?
(Incidentally that is my original point, which in spite of seeming as common sense as common sense can be, has attracted a surprising amount of disagreement.)
You have to specify what it purports to be evidence of before I can give you an answer that isn’t a tangent.
Edited to add: Maybe I can do better than the above sentence. I affirm that the existence of this book is negligible but not strictly zero evidence for the claims detailed therein.
At this point I’m not sure what we can do other than agree to disagree. I do not consider a random article from an obscure source on the internet to be evidence of anything.
There may be a sense in which this is common sense, but you were purposely using it tendentiously, which is why people responded in the technical way that they did.
Eliezer said that he read something “somewhere”, obviously intending to say that he read it somewhere that he considered trustworthy at the time, not in a fairy tale.
Well, what can I say? I simply don’t consider the vague recollection of reading something somewhere credible evidence of anything, and I stand by that. However, the amount of people that took issue with this statement did open my eyes to the fact that the definition of word “evidence” is not as clear cut as I thought it to be. Not sure if there’s any way to resolve this difference of opinion though.
As noted by jimrandomh, saying ‘credible evidence’ does make an effort to differentiate between different sorts of evidence. If your claim was simply that reading something was not evidence, then you should not have to qualify the word when you use it now. I imagine for those of us who seem to be disagreeing with you, we would agree that that does not constitute ‘credible evidence’ for some values of ‘credible’.
That’s really clever. I always thought that “credible evidence” was a bit redundant actually. I just used it as a figure of speech without thinking about it, but according to my definition of evidence, its being credible is pretty much implicit. It has been made abundantly clear to me, however, that this community’s definition differs substantially, so that’s the definition I will use when posting here going forward.
The easy solution is to stop arguing about the definition of evidence. This community uses it to mean one thing, you’re using it to mean something else, and any sort of conflict goes away as soon as people make clear which definition they’re using. Since this community already has an accepted definition, you would be safe in assuming that that definition is what other posters here have in mind when they use the word “evidence”. By the same token, you should probably find a more precise way to refer to the definition of evidence that you are using in order to avoid being misinterpreted.
Sticking an adjective in front of the word evidence seems to work. “Evidence” includes things that give you 10^-15 bits of information; on the other hand “good evidence”, “usable evidence” and “credible evidence” all imply that the strength of the evidence is at least not exponentially tiny.
I thought that “evidence”, unmodified, would mean non-trivial evidence; otherwise, everything has to count as evidence because it will have some connection to the hypothesis, however weak. To specify a kind of evidence that includes the 1e-15 bit case, I think you would need to say “weak evidence” or “very weak evidence”.
But I’m not the authority on this: How do others here interpret the term “evidence” (in the Bayesian sense) when it’s unmodified?
If I were talking to a Bayesian, I would interpret “B is evidence for A” as meaning that a rough calculation shows P(A|B) > P(A). I don’t generally expect rationalists to even mention individual data points unless P(A|B)/P(A) is large, but if someone else gave the data as an example, then I wouldn’t necessarily expect the ratio to be large when a Bayesian referred to it as evidence. So, for example, I could see a Bayesian asserting that the writing of the Bible is evidence for a global flood some 5,000 years ago, but I’d be deeply surprised if a Bayesian brought this up in almost any context, because the evidence is so weak (in this case P(A|B) > P(A), but P(A|B)/P(A) is very close to 1).
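To make the “technically evidence, but too weak to mention” case concrete, with numbers invented purely for illustration: writing the update in odds form,

$$\frac{P(A \mid B)}{P(\neg A \mid B)} = \frac{P(B \mid A)}{P(B \mid \neg A)} \cdot \frac{P(A)}{P(\neg A)},$$

a likelihood ratio of 1.001 turns prior odds of 1:10 into posterior odds of 1.001:10. B is evidence for A in the strict sense, but the shift is so small that bringing it up would usually mislead more than it informs.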
I agree, this sounds exactly right to me. Unfortunately, I remember that in a lot of Robin Hanson’s earlier OvercomingBias posts, my reaction to them would be, “Yes, B is technically evidence in favor of A, but it’s extremely weak—why even mention it?” For example, Suicide Rock.
(I think I have a picture of one of those somewhere...)
Hmm. Maybe the strength of the evidence isn’t the right thing to use, but rather the confidence with which we know the sign of the correlation.
I’m sympathetic to both views.
I have encountered a number of disputes that revolve around using these two different senses of the word, and am nonetheless blindsided by them consistently.
I try to always specify the strength of evidence in some sense when using the word. I think when I do use it unmodified I tend to use it in the technical sense (including even weak evidence).
It would be odd if ‘evidence’ excluded weak evidence, since then ‘weak evidence’ would be a contradiction in terms, or you could see people arguing things like “When I said ‘weak evidence’ I didn’t mean the 1e-15 bit case, since that’s not evidence at all!”
That’s fair enough. However, judging by what I’ve read, this community’s definition of evidence seems to constitute just about anything ever written about anything. How would you then differentiate evidence, from rumor, hearsay, speculation, etc.?
The wiki should be a good starting point for answering this question. What is Evidence? may also be helpful.
Short version: rumor, hearsay, and speculation are evidence, albeit of a very weak variety.
Well that clarifies things quite a bit. I find this definition of evidence surprising, especially in this community, but very interesting. I’ll have to sleep on it. Thank you for the references.
Rumor, hearsay, etc. falls under our definition of evidence, just weak evidence, or probably very indirect (for example, if there is a rumor that A, it might constitute evidence against A being true, given other things you know).
Immediate observation is only that something is written. That it’s also true is a theoretical hypothesis about that immediate observation. That what you are reading is a fairy tale is evidence against the things written there being true, so the theory that what’s written in a fairy tale is true is weak. On the other hand, the fact that you observe the words of a given fairy tale is strong evidence that the person (author) whose name is printed on the cover really existed.
All that is indisputably true. But you didn’t really answer my question on whether or not you give enough consideration to what’s written in a fairy tale (not whether or not it’s written, not who it’s written by, but the actual claims made therein) to truly consider it evidence to be incorporated into or excluded from your model of the world.
That is because it is a bad question and one of a form for which you have already received responses.
Evidence isn’t usually something you “include” in your model of the world, it’s something you use to categorize models of the world into correct and incorrect ones. Evidence is usually something not interesting in itself, but interesting instrumentally because of the things it’s connected to (caused by).
The fact that something is really written is true; whether it implies that the written statements themselves are true is a separate theoretical question. Yes, ideally you’d want to take into account everything you observe in order to form an accurate idea of future expected events (observable or not). Of course, it’s not quite possible, but not for the want of motivation.
Well I didn’t think I needed to clarify that I’m not questioning whether or not something that’s written is really written. Of course, I’m questioning the truthfulness of the actual statement.
Or not so much its truthfulness, but rather whether or not it can be considered evidence. Though I realize that you take issue with arguing over word definitions, to me the word evidence has a certain meaning that goes beyond every random written sentence, whisper, or rumor that you encounter.
Around these parts, a claim that B is evidence for A is taken to be equivalent to claiming that B is more probable if A is true than if not-A is true. Something can be negligible evidence without being strictly zero evidence, as in your example of a fairy story.
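To make the definition concrete, here is a minimal Python sketch with made-up numbers; the function and the probabilities are purely illustrative:

def posterior(prior_a, p_b_given_a, p_b_given_not_a):
    # Bayes' rule: P(A|B) from a prior on A and the likelihoods of B.
    joint_a = prior_a * p_b_given_a
    joint_not_a = (1 - prior_a) * p_b_given_not_a
    return joint_a / (joint_a + joint_not_a)

prior = 0.01  # hypothetical prior probability of A

# Strong evidence: B is twenty times likelier under A than under not-A.
print(posterior(prior, 0.60, 0.03))   # ~0.17

# Negligible-but-nonzero evidence: B is barely likelier under A.
print(posterior(prior, 0.51, 0.50))   # ~0.0102, the prior barely moves

Both cases satisfy “B is evidence for A”; the second simply does almost nothing to the posterior, which is the sense in which it is weak.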
The fact that something is written, or not written, is evidence about the way the world is, and hence to some extent evidence about any hypothesis about the world. Whether it’s strong evidence about a given hypothesis is a different question, and whether the statement written/not written is correct is yet another question.
(See also the links from this page.)
This doesn’t remotely follow either. Go and research the concept of evidence more.
I care little about your signature. I merely describe the social behaviour of humans. What actually does annoy me is if people refuse to use markdown syntax for quotes once they have been prompted. Click the help link below the comment box—consider yourself prompted.
Duly noted. God forbid I do something that annoys you. Won’t be able to live with myself.
As always, I recommend against sarcasm, which can hide errors in reasoning that would be more obvious when you speak straightforwardly.
It was a comment on wedrifid’s implicit assumption that I should care about what annoys him and bizarre expectation that I would adjust my behavior because I was “prompted” (not asked politely mind you) by him. Not sure what part of that is not obvious to you.
Generally, when some minor formatting issue annoys a long-standing member of an internet community it is a good idea to listen to what they have to say. Many internet fora have standard rules about formatting and style that aren’t explicitly expressed. These rules are convenient because they make reading easier for everyone. There’s also a status/signaling aspect in that not using standard formatting signals someone is an outsider. Refusing to adopt standard format and styling signals an implicit lack of identification with a community. Even if one doesn’t identify with a group, the effort it takes to conform to formatting norms is generally small enough that the overall gain is positive.
You’re absolutely right. I have no problem using indentation for quotes; as a matter of fact, I was wondering how to do that. It’s his condescending tone that I took issue with. In retrospect, though, I should have just ignored it, but I let my temper get the best of me. I’ll try to keep counter-productive comments to a minimum in the future.
Indentation happens by putting a greater-than sign at the beginning of the line. Thus:
> quoted text
becomes an indented quote block.
I’m not sure of the particulars of your situation, but I personally encounter people lying on the internet orders of magnitude more times than I do people having chest pains.
An alternative explanation? You put your energy into solving a practical problem with a large downside (minimizing the loss function in nerdese). Yes, to be perfectly rational you should have said: “the guy is probably lying, but if he is not then...”.
I wouldn’t call it a flaw; blaring alarms can be a nuisance. Ideally you could adjust the sensitivity settings . . . hence the popularity of alcohol.
Thank you, Eliezer. Now I know how to dissolve Newcomb type problems. (http://lesswrong.com/lw/nc/newcombs_problem_and_regret_of_rationality/)
I simply recite, “I just do not believe what you have told me about this intergalactic superintelligence Omega”.
And of course, since I do not believe, the hypothetical questions asked by Newcomb problem enthusiasts become beneath my notice; my forming a belief about how to act rationally in this contrary-to-fact hypothetical situation cannot pay the rent.
Fair enough (upvoted); but I’m pretty sure Parfit’s Hitchhiker is analogous to Newcomb’s Problem, and that’s an absolutely possible real-world scenario. Eliezer presents it in chapter 7 of his TDT document.
This sort of brings to my mind Pirsig’s discussions about problem solving in ZATAOMM. You get that feeling of confusion when you are looking at a new problem, but that feeling is actually a really natural, important part of the process. I think the strangest thing to me is that this feeling tends to occur in a kind of painful way—there is some stress associated with the confusion. But as you say, and as Pirsig says, that stress is really a positive indication of the maturation of an understanding.
I’m not sure that listening to one’s intuitions is enough to cause accurate model changes. Perhaps it is not rational to hold a single model in your head, since your information is incomplete. Instead, one can consciously examine the situation from multiple perspectives; that way, the response of the nicer model (simpler, more consistent, whatever your metric is) can be applied. Alternatively, you could legitimately assume that all the models you hold have merit and produce a response that balances their outcomes: e.g., if your model of the medical profession is wrong and the person dies from your advice, that is much worse than an unnecessary ambulance call (letting the medical profession address the balance of resources). This would lead a rational person to simultaneously believe many contradictory perspectives and act as if they were all potentially true; a small sketch of this balancing appears below. Does anyone know of any theory in this area? The modelling of models (and efficiently resolving multiple models) would be very useful in AI.
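To make the balancing idea concrete, here is a minimal Python sketch; the credences, probabilities, and costs are all invented for illustration:

# Acting under model uncertainty: weight each model by credence, then
# pick the action with the lower expected loss. All numbers are invented.
models = {
    "paramedics are essentially always right": 0.90,
    "paramedics sometimes miss heart attacks": 0.10,
}

# Assumed P(serious emergency | model) if the friend does nothing:
p_serious = {
    "paramedics are essentially always right": 0.001,
    "paramedics sometimes miss heart attacks": 0.20,
}

COST_UNTREATED_EMERGENCY = 1_000_000   # loss if a real emergency goes untreated
COST_EXTRA_AMBULANCE = 500             # loss of an unnecessary second call

p_emergency = sum(credence * p_serious[m] for m, credence in models.items())
print("expected loss, do nothing:", p_emergency * COST_UNTREATED_EMERGENCY)
print("expected loss, call again:", COST_EXTRA_AMBULANCE)
# Even with 90% credence in "they're always right", the asymmetric costs
# favour calling again.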
Considering that medical errors apparently kill more people than car accidents each year in the United States, I suspect the establishment is not in fact infallible.
Citation needed? I know I’m coming to this rather late, but a quick check of the 2010 CDC report on deaths in the US gives “Complications of medical and surgical care” as causing 2,490 deaths, whereas transport accidents caused 37,961 deaths (35,332 of which were classified as ‘motor vehicle deaths’). The only other thing I can see that might be medical errors filed under a different heading is “Accidental poisoning and exposure to noxious substances” at 33,041, which combines to still fewer deaths than transport accidents, even without removing those poisonings which are not medical errors. (This poisoning category appears to include a lot of recreational drug overdoses, judging by the way it sharply increases in the 15-24 age group and then drops off after 54, whereas time spent in hospital presumably increases with age.)
On the other hand, a 2012 New York Times Op-Ed claims 98,000 deaths from medical errors a year. This number is so much larger than what the CDC reports that I must be misreading something; it would mean about 1 in 20 people who die in the US die due to medical error. (The original source is from 1999.) Actually checking that source, 98,000 deaths/year is the upper bound given (the lower bound is 44,000 deaths/year). The report also recommends a 50% reduction in these deaths within 5 years (so by 2004), and Wikipedia mentions a 2006 study claiming that 120,000 deaths were successfully prevented in an 18-month period, but I can’t find this study. A 2001 followup here appears to focus on suggestions for improvements rather than on giving new data relevant to our question. Three minutes on Google Scholar didn’t turn up any recent estimates. This entire sub-field appears to rely very heavily upon that one source—at least in the US.
Also of interest is “Actual Causes of Death in the US” which classifies deaths by ‘mistake made’ (so to speak) - the top killer being tobacco use, then poor diet/low exercise, alcohol, microbial agents, toxic agents, car accidents, firearms, sexual behaviors, and illicit drug use. Medical errors didn’t show high up on this list, despite it being the only source in the Wikipedia article on the original article.
Edit: also some places that cite the 1999 study accuse the CDC of not reporting these deaths as their own category. This appears to have changed given the category I reported above. The fact that there has been substantial uproar about medical error since the 1999 article and a corresponding increase in funding for studying it makes me unsurprised that the CDC would start reporting.
If a doctor makes a mistake treating a patient from a vehicle accident, what heading does it get reported under?
(I ask the question in earnest, to anybody who might know the answer—because depending on what the answer is, it could explain the discrepancy.)
From TvTropes:
How do you face this situation as a rationalist?
I believe the evidence is that the initial urge of A is more credible than the rationalization of B. That is, when students change answers on multiple choice tests, they are more likely to turn a right answer to a wrong answer than a wrong answer to a right answer. (I don’t know if that generalizes to a true-false setting.)
It matters why “B sounds more plausible to your mind.” If it’s because you remembered a new fact, or if you reworked the problem and came out with B, change the answer (after checking that your work was correct and everything). Many multiple choice tests are written so that there is one right answer, one wrong answer, and two plausible-sounding answers, so you shouldn’t change an answer just because B is starting to sound plausible.
There are two modes of reasoning that are useful that I’d like to briefly discuss: inside view, and outside view.
Inside view uses models with small reach / high specificity. Outside view uses models with large reach / high generality. Inside view arguments are typically easier to articulate, and thus often more convincing, but there are often many reasons to prefer outside view arguments. (Generally speaking, there are classes of decisions where inside view estimates are likely to be systematically biased, and so using the outside view is better.)
When wondering whether to switch an answer, the inside view recommends estimating which answer is better. The outside view recommends looking at the situation you’re in: “when people have switched answers in the past, has it generally helped or hurt?”
There are times when switching leads to the better result. But the trouble is that you need to know that ahead of time, and so, as you suggest, there may be reasons to switch that you can identify as strong reasons. But the decision whether to apply the inside or outside view (or whether to collect enough data to increase the specificity of your outside-view approach) is itself a decision you have to make correctly, which you probably want to use the outside view to track, rather than just trusting your internal assessment at the time.
I think more context is necessary. Sorry.
I feel really uncomfortable with this idea: “EITHER YOUR MODEL IS FALSE OR THIS STORY IS WRONG.”
I think this statement suffers from the same limitations of propositional logic; consequently, it is not applicable to many real life situations.
Most of the time, our model contains rules of this type (at least if we are rationalists): Event A occurs in situation B with probability C, where C is not 0 or 1. Also, life experience teaches us that we should update the probabilities in our model over time. So besides the uncertainty caused by the probability C, there is also uncertainty resulting from our degree of belief in the correctness of the rule itself. The situation becomes more complicated when the problem is cost-sensitive.
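One standard way to represent that second layer of uncertainty is to keep a distribution over C itself rather than a point value; here is a minimal Python sketch (the Beta pseudo-counts and the observations are invented for illustration):

# Second-order uncertainty: instead of a fixed probability C for
# "event A occurs in situation B", keep a Beta distribution over C and
# update it as experience comes in.
alpha, beta = 2.0, 8.0   # prior belief: C is probably low, but uncertain

def update(a, b, occurred):
    # Standard Beta-Bernoulli update of the belief about C.
    return (a + 1, b) if occurred else (a, b + 1)

for occurred in [True, False, False, True, True]:   # hypothetical observations
    alpha, beta = update(alpha, beta, occurred)

mean_c = alpha / (alpha + beta)
var_c = alpha * beta / ((alpha + beta) ** 2 * (alpha + beta + 1))
print(f"estimated C ~ {mean_c:.2f}, remaining uncertainty (variance) ~ {var_c:.4f}")

The mean captures the probability C, while the spread of the distribution captures how much you trust the rule itself; both shrink or shift as evidence accumulates.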
I got your point (I hope so), and I’m definitely not trying to say “IT IS WRONG,” but I think it is true to some degree.
This post frustrated me for a while, because it seems right but not helpful. Saying to myself, “I should be confused by fiction” doesn’t influence my present decision.
First, concretize. Let’s say I have a high-level world model. A few of them, perhaps, to reduce the chance that one bad example results in a bad principle.
“My shower produces hot water in the morning.” “I have fresh milk to last the next two days.” “The roads are no longer slippery.”
What do these models exclude? “The water will be cold”, “the milk will be spoiled”, “I’ll see someone sliding at an intersection” are easy ones. Then there are weirder ones like “I don’t even own a shower”, “Someone drank all my milk in the middle of the night”, and “the roads are closed off due to an earthquake”.
I could say, “My model as stated technically disallows all these things, so if I see any, I should have a huge update,” but that’s unrealistic. The use of “easy” and “weird” implicitly shows that I’m already thinking about hypotheses not as strictly allowing and disallowing, but as resulting in greater and lesser probabilistic gains/hits to my confidence.
Even if I do give up entirely on “I have fresh milk,” I usually replace it with something that is consistent with the old reasoning (not just the old observations). Perhaps I reason, “The milk should have been fresh but spoiled because of a temporary power outage last night.” That’s actually a bad example, because it’s not something I’d jump to if I didn’t have other observations indicating a power outage. Let’s try again. “The milk should have been fresh, but oh dang, it wasn’t.” Yes, that looks like something I’d think. What about the others? My first explanations would probably be “The roads are a little slippery in some places” and “The water heater is acting up.”
So what did we just see in this totally fictional but mildly plausible-sounding anecdote? Sometimes a failed hypothesis becomes “the failed hypothesis + some noise.” Other times it’s like the water heater explanation, which looks pretty different. Let’s think about the first type. Is this small model-distance update heuristic justified? The new model clearly gives more probability mass to our actual observations, but that’s just the representativeness heuristic, totally insufficient to judge whether the theory is acceptable. For that we look to Bayes:
P(H|E) = P(H) * P(E|H) / P(E)
P(E) will be the same for all hypotheses we consider, so just ignore that. P(E|H) is pretty high, since we added noise to make sure the hypothesis would predict evidence. What about P(H)? How do I practically compare the prior for different hypotheses? How do I know when adding noise to my model is good enough vs. when I need to search for new hypotheses?
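As a rough illustration of that comparison, here is a Python sketch with invented priors and likelihoods for the milk example (neither the hypotheses nor the numbers are real estimates):

# Comparing two explanations for "the milk tastes spoiled" by
# prior * likelihood; P(E) cancels out. All numbers are invented.
hypotheses = {
    "old model + noise (it just spoiled a bit early)": (0.30, 0.70),
    "a power outage last night warmed the fridge":      (0.02, 0.90),
}

scores = {h: prior * lik for h, (prior, lik) in hypotheses.items()}
total = sum(scores.values())

for h, score in scores.items():
    print(f"{h}: relative posterior ~ {score / total:.2f}")

# The outage story has the higher likelihood P(E|H), but its low prior
# P(H) keeps its posterior small among the hypotheses considered.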
Let’s think of six different methods to guess whether our new hypothesis will be good enough.
Outside view. Think of five times you’ve been confused in the past four months when dealing with spoiled foods you purchased. If you can’t, you’re well calibrated enough in the spoiled food domain and the hypothesis is fine.
Solomonoff induction says that complexity penalizes the prior probability of hypotheses. Let’s try some dirty things that look like that. Count the words in the hypothesis. Count the nouns and verbs. Count the number of conjunctions and subtract the number of disjunctions. Count syllables, or time how long it takes you to say it. Do this a lot in your daily life so you know how big your theories generally are. Guess what your average has been. Compare to the milk explanation. (A toy version of this word-count proxy is sketched after this list.)
Consequential salience. Think of four things your theory predicts and four things that your theory disallows. If any of those eight things makes you squick, count a point. Two squicks or more means your theory is weird and you need to look for a better hypothesis. If you spend long enough trying to think of a consequence that you notice the time, your hypothesis isn’t paying rent in expectation and you need to look for a better hypothesis.
Remember your day, looking for two pieces of confirming evidence. Remember your day, looking for three pieces of disconfirming evidence. Arbitrarily decide whether the hypothesis continues to jibe with the new evidence. If not, new hypothesis formation.
Imagine a wizard told you your new hypothesis before you tasted the spoiled milk. Imagine his clothes. Is the hypothesis sensible enough that you can trust him? Would you let him borrow a cup of sugar?
You’re wrong. Your hypothesis is simply wrong. Say it to yourself. Say that the milk is still fine. Imagine whether you could go about your day believing this. Can you drink the milk? If not, your hypothesis changed by a large amount and it’s sensible to look for alternatives rather than sticking with your old reasoning by mental inertia.
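Here is a toy Python version of the word-count proxy from the Solomonoff-flavored method above (this is emphatically not Solomonoff induction; the 8-word baseline and the halving rule are invented):

# Toy complexity proxy: shorter hypotheses get a larger prior-ish weight.
def crude_complexity_weight(hypothesis, baseline_words=8.0):
    # Halve the weight for every baseline_words words of description.
    n_words = len(hypothesis.split())
    return 0.5 ** (n_words / baseline_words)

print(crude_complexity_weight("The milk just spoiled early."))
print(crude_complexity_weight(
    "A power outage last night warmed the fridge just long enough to spoil the milk."
))

The longer explanation gets a smaller weight, which is the only thing this heuristic is meant to capture.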
Now the critical stage!
You don’t have time to remember the last four months. Don’t even think about hypothesis priors unless you’ve already spent more than a minute trying to decide something. Milk is not a big deal, save your cognitive energy for the higher order bits of your life. Also, four months is kind of food-spoiling specific. Time frames would have to be adapted for different problems.
That is not Solomonoff induction in any way. We don’t even have a language for formally expressing high level concepts like “spoiled milk” unless you look at brain architecture to figure out how they classify reality. Also “compare” is not concrete enough.
Emotional salience fails us badly in abstract situations. Thinking of disconfirming evidence is painful; our brains won’t easily present squicky things.
“Arbitrarily decide” is not an actionable procedure.
This one actually seems kind of okay, unless you’re just as likely to give sugar to senseless wizards.
I’m not sure small updates have small changes in consequence value, but doing more thinking when costs are high generally doesn’t seem horrible. Maybe we should add in something to keep us from thinking longer just to procrastinate though.
Conclusions! Priors over explanations are -hard-. Sometimes we naturally make new hypotheses, sometimes we just add some noise. Maybe take the outside view of yourself if you have time! Maybe take the outside view of the hypothesis by having a wizard tell it to you if not. Your strength as a rationalist? Not drinking spoiled milk, not wasting time thinking about spoiled milk, noticing squicks, successfully doubting yourself when you feel a squick, believing some things because they work really well even if they sound crazy when a wizard says them.
Yet, when a person of even moderate cleverness wishes to deceive you, this “strength” can be turned against you. Context is everything.
As Donald Gause asks in “Are Your Lights On?”, WHO is it that is bringing me this problem?
Looking through Google Scholar for citations of Gilbert 1990 and Gilbert 1993, I see 2 replications which question the original effect:
Hasson, U., Simmons, J. P. and Todorov, A. 2005: Believe it or not: on the possibility of suspending belief. Psychological Science, 16, 566–71 http://mba.yale.edu/faculty/pdf/Simmonsj_Believe.pdf
Richter, T., Schroeder, S. and Wohrmann, B. 2009: You don’t have to believe everything you read: background knowledge permits fast and efficient validation of information. Journal of Personality and Social Psychology, 96, 538–58 http://cms.uni-kassel.de/unicms/fileadmin/groups/w_270518/pub_richter/Richter_Schroeder_Woehrmann_JPSP.pdf
(While looking for those, I found some good citations for my fiction-biases section, though.)
Eliezer’s model:
The Medical Establishment is always right.
Information given:
Person is feeling chest pain.
Paramedics say hospitalization is unnecessary.
Possible scenarios mentioned in the story:
1. Person is feeling chest pain and is having a heart attack.
2. Person is feeling chest pain but does not need to be hospitalized.
3. Person is lying.
Between the model and the information given, only Scenario 1 can be ruled false; Scenarios 2 and 3 are both possible. If Eliezer is going to beat himself up for not knowing better, it should be because Scenario 3 did not occur to him—not that Scenario 3 is the logical reality.
The way you phrase it hides the crucial part of the story. Rephrasing:
1. Person is telling the truth:
   a) They are having a heart attack, but the paramedics judged wrongly, dismissed it, and didn’t take him to the hospital.
   b) They are not having a heart attack; the paramedics judged rightly, dismissed it, and didn’t take him to the hospital.
2. Person is lying.
Eliezer is saying that he should have known scenario 1 is wrong because regardless of whether or not the paramedics think it’s legit, they would have taken the person to the hospital anyway. So, 1a and 1b must be wrong, leaving 2.
Or, if I were going to add to your model, I would add “The Medical Establishment always takes people in the ambulance if they call for a medical reason.” Then, when the information given is “Paramedics say hospitalization is unnecessary,” that would have been a direct conflict between model and information, where Eliezer had to choose between rejecting the model and rejecting the information.
I see two senses (or perhaps not-actually-qualitatively-different-but-still-useful-to-distinguish cases?) of ‘I notice I’m confused’:
(1) Noticing factual confusion, as in the example in this post. (2) Noticing confusion when trying to understand a concept or phenomenon, or to apply a concept.
Example of (2): (A) “Hrm, I thought I understood what ‘Colorless green ideas sleep furiously’ means when I first heard it; the words seemed to form a meaningful whole based on the way they fell together. But when I actually try to concretise what that could possibly mean, I find myself unable to, and notice that characteristic pang of confusion.”
Example of (2): (B) “Hrm, I thought I understood how flight works because I could form words into intelligent-sounding sentences about things like ‘lift’ and ‘Newton’s third law’. But then when I tried to explain why a plane goes up instead of down, my word soup explained both equally well, and I noticed I was confused.” (Compare, from the post: “I knew that the usefulness of a model is not what it can explain, but what it can’t. A hypothesis that forbids nothing, permits everything, and thereby fails to constrain anticipation.”)
It might be useful to identify a third type:
(3) Noticing argumentative confusion. Example of (3): “Hrm, those fringe ideas seem convincing after reading the arguments for them on this LessWrong website. But I still feel a lingering hesitation to adopt the ideas as strongly as lots of these people seem to have, though I’m not sure why.” (Confusion as pointer to epistemic learned helplessness)
As in the parent to this comment, (3) is not necessarily qualitatively distinct (e.g., argumentative confusion could be recast as factual confusion: “Hrm, I’m confused by this hesitation I observe in myself to fully endorse these fringe ideas after seeing such seemingly-decisive arguments. Maybe this means something.”). Observations of internal reaction are still observations about which one can be factually confused.
Was a mistake really made in this instance? Is it not correct to conclude ‘there was no problem’? Yes, the author did not realise the story was fictional; but which of his conclusions implied that the story was not fictional?
Furthermore, is it good to berate oneself because one does not immediately realise something? In this case, the author did not immediately realise the story was fictional. But evidently the author was already working toward that conclusion by throwing doubt on parts of the story. And the evidence the author had was obviously inconclusive; the story could have been fictional (and the lie could have been invented at several stages), or the complainant could perhaps have simply misinterpreted chest pains as something else, or perhaps the doctors could have in fact made a mistake etc. Given all that, it seems rather after-the-fact to conclude the “Rational” conclusion one should have reached was that the story was a fiction. Surely the “Rational” conclusion would be to suspend judgement pending further investigation; or perhaps judge, but judge lightly. In any case, the self-flagellation at the end of the article seemed unnecessary. Humans are not capable of permanently being “Rational” thinkers; to get to the “Rational” conclusions, it is often best to proceed in baby-steps we can take rather than “Rational” leaps prescribed by whatever vision of “Rationality” is imagined by the author.
This looks like an instance of the Dunning-Kruger effect to me. Despite your own previous failures in diagnosis, you still felt competent to give medical advice to a stranger in a potentially life-threatening situation.
In this case, the “right answer” is not an analysis of the reliability of your friend’s account; it is “get a second opinion, stat.” This is especially true seeing as how you believed the description you gave above.
If a paramedic tells me “it’s nothing”, I complain to his or her superiors, because that is not a diagnosis. Furthermore, I don’t see in your description a claim that the paramedics said there’s no need to worry even if the pain becomes worse later on, so it seems sensible for you to presume they didn’t. So, even if the first assessment is presumed correct, it is not admissible to think that it extends to different evidence.
And if that doesn’t convince you, compute the expectation value of probably being right in a chat vs. the small chance of being sued for wrongful death times everything you own and will ever earn.
Of course, it’s also possible to overdo it. If you hear something odd or confusing, and it conflicts with a belief that you are emotionally attached to, the natural reaction is to ignore the evidence that doesn’t fit your worldview, thus missing an opportunity to correct a mistaken belief.
On the other hand, if you hear something odd or confusing, and it conflicts with a belief or assumption that you aren’t emotionally attached to, then you shouldn’t forget about the prior evidence in light of the new evidence. The state of confusion should act as a trigger mechanism telling you to tally up all the evidence and decide which piece doesn’t fit.
Since I think evolution makes us quite fit for our current environment, I don’t think cognitive biases are design flaws. In the above example, you imply that even though you had the information available to guess the truth, your guess was a different one, and it was false; therefore you experienced a flaw in your cognition.
My hypothesis is that reaching the truth, or communicating it on IRC, may not have been the end objective of your cognitive process. In this case, dismissing the issue as something that was not important anyway (“so move on and stop wasting resources on this discussion”) may have been the “biological” objective, and as such it was correct, not a flaw.
If the above is true, then all cognitive biases, simplistic heuristics, fallacies, and dark arts are good, since we have conducted our lives according to them for 200,000 years and we are alive and kicking.
Rationality and our search to be LessWrong, which I support, may be tools we are developing to evolve in our competitive ability within our species, but not a “correction” of something that is wrong in our design.
See, this is why it’s a bad idea to use the language of design when talking about evolution. Evolution doesn’t have a design. It optimizes locally according to a complex landscape of physical and sexual incentives, and in the EEA that usually would have favored fast and frugal heuristics. Often it still does; if you’re driving a car or running away from a bear, you don’t want to drop what you’re doing and work out the globally optimal path before taking action. That’s all well and good.
But things have changed in the last 12,000 years; we spend more time doing long-range planning and optimization work, for example, and less time running away from tigers and hitting each other on the head with clubs. Evolution works slowly, and we haven’t reached a local maximum for our environment yet, nor are we likely to in the near future as we continue to reshape it; we’re left with a set of cognitive tools, therefore, that are often poorly adapted to our goals. It’s these that we seek to compensate for, when and where doing so is appropriate.
While our goals are informed by biology, though, their biological influences are no “truer”, no more “correct”, than any other. We certainly shouldn’t treat them as gospel; if they turn out to be in tension with the environment, as in many cases they have, evolution will be quite happy to select against them.
They’re design flaws insofar as there are far better possibilities. Just because something doesn’t fail entirely doesn’t mean its design is any good.
This is the same as above. This might also be relevant.
Many of us do not (consciously) want to gain competitive advantages compared to other people but rather raise the sanity waterline.
Good for survival, but not for truth-seeking. Epistemic and instrumental rationality are different things.
And even in terms of survival, human neurology isn’t that great. It was good enough to get our species to survive until now, but it’s nowhere close to optimal.
Is EY saying that if something doesn’t feel right, it isn’t? I’ve been working on this rationalist koan for weeks and can’t figure out something more believable! I feel like a doofus!
No. Two possibilities, not just one:
The “we believe instinctively, but disbelief requires a conscious effort” link is not working.
This article actually made me ask “Wait, is this even true?” when I read an article with weird claims; then I research whether the source is trustworthy, and sometimes it turns out that it isn’t.
Trying to understand this.
I think what Yud means there is that a good model will break quickly. It only explains a very small set of things because the universe is very specific. So it’s good that it doesn’t explain many many things.
It’s a bit like David Deutsch arguing that good explanations should be hard to vary, i.e., sensitive to small changes: all of their elements should be important.