Belief in Belief
Carl Sagan once told a parable of someone who comes to us and claims: “There is a dragon in my garage.” Fascinating! We reply that we wish to see this dragon—let us set out at once for the garage! “But wait,” the claimant says to us, “it is an invisible dragon.”
Now as Sagan points out, this doesn’t make the hypothesis unfalsifiable. Perhaps we go to the claimant’s garage, and although we see no dragon, we hear heavy breathing from no visible source; footprints mysteriously appear on the ground; and instruments show that something in the garage is consuming oxygen and breathing out carbon dioxide.
But now suppose that we say to the claimant, “Okay, we’ll visit the garage and see if we can hear heavy breathing,” and the claimant quickly says no, it’s an inaudible dragon. We propose to measure carbon dioxide in the air, and the claimant says the dragon does not breathe. We propose to toss a bag of flour into the air to see if it outlines an invisible dragon, and the claimant immediately says, “The dragon is permeable to flour.”
Carl Sagan used this parable to illustrate the classic moral that poor hypotheses need to do fast footwork to avoid falsification. But I tell this parable to make a different point: The claimant must have an accurate model of the situation somewhere in their mind, because they can anticipate, in advance, exactly which experimental results they’ll need to excuse.
Some philosophers have been much confused by such scenarios, asking, “Does the claimant really believe there’s a dragon present, or not?” As if the human brain only had enough disk space to represent one belief at a time! Real minds are more tangled than that. There are different types of belief; not all beliefs are direct anticipations. The claimant clearly does not anticipate seeing anything unusual upon opening the garage door. Otherwise they wouldn’t make advance excuses. It may also be that the claimant’s pool of propositional beliefs contains the free-floating statement There is a dragon in my garage. It may seem, to a rationalist, that these two beliefs should collide and conflict even though they are of different types. Yet it is a physical fact that you can write “The sky is green!” next to a picture of a blue sky without the paper bursting into flames.
The rationalist virtue of empiricism is supposed to prevent us from making this class of mistake. We’re supposed to constantly ask our beliefs which experiences they predict, make them pay rent in anticipation. But the dragon-claimant’s problem runs deeper, and cannot be cured with such simple advice. It’s not exactly difficult to connect belief in a dragon to anticipated experience of the garage. If you believe there’s a dragon in your garage, then you can expect to open up the door and see a dragon. If you don’t see a dragon, then that means there’s no dragon in your garage. This is pretty straightforward. You can even try it with your own garage.
No, this invisibility business is a symptom of something much worse.
Depending on how your childhood went, you may remember a time period when you first began to doubt Santa Claus’s existence, but you still believed that you were supposed to believe in Santa Claus, so you tried to deny the doubts. As Daniel Dennett observes, where it is difficult to believe a thing, it is often much easier to believe that you ought to believe it. What does it mean to believe that the Ultimate Cosmic Sky is both perfectly blue and perfectly green? The statement is confusing; it’s not even clear what it would mean to believe it—what exactly would be believed, if you believed. You can much more easily believe that it is proper, that it is good and virtuous and beneficial, to believe that the Ultimate Cosmic Sky is both perfectly blue and perfectly green. Dennett calls this “belief in belief.”1
And here things become complicated, as human minds are wont to do—I think even Dennett oversimplifies how this psychology works in practice. For one thing, if you believe in belief, you cannot admit to yourself that you merely believe in belief. What’s virtuous is to believe, not to believe in believing; and so if you only believe in belief, instead of believing, you are not virtuous. Nobody will admit to themselves, “I don’t believe the Ultimate Cosmic Sky is blue and green, but I believe I ought to believe it”—not unless they are unusually capable of acknowledging their own lack of virtue. People don’t believe in belief in belief, they just believe in belief.
(Those who find this confusing may find it helpful to study mathematical logic, which trains one to make very sharp distinctions between the proposition P, a proof of P, and a proof that P is provable. There are similarly sharp distinctions between P, wanting P, believing P, wanting to believe P, and believing that you believe P.)
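As a compact illustration of those distinctions, in standard provability-logic notation (a sketch added here for illustration; the symbols are conventional shorthand, not the essay's own):

```latex
% Three different objects, which mathematical logic keeps sharply apart:
%   P             -- the proposition itself
%   \vdash P      -- "there is a proof of P"
%   \vdash \Box P -- "there is a proof that P is provable"
\[
P \qquad\qquad \vdash P \qquad\qquad \vdash \Box P
\]
% The analogous distinctions for an agent's attitudes toward P:
\[
P,\quad \mathrm{Want}(P),\quad \mathrm{Bel}(P),\quad \mathrm{Want}(\mathrm{Bel}(P)),\quad \mathrm{Bel}(\mathrm{Bel}(P))
\]
```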
There are different kinds of belief in belief. You may believe in belief explicitly; you may recite in your deliberate stream of consciousness the verbal sentence “It is virtuous to believe that the Ultimate Cosmic Sky is perfectly blue and perfectly green.” (While also believing that you believe this, unless you are unusually capable of acknowledging your own lack of virtue.) But there are also less explicit forms of belief in belief. Maybe the dragon-claimant fears the public ridicule that they imagine will result if they publicly confess they were wrong.2 Maybe the dragon-claimant flinches away from the prospect of admitting to themselves that there is no dragon, because it conflicts with their self-image as the glorious discoverer of the dragon, who saw in their garage what all others had failed to see.
If all our thoughts were deliberate verbal sentences like philosophers manipulate, the human mind would be a great deal easier for humans to understand. Fleeting mental images, unspoken flinches, desires acted upon without acknowledgement—these account for as much of ourselves as words.
While I disagree with Dennett on some details and complications, I still think that Dennett’s notion of belief in belief is the key insight necessary to understand the dragon-claimant. But we need a wider concept of belief, not limited to verbal sentences. “Belief” should include unspoken anticipation-controllers. “Belief in belief” should include unspoken cognitive-behavior-guiders. It is not psychologically realistic to say, “The dragon-claimant does not believe there is a dragon in their garage; they believe it is beneficial to believe there is a dragon in their garage.” But it is realistic to say the dragon-claimant anticipates as if there is no dragon in their garage, and makes excuses as if they believed in the belief.
You can possess an ordinary mental picture of your garage, with no dragons in it, which correctly predicts your experiences on opening the door, and never once think the verbal phrase There is no dragon in my garage. I even bet it’s happened to you—that when you open your garage door or bedroom door or whatever, and expect to see no dragons, no such verbal phrase runs through your mind.
And to flinch away from giving up your belief in the dragon—or flinch away from giving up your self-image as a person who believes in the dragon—it is not necessary to explicitly think I want to believe there’s a dragon in my garage. It is only necessary to flinch away from the prospect of admitting you don’t believe.
If someone believes in their belief in the dragon, and also believes in the dragon, the problem is much less severe. They will be willing to stick their neck out on experimental predictions, and perhaps even agree to give up the belief if the experimental prediction is wrong.3 But when someone makes up excuses in advance, it would seem to require that belief and belief in belief have become unsynchronized.
1 Daniel C. Dennett, Breaking the Spell: Religion as a Natural Phenomenon (Penguin, 2006).
2 Although, in fact, a rationalist would congratulate them, and others are more likely to ridicule the claimant if they go on claiming there’s a dragon in their garage.
3 Although belief in belief can still interfere with this, if the belief itself is not absolutely confident.
Eliezer, my understanding is: “belief is to believe in something”.
Whether you call it science fiction, heuristics, overcoming bias, history, a belief is a belief. You can’t prove belief as it’s self-subjective. You can’t tell someone what they feel is wrong. Each individual has their equation when it comes to understanding the “dragon” within themselves. If dragons can’t be verified, as they have never been verified based on history, why do people still feel the need to believe in dragons and continue to discuss the subject and be fascinated by it?
Just Curious Anna
Anna, this blog is too advanced for you and you should not be commenting on it. Go read The Simple Truth until you understand the relation between a map and the territory.
[EDIT: I deleted an additional comment from Anna in this thread.]
(Updated link: The Simple Truth)
Oh great, now I’m going to think “There’s no dragon in my garage” every time I open my garage door for the next week...
I enjoyed The Simple Truth, thanks for linking it.
“If the pebbles didn’t do anything,” says Autrey, “our ISO 9000 process efficiency auditor would eliminate the procedure from our daily work.”
This “ISO 9000” hypothesis has not been supported by direct observation, unfortunately...
From the post: “But we need a wider concept of belief, not limited to verbal sentences. ‘Belief’ should include unspoken anticipation-controllers. ‘Belief in belief’ should include unspoken cognitive-behavior-guiders.”
If you’ve read Dennett on beliefs, you’ll appreciate that this “wider concept” based on behavior and predictability is really at the heart of things.
I think it is very difficult to attribute a belief in dragons to this “dragon-believer”. Only a small subset of his actions—those involving verbal avowals—makes sense if you attribute a belief in dragons to him. There is a conflict with the remainder of his beliefs, as can be seen when he nonchalantly enters his garage, or confabulates all sorts of reasons why his dragon can’t be demonstrated.
But as you have shown, everything makes sense if you attribute a related, but slightly different belief, namely “I should avow a genuine, heartfelt belief in dragons”. Perhaps we can say that this man (and the religious man, since this is the real point) doesn’t just believe in belief; he believes that he believes. He tries to make a second-order belief do the work of a first-order belief.
How does this compare with Popper’s theory? In the instance above, it’s clear that belief in belief doesn’t make sense. But things may not be as clear. Won’t an event with low probability look like the invisible dragon before it happens?
Anna, if you’re talking about real dragons, the theory that made the most intuitive sense to me (I think I read it in something by E. O. Wilson?) is that dragons are an amalgamation of things we’ve been naturally selected to biologically fear: snakes and birds of prey (I think rats may have also been part of the list). Dragons don’t incorporate elements that look like handguns or piping-hot electric stoves, probably because those are too new as threats for us to have been naturally selected to fear things with those properties.
Eliezer, Very interesting post. I’ll try to respond when I’ve had time to read it more closely and to digest it.
I want my wife to read this, but I don’t think she’d believe it.
This post helps me understand some of the most infuriating phrases I ever hear (which the title immediately reminded me of): “it doesn’t matter what you believe as long as you believe something”, “everyone has to believe in something”, “faith is a virtue”, &c. It makes sense that if a person’s second-order belief is stronger than their first-order belief, they would say things like that.
Was the reply to Anna serious? That’s outrageous.
Shocked: it wasn’t my first interaction with her.
I like Eliezer’s essay on belief very much. I’ve been thinking about the role of belief in religion. (For the sake of full disclosure, my background is Calvinist.) I wonder why Christians say, “We believe in one God,” as if that were a particularly strong assertion. Wouldn’t it be stronger to say, “We know one God”? What is the difference between belief and knowledge? It seems to me that beliefs are usually based on no data. Most people who believe in a god do so in precisely the same way that they might believe in a dragon in the garage. People are comfortable saying that they know something only when they can refer to supporting data. Believers are valiantly clinging to concepts for which the data is absent.
Regarding the dialogue between the dragon claimant and his challengers, why didn’t the challengers simply ask the claimant, “Why do you say that there is an invisible, inaudible, non-respirating, flour-permeable dragon in your garage?”
Knowledge involves more than belief. You know p if all of the following are true:
1) You believe p.
2) p is true.
3) If p were not true, you wouldn’t believe it (the sensitivity condition).
4) If p were true, you would believe it (the adherence condition).
(The last two are Nozick’s “tracking” conditions, meant to rule out Gettier cases.)
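Compactly, writing B for belief and the subjunctive conditional as a boxed arrow, the four conditions read as follows (a sketch of Nozick’s tracking account, added only for illustration):

```latex
% Nozick's truth-tracking analysis of knowledge:
\[
K(p) \;\iff\; B(p) \,\wedge\, p \,\wedge\,
  \bigl(\neg p \mathrel{\Box\!\!\rightarrow} \neg B(p)\bigr) \,\wedge\,
  \bigl(p \mathrel{\Box\!\!\rightarrow} B(p)\bigr)
\]
% where \Box\!\!\rightarrow is the counterfactual
% "if it were the case that ..., then ..."
```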
And most beliefs, such as the belief that my keys are in my left pocket, are trivial and true, as well as being based on data.
At least in my mind, the processes that generate beliefs like “my keys are in my left pocket” are not perfectly reliable—at least once, I have thought my keys were in my left pocket when in fact I left them on the dresser.
So #3 is demonstrably false for me; on this account, I don’t know where my keys are.
Which is perfectly internally consistent, though it doesn’t match up with the colloquial usage of “to know,” which seems to indicate that the speaker’s confidence in p is above some threshold.
There’s nothing wrong with having precisely defined terms of art, in epistemology or any other field. But it can lead to confusion when colloquial words are repurposed in this way.
Add “with high probability” everywhere.
And also, “How do you know.”
Your question is more helpful, of course. Any person who believes that there is a non-evidentiary dragon in a garage will have some way to answer mine, hopefully without going through too much more stress.
I’ve got a double garage… what if the dragon sneaks out one door while I’m coming in through the other door, then comes in behind me through the second door while I look for it outside the first door?? Dragons everywhere now!!
The parable was original with Antony Flew, whose “Theology and Falsification” can be found here: http://www.stephenjaygould.org/ctrl/flew_falsification.html
The parable you refer to by Sagan should, I think, be attributed to Antony Flew, whose “Theology and Falsification” is available online: http://www.stephenjaygould.org/ctrl/flew_falsification.html
Trying to work out the biases of the new antispam filter. Frequency of comments from the same individual in the same thread?
That last one got through, so let’s try: random malfunction?
Anna: “Whether you call it science fiction, heuristics, overcoming bias, history, a belief is a belief. You can’t prove belief as it’s self-subjective.” —That only makes for more of a reason it should be self-affecting only; too many people try to influence the actions of others based on their dragons.
“You can’t tell someone what they feel is wrong.” —Yes I can. “There’s a dragon in my bathroom”… (careful examination of the bathroom)… “No, there isn’t, you’re wrong.”
“Each individual has their equation when it comes to understanding the ‘dragon’ within themselves.” —And it needs to be overcome with logic and reason.
“If dragons can’t be verified, as they have never been verified based on history, why do people still feel the need to believe in dragons and continue to discuss the subject and be fascinated by it?” —Simply because they were raised with it, taught to disbelieve any evidence provided, and shown that those who disagree with it are ‘out to get them’.
If there exists a conventional syntax for quoting another comment, I think a commenter should use that convention rather than invent his own syntax, don’t you?
There is a distinction between Belief and Knowledge. We can believe things that are untrue and disbelieve things that are true.
At least in the case of religious people who are actually convinced God exists, I think the difference between belief and knowledge is this: Belief is when you think something is true but it’s controversial. Knowledge is when you think something is true and think everyone would agree with you.
I knew as soon as I read the first paragraph that the comments would start discussing religion, haha...
John Rozendaal, I never really thought about the “We know”/“We believe” distinction before. Seems to me like the church conditions people to think of their ideals as a belief, so that the thought of ‘knowledge’ won’t creep into their minds and make them think. Thinking is the church’s worst enemy.
I believe the difference between belief and knowledge stems from experience. Knowledge is accepted as proven data, whether from the experience of others or through your own direct experience. Beliefs are accepted as proven only to varying levels of personal satisfaction, inferred from experience, depending on the person and the belief.
This makes the two rather interdependent in my view, for how can one know without believing that his knowledge is truth and how can one believe without knowing something to believe in?
In the case of modern religion, people take the belief others express as knowledge and use it as a basis for their own beliefs.
X: There’s something that makes believing and knowledge quite different, and that’s truth, which isn’t inside one person’s head but out there, in reality. I’m sure that if we ask this man if he knows there is a dragon in the garage he will reply affirmatively, no doubt about it, but the truth is that there is no dragon and he just thinks he knows it’s in there. The man doesn’t know anything; he believes a lie, and he is making excuses to protect the lie. One of those excuses is claiming that he knows it’s in there, not that he believes it.
I think this is one of humanity’s greatest weaknesses: the need to detach from reality and defend beliefs that are obviously wrong. I understand the psychological need to do so, but in my opinion it is still a sign of weakness. As Eliezer said, we should find joy in what is real.
“There’s something that makes believing and knowledge quite different, and that’s truth, which isn’t inside one person’s head but out there, in reality.”
Ehm, let me ask you this: Are you 100% sure that the sun will come up tomorrow?
All evidence points that way, yes. We have a fair idea of what is going on, yes. But that’s where the ball stops—we will never know with 100% certainty.
When we stop acknowledging that the science of tomorrow may produce evidence that will turn our whole world-view upside down, is when Science becomes Religion.
I’m not saying that we need to start taking mediums seriously and base our life-altering decisions on numerology. I’m merely saying that the things you take seriously today, the things you’d base your life-altering decisions on today, may be falsified tomorrow, redeemed the week after, only to be shot down again with the latest research come this time next year.
The ‘Truth’ may be out there, but it needs to be approached empirically, with a clear understanding of the fact that even repeated measurements of the same thing will only ever give us circumstantial evidence that may be influenced by our abilities to measure and reason. I don’t think we ever possess true knowledge. Instead we have beliefs that can or cannot stand up to empirical scrutiny. Beliefs that must still be challenged on a regular basis, and acknowledged for what they are.
Does the idea that it is a good thing to subject our beliefs (and even our belief in belief) to logical and analytical scrutiny count as belief in itself or is it so justifiable as to count as knowledge? If so, what is the justification?
I don’t think it does. Scrutinizing your beliefs is a corollary—it naturally follows if you believe that “Truth is good and valuable and its pursuit is worthwhile.” We value truth, we want our maps to match the territory, and so we scrutinize our beliefs. If anything needs to be justified, it’s the value placed on truth and knowledge thereof.
And that’s actually an interesting problem. Although my intuition shouts TRUTH IS GOOD, there’s not much I can say to prove that statement, outside of “It’s useful to be able to make accurate predictions.” It seems like the goodness of truth ought to be provable some way. But maybe it’s just a moral value that our (sub-)culture happens to hold particularly strongly? Perhaps someone better versed than I am in the arts of rationality can give a better answer.
Prove in which way? Not to mention that you need to define “good” first.
Would the observation that people who disregard “truth is good” rarely survive for long be considered a kinda-proof? :-)
I’ve always thought that the idea of “believing in” things was very curious. This is a very thought-provoking article. Every time I engage a debate about this subject (the relevance or usefulness of beliefs) someone is sure to say something about beliefs existing for the benefit of the believer. My feeling is that with most beliefs and with most believers, there is an internal acknowledgement of the falsifiability of their belief which is outweighed by the perception that some benefit is derived from the belief. What I interpret from this is that most believers subtly admit their own practice of belief in belief. I also feel that at such an admission, even the question of whether one believes in believing in belief can enter the mind of the mundane thinker.
Do you believe in anything, or is it all feeling and knowing?
The real question is not “is there a dragon?”, but “why is it having sex with my car?”
Chelo: “I don’t think we ever possess true knowledge.”
I KNOW I went to Tesco’s this morning. Am I wrong? Discuss!
Main post: “The claimant must have an accurate model of the situation somewhere in their mind, because they can anticipate, in advance, exactly which experimental results they’ll need to excuse.”
I know this is a bit of a side issue, but how do you justify this claim from the example given? You don’t need such a model to give the answers he gives. Surely you once engaged in late-night undergraduate pseudo-intellectual discussions where you held an ultimately untenable viewpoint but still fended off such questions on the fly?
Perhaps though this is just a problem arising from the rather simplistic metaphor. A dragonista can postulate a dragon and then, as in your example, refute all challenges by simply denying all interactions with the real world, although then of course he’s not really saying anything at all. The religionist has a much more difficult trick to perform. He cannot take the dragonista’s line as his god must interact in some way with the world to have any meaning. He is faced with having to reconcile the interactions he needs from his god (e.g. responses to prayer) with the apparent absence of physical evidence for them. This DOES require the building of the consistent framework you propose, so that he can fend off new challenges without falling into a trap which concedes the non-existence of his god. The convolutions exhibited by fundamentalist Christians when trying to construct such a reconciliation between what they need to believe and the contrary evolutionary evidence are a better example of this.
“Those who find this confusing may find it helpful to study mathematical logic, which trains one to make very sharp distinctions between the proposition P, a proof of P, and a proof that P is provable”
This is a bit of a side question, but wouldn’t a proof that P is provable be a proof of P? In fact, it sounds like a particularly elegant form of proof.
If you trust base system B, then a proof that P is provable in B is good as gold to you. But it is not a proof in B.
http://lesswrong.com/lw/t6/the_cartoon_guide_to_l%C3%B6bs_theorem/
Hrm… if the system isn’t necessarily trustworthy, then the fact that the system proves that it can prove P doesn’t mean that it actually can prove P, I guess.
EDIT: actually, having “If this proves P, then P” as an explicit axiom runs you into trouble in any system that has something like Löb’s theorem.
(“if some specific subset of the rest of this system, (ie, other than this axiom) proves P, then P” can potentially be okay, though)
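For reference, the theorem invoked here has a compact standard statement (this is the textbook formulation of Löb’s theorem, added for illustration; it is not part of the original thread):

```latex
% Löb's theorem: for a theory T extending Peano Arithmetic, with
% provability predicate \Box:
\[
T \vdash (\Box P \rightarrow P) \quad\Longrightarrow\quad T \vdash P
\]
% Consequence: adopting the reflection schema (\Box P -> P) as an axiom
% for every P would let T prove every sentence P, making T inconsistent;
% hence the trouble with the axiom "if this proves P, then P".
```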
Seconded—this is an interesting question. (And I suspect that there are some interesting cases in which a proof that P is provable does not constitute a proof, but this is mainly because I’ve seen mathematicians break similarly intuitive propositions before.)
It wouldn’t surprise me either. However, such cases would have to rely on a precise definition of ‘proof’ different from the one I use. The result would then be filed under ‘fascinating technical example’ but not under ‘startling revelation’, and I would take note of the jargon for use when talking to other mathematicians.
Here’s an example from Douglas Hofstadter’s I Am a Strange Loop: Kurt Gödel discovered that Principia Mathematica by Russell and Whitehead can refer to itself. Russell’s book yields propositions and their proofs; Gödel then assigns specific numbers to formulas and proofs, so that “there is a proof of P” becomes an arithmetical statement, one that can itself be proven within the system.
Outside of mathematics, a statement that is provable is also disprovable. Then it’s called a hypothesis.
I’m reminded of the joke where an engineer, a physicist, and a mathematician are going to a job interview. The interviewer has rigged a fire to start in the wastepaper basket, to see how they react in a crisis situation. The engineer sees the fire, sees the water cooler, grabs the water cooler and dumps it on the fire. The physicist sees the fire, sees the water cooler, grabs pencil and paper, calculates the exact amount of water needed to extinguish the fire, then pours that amount of water into the basket, exactly extinguishing the fire. The mathematician sees the fire, sees the water cooler, and says, “Ah! A solution exists!”.
Blarg… okay this one is tripping me up. There are two parts to this comment. The first part is quasi-math; the other is not. It is very much a brain dump and I have not edited it thoroughly.
EDIT: I think I managed to get it cleared up and responded with a second comment. FYI.
Let B(X) mean belief in X, where belief is defined as a predictor of reality, so that believing X means predicting that reality contains event X. Using “There is a dragon in my garage” as X we get:
B(“There is a dragon in my garage.”)
B(“There is not a dragon in my garage.”)
I think it is okay to write the latter as:
B(~X) where X is “There is a dragon in my garage.”
So far okay and both can be verified. The problem comes when X is “There is an unverifiable dragon in my garage.”
B(“There is an unverifiable dragon in my garage.”)
B(“There is not an unverifiable dragon in my garage.”)
Both of these are unverifiable, but the latter is okay because it matches reality? As in, we see no unverifiable dragon so the ~X is… what, the default? This confuses me. Perhaps my notation is wrong. Is it better to write:
B(X)
~B(X)
If B(X) is belief in X, B(~X) != ~B(X). This way we can throw out the unverifiable belief without creating a second unverifiable belief. All of this makes sense to me. Am I still on track with the intent of the post? This implies that B(X) and B(~X) are equally unverifiable when X is unverifiable.
Next is belief in belief:
B(B(X))
I think you are arguing that B(B(X)) does not imply B(X). But are you also saying that B(X) implies B(B(X))? And this is how people can continue to believe in something unverifiable?
I feel like I am drifting far away from the purpose of this post. Where did I misstep?
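A toy model of the distinction being juggled above may help (a hypothetical sketch, not from the post: beliefs as a set of proposition strings, so that B(~X), ~B(X), and B(B(X)) all come apart):

```python
# Toy model: an agent's beliefs are a set of proposition strings.
# "Not(X)" in the set means the agent believes X is false (B(~X));
# X merely being absent means the agent holds no belief in X (~B(X)).

def neg(x: str) -> str:
    """The negation of proposition x."""
    return f"Not({x})"

def bel(x: str) -> str:
    """The proposition 'I believe x' (for nesting, as in B(B(X)))."""
    return f"Bel({x})"

def believes(beliefs: set, x: str) -> bool:
    return x in beliefs

dragon = "There is a dragon in my garage"

agnostic = set()          # holds neither B(X) nor B(~X)
denier = {neg(dragon)}    # holds B(~X), a positive belief
claimant = {bel(dragon)}  # holds B(B(X)) without holding B(X)

assert not believes(agnostic, neg(dragon))  # ~B(~X) is not the same as B(~X)
assert believes(denier, neg(dragon))
assert not believes(claimant, dragon)       # no first-order belief...
assert believes(claimant, bel(dragon))      # ...only belief in belief
```

On this picture, B(~X) != ~B(X) is just membership of a different element versus non-membership, and B(B(X)) does not imply B(X).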
Here is my second attempt, this time with no math:
Would there be any experimental results that he wouldn’t need to excuse? Is there some form of invisiodragonometer that beeps when he goes into his garage? Would the scenario change at all if the subject were genuinely surprised when no sounds of breathing were heard and the oxygen levels remained the same, yet still offered up excuses of “inaudible” and “non-breathing”? How would the typical believer in atoms defend their existence if we wandered into the garage and complained about no breathing sounds?
I can think of simple answers to all of these questions, but it makes me think less of the usefulness of your conclusion. When I think of unverifiable beliefs I think of examples where people will spend their whole life looking for physical proof and are constantly disappointed when they do not find it. These people don’t have an accurate model of the situation in their mind. The example of invisible dragons still applies to these people while your claim that they dodge in advance does not seem to apply.
So… again, I feel like I am missing some key point here.
I can think of examples where someone fully admits that they believe it would be better to believe X, but, as hard as they try and as much as they want to, they cannot. These people are often guilt-ridden and have horrible, conflicting desires, but it doesn’t take much imagination to think of someone who simply states the belief in belief X without emotion but admits to not believing X. At least, I can hear myself saying these words given the right circumstances.
Believing in belief of belief seems like something else entirely, unrelated to dragons in garages or unverifiable beliefs. This, again, makes me feel as if I am missing a crucial piece of understanding throughout all of this. If I had to take potshots at the missing pieces I would aim toward the definitions of belief. Specifically, what you are calling beliefs aside from predictors of reality. (And even there, I do not know if I have a correct translation.)
I do not know if you have any desire to discuss this subject with me. Perhaps someone else who knows the material is willing? I sincerely apologize if these types of responses are frustrating. This is how I ask for help. If there is a better way to ask I am all ears.
The idea here is that if you really believed you had an invisible dragon in your garage, if somebody proposes a new test (like measuring CO2), your reaction should be “Oh, hey! There’s a chance my dragon breathes air, and if so, this would actually show it’s there! Of course, if not, I’ll need to see it as less likely there’s an invisible dragon.”
If instead, your instant reaction is always to expect that the CO2 test returns nothing, and to spend your first thoughts (even before the test!) coming up with an excuse why this doesn’t disconfirm the dragon… then the part of you that’s actually predicting experiences knows there isn’t actually a dragon, since it instantly knows that any new test for it will come up null.
Do people actually do that? I couldn’t think of anyone I know who would do that. I finally came up with an example of someone I know who has a belief in belief, but it still doesn’t translate into someone who acts like you described.
I am not saying it is impossible; I’ve just never met anyone who acted like this and wasn’t blatantly lying (which I am assuming disqualifies them from belief in belief).
Umm… have you met a religious person? As soon as you mention anything about evidence or tests, they’ll tell you why they won’t/don’t work. These sorts of excuses are especially common if you talk about testing the efficacy of prayer.
99% of the people I know are religious. This isn’t an exaggeration. I can think of 2 or 3 that aren’t. (This doesn’t count online interactions.)
So… I guess I will just repeat what I said earlier. I’ve never met anyone who acted like this and wasn’t blatantly lying.
You have to realize that “evidence or tests” does not mean the same thing to them as it does to you. They have been conditioned against these words. If the belief is something as vague as “God will show up during worship,” you cannot ask, “What evidence do you have for this?” This puts them on an immediate defensive because they are used to jerks asking the questions.
This has little to do with quests for belief. It has more to do with the arguments-as-soldiers concept. This is an important point. Please don’t dismiss it without thinking about it.
The appropriate way to ask the question is to ask for details about how God is showing up and act enthusiastic. “Every time? Wow! How do you know? Does this happen at other worship services? Has it always happened here? Does he show up stronger at some than others? Which ones are the best? Does he say anything to you?”
If this sounds silly to you, then you aren’t getting it.
If you bring in a CO2 meter and expect to find God you will be called crazy by the people who believe in him. This is completely different than the dragon-in-the-garage example.
Prayer is the best example where I have seen Christians start getting frustrated. Not because they are coming up with excuses, but because they don’t understand why it isn’t working. The people I know actually expect something. If I were to ask them whether we could see a statistical difference in a study on the effects of prayer they would answer, “Of course!” The problem with these people is that when the results come back negative they will start explaining away the numbers. If, later, you go back and try the same trick they will just repeat the explanations they used last time.
People who answer, “No.” are more likely to be praying without thinking it will actually work. They are praying for religious purposes.
Not that I’m saying that religious people don’t do this. If you can provide an example that would be great.
I have close friends who are religious, and something that always struck me as both odd and tragic is how they treat their prayers vs. the prayers of others.
When someone else laments that their prayers have not been answered, they reassure them and encourage them to continue praying.
When their own prayers are not answered, they get frustrated and worry that somehow they’re failing God and that they don’t deserve to have their prayers answered.
For others, they act like no excuse is necessary (“God has a plan”), but for themselves they look for one (“I’ve been lax in my faith”).
This is good evidence for the “belief in belief” theory, but is kind of a bummer to think about (How would you feel if you knew the person reassuring you about your prayers actually had the same frustration as you?).
What’s even more of a bummer is how often priests/pastors/etc. get asked “Why does God talk to everyone but me?”
The explanation is that they are just trying to make their friend feel better. You cannot make yourself feel better with the same trick because you know you are secretly condemning your friend for being lax in their faith. You could deny that, I suppose, but I see this more as hypocritical than anything else.
Also, this is significantly more common in certain denominations than others. Some denominations have entire books that solely address this problem.
I don’t think that the people I know are secretly condemning their friends for being lax in their faith. It’s like they feel constant guilt, and don’t identify their bad situations as caused by the same things other peoples’ bad situations are.
Kind of like chalking someone else’s bad behavior up to character flaws but your own to bad circumstances.
Your point about certain denominations is well taken; my friends are almost exclusively one.
I use different language if I’m talking to a theist. Usually I ask something like, “Do you think prayer works?” They say, “Yes.” I say, “So if there was a group of people with some disease, we should expect those who were prayed for to be more likely to get better, right?” The conversation branches here. Either they say, “No” because they know about the studies that have been done, or they say, “Yes,” I mention the studies, and they say something about how you can’t put God to the test.
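The test proposed in that exchange is just a two-group comparison of recovery rates; here is a minimal sketch of what it would look like (the counts and the use of scipy are illustrative assumptions, not data from any actual study):

```python
# Hypothetical comparison: recovery counts in prayed-for vs. control group.
from scipy.stats import fisher_exact

recovered_prayed, total_prayed = 52, 100      # made-up counts
recovered_control, total_control = 48, 100    # made-up counts

table = [
    [recovered_prayed, total_prayed - recovered_prayed],
    [recovered_control, total_control - recovered_control],
]
odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")

# A belief that genuinely anticipates "prayer works" predicts a real
# difference between the rows; belief in belief predicts an excuse
# prepared before the p-value ever comes back.
```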
No it is not. Their reaction is more emotionally charged than in the dragon example. The theists have a belief, but anticipations guided by not-belief.
Another example: One of my friends is studying to be a Catholic priest. He believes in evolution. Of course I couldn’t help but ask him if he thought (non-human) animals went to heaven. He said no. “Ah-ha!” I thought, “The trap is set!”
Me: “So there had to be some point in evolution where two hairy proto-humans gave birth to a slightly less hairy human. Even though they only differed from each other as much as we differ from our parents, the proto-human parents didn’t have souls and the child did. If the child went to heaven, he would ask God where his parents went.”
Friend: “Yes.”
Me: o_O
Well at least he was consistent. Later I asked him about the efficacy of prayer and he said it worked as long as you weren’t doing a test to see if it worked. How convenient.
ETA: Oh and he doesn’t think cryonics will work since the soul leaves the body at death. Also he believes strong AI is impossible.
This is the best example I have seen yet, but I am still not convinced that the problem is with anticipations not being guided by beliefs. He still anticipates something but is willing to amend the wrong side of the experiment when something goes weird.
But yeah, this is a much clearer example. I can think of a bunch of people I know who act like this.
The rest of this comment is nitpicking over something only slightly related.
This sentence will trigger the conditioning I was talking about. This is the exact wrong way to talk to someone about the subject.
Those who say “No” because they know about the studies are not like the dragon example. They would have to say no before they knew about the studies. And “studies” here includes every single failed prayer from their own lives.
If you found someone who had absolutely no good reason to doubt prayer, they would expect the studies to show that prayer works. A pre-dodge of the experiment is much more likely to point to previous encounters with experiments than to anticipations hooked up to not-beliefs.
Those who say “Yes” are now amending their belief to fit the facts. This is not like the dragon example.
Stop trying to trap people. It is petty, rude, and just makes the world worse. Most people, even theists, are willing to talk about their beliefs if they don’t feel defensive. People can smell a trap coming as soon as they see someone’s face. As soon as they get defensive, the conversation becomes a war. This is bad.
Really, the fact that you seem so surprised by this answer makes me think you have no idea what your friend believes. When your predictions of answers to technical questions are off enough to make you go o_O, you may want to start looking at your predictors.
Sigh. I am sorry for jumping at you. I don’t really have a good excuse, but I am sorry anyway.
But it’s fun! At least it’s fun between friends. Remember that my friend got the last laugh in my trap example. We both know we’re not going to convince each other, but it’s still fun to play argument chess.
Just to balance things out, I’ll give you an example of a trap my friend set for me.
Me: (Starts to explain transhumanism. Quotes EY saying, “Life is good, death is bad. Health is good, sickness is bad.” etc)
Friend: “If life is good and death is bad, then isn’t suicide wrong in your view?”
Me: “Umm… I guess that’s a bit of an edge case.”
On reflection, I do now wonder if it’s better to modify someone’s mind so that they are no longer suicidal than let them kill themselves. After all, death is a much bigger change than a few erased memories.
Sorry, I forgot about the difference between explaining away misconceptions about religious belief/practice and speaking in a descriptive positive way about them. I (try to remember to) self-censor the latter.
Your observation and orthonormal’s observations are correct: religious people often expect and claim that evidence for God is impossible. This is because when they say he exists, they mean existence in a different sense than what you think of.
It’s Gould’s separate magisteria. Physical materialism rejects the separate magisteria, and I’m convinced that it is self-consistent in doing so. However, dualists do believe in the separate magisteria and you cannot try to interpret their beliefs in the context of monism—it just comes out ridiculous.
Religious people who have assimilated the idea of separate magisteria think that the religious fundamentalists, who expect there could be evidence of God and actually expect science to conform to ‘true’ religious belief, are kind of crazy.
Ironically, physical materialists seem to have more affinity for, and focus much more on, the theistic beliefs of the latter (crazy) group. I don’t know what fraction of believers each group comprises (I do know that the fraction of Bible literalists increases substantially as you move further south in the United States), but my impression is that the separate-magisteria set are much more concerned with rationality and self-consistency, so you’re more or less ignoring the group that would listen to you and with which you could have an interesting (albeit frustrating) conversation.
I’ll go back and read ‘belief in belief’, but I don’t think you have a good interpretation of the people who expect no evidence of God.
It is not possible to interpret “separate magisteria” as different kinds of stuff, one “empirical” and one “non-empirical”. What they are, rather, is different rules of thinking. For example, prayer can often help and never hurt in individual cases, but have no effect in the aggregate (e.g. when surveys are performed). There’s no consistent model that has this attribute, but you can have a rule for thinking about this “separate magisterium” which says, “I’ll say that it works and doesn’t hurt in individual cases, but when someone tries to survey the aggregate, I won’t expect positive experimental results, because it’s not in the magisterium of things that get positive experimental results”.
Mostly, “separate magisterium” is the classical Get-Out-Of-Jail-Free card. It can’t be defined consistently. Mostly it means “Stop asking me those annoying questions!”
This division, needless to say, exists in the map, not in the territory.
I agree. Dualism is simply incoherent within the empirical framework.
It also explains why they don’t expect a CO2 detector to work or have any relevance.
If, in quantum mechanics, we can say that something doesn’t happen unless it’s observed, why can’t we say that prayer works only if it isn’t observed (in the aggregate)? They seem equally mysterious claims to me.
Indeed, certain interpretations of quantum mechanics (for example, non-local action at a distance) point to dualism. You don’t even need to be quite so exotic: spontaneous particle creation in a vacuum would be evidence that X isn’t closed or complete. These are real and interesting problems at the interface of science and philosophy. It doesn’t minimize physical materialism to acknowledge this.
(I keep saying that I agree that dualism is incoherent—likewise I think that some interpretations of quantum mechanics and the existence of any truly random processes would be incoherent as well, for equivalent reasons.)
Oh yes, ‘Belief in Belief’, recalled vividly as I reread it.* I liked this post (up-voted). It’s a bit of physical-materialist solidarity-building, an opportunity taken to poke fun at the out-group. The material being made fun of is taken out of context only to the extent that material coherent to one world-view is transplanted without translation to another.
As Eliezer aptly put it,
The thing is, not everyone owns this. The out-group you’re making fun of doesn’t. The appeal of this view is the paradigmatic strength of physical materialism.
No one has mentioned the idea of dual magisteria as a better explanation for this behavior in these comments until now: that no evidence is expected for the dragon because the dragon isn’t just invisible, its existence is non-empirical.
John Mark Rozendaal simultaneously gets the paradigm difference and dismisses it:
This is an especially useful quote because it highlights how many religious groups (John’s background was Calvinist) realize their beliefs are non-empirical and find this virtuous. That’s why they say they believe in one God, rather than know of one God—faith in something non-empirical isn’t just tolerated, it’s what their faith is about.
* I was about to link to the post ‘Belief in Belief’ and then realized this is where I am… these Less Wrong wormholes take some getting used to!
I am in occasional contact with religious people, and they don’t behave as the “separate magisteria” hypothesis would predict.
For instance, I have heard things along the following lines: “I hope my son gets better.” “Well, that’s not in your hands, that’s in God’s hands.” All this said quite matter-of-factly.
There is active denial here of something that belongs in the magisterium of physical cause and effect, and active presumption of interference from the supposedly separate magisterium of faith.
Of course, most of those people back down from the most radical consequences of these beliefs: they still go see a doctor when the situation warrants—although I understand a significant number do see conflict (or at least interaction) between their faith and medical interventions such as organ transplants or blood transfusions.
This isn’t just an epiphenomenal dragon, it’s a dragon whose proscriptions and prescriptions impinge on people’s material lives.
I do not think this is the best example you could have given, because it can be interpreted—and often is meant as—just a version of the Serenity Prayer.
Much worse is when people promise to pray for you, or advise you to pray, as though this will improve the chances of everything turning out OK. In these cases, I try to just focus on their good intentions; that they will pray for me because they do care. However, sometimes I really do get quite upset with having to pretend that I’m grateful for and satisfied with their prayers when perhaps I would like more sympathy and emotional support or pragmatic help.
Think of the relation between the magisteria as a one-way relationship. The supernatural can affect the natural but there is no way to move backwards into the supernatural.
This is flat wrong and doesn’t accurately describe the theology/cosmology of most theists, but it helps when using the concept of magisteria. Personally, I don’t think the term magisteria is completely useful in this context.
There is a deep problem behind all of these things where one layer or set of beliefs trumps another. In a framework of map/territory beliefs this makes little sense. It certainly doesn’t translate well when talking to someone who doesn’t adhere to a map/territory framework.
An example: If you asked the person why God didn’t make your son get better, you would get a bazillion answers. Likewise, if you asked about taking your son to the hospital, they would tell you that you should. These two beliefs aren’t in conflict in their system.
I have watched an entire congregation pray for someone who had cancer. They earnestly believed that their prayer was having some effect, but if you asked for particulars you would get the bazillion answers. These people are not trying to explain away a future answer. They have seen what appear to be a bazillion different endgames for the scenario they are now in. That, mixed with the crazy number of factions within Christian theological circles, isn’t going to make sense within a map/territory framework. But they aren’t using that framework.
The weak assumption in the dragon example is that the believer in the dragon hasn’t already tried using a CO2 meter. Don’t underestimate the number of historical questions packed behind the confusing answers you get when you ask someone to prove their dragon exists.
That being said, the dragon example does bring up a very awesome and valid point. If I took a few of those people who were in that congregation who prayed about cancer and asked them years later about the prayee’s status… what would they say? Would they expect a change in their state? Would the cancer be gone? What do they expect from the prayer? My guess is that they wouldn’t make any prediction.
Then the natural can perceive the supernatural but not vice versa. To perceive something is to be affected by it.
The real problem with those who go on about separate magisteria is that they are emitting words that sound impressive to them and that associate vaguely to some sort of even vaguer intuition, but they are not doing anything that would translate into thinking, let alone coherent thinking.
I’m sorry to be brutal about this, but nothing I have ever heard anyone say about “separate magisteria” has ever been conceptually coherent let alone consistent.
There’s just one magisterium, it’s called reality; and whatever is, is real. It’s a silly concept. It cannot be salvaged. Kill it with fire.
I grew up as a Mormon; they have a very different view of God than most Christians.
God is an “exalted man”, essentially a human that passed through a singularity. Also, regarding spirits: “There is no such thing as immaterial matter. All spirit is matter, but it is more fine or pure, and can only be discerned by purer eyes. We cannot see it; but when our bodies are purified we shall see that it is all matter.” Spirits are “children” of God, literally progeny in some sense. Spirits are attached to human bodies, live life as mortal beings, and then separate, retaining the memories of that time; the promise of the resurrection is a permanent fusing of spirit matter to undying bodies made of normal matter, and exaltation, reserved for those who prove worthy, is the ability to create spirit beings. It is the spirit that is conscious. “Eternity” just means “far longer than you have the ability to properly conceive of”. “Sin” means “addictive substances or behaviors”.
This sort of story is pretty decent sci-fi for the early 1800s.
Mormons fully expect spirit matter to show up in the correct theory of physics, whether it’s dark matter or supersymmetric particles, or whatever.
As a missionary, I encouraged people to pray and ask God if the Book of Mormon was true; many who did so had an experience that was so unusual that they took us very seriously after that. Those that didn’t couldn’t be held accountable for not believing us, since that kind of experience was up to God to provide.
I now think there are simpler explanations for most of what I once believed. It took me a long time to come to the conclusion that I was wrong: partly because of the “no conflict between science and religion” tenet, partly because I was raised as a Mormon in a very loving, functional family with particularly clever parents who were very good apologists, and partly because I’m not a very good rationalist yet.
Erm… I agree with you? I don’t think the term magisteria is an accurate description of what they believe:
“Magisteria” doesn’t do anything useful. People have been using the word to describe why theists think God is “above” empirical results.
Blarg. This is a semantic war. “Affect,” in this case, has nothing to do with perception. Don’t forget that these people are not working within the same framework. I am not trying to defend the framework, nor am I claiming it for myself. I am only trying to help explain something.
Yeah, okay, I am with you. I hope I wasn’t giving the impression that I am advocating separate magisteria. You don’t have to apologize for being brutal; I am confused that it seems directed at me.
Do you consider it a useful concept for describing a particular kind of stupidity (e.g. Aumann)? Or is it a useless concept even then?
I think it is a useless concept even then. It doesn’t make sense and doesn’t compute. By the time you translate whatever “stupidity” you are describing into “magisterium” you (a) know enough about the stupidity to speak to it on its own terms and (b) aren’t really talking about the stupidity; you are talking about magisterium which is a bastardization of two beliefs. How does that help?
As a specific example of Eliezer’s larger point, prayer is a natural attempt to influence the supernatural; so by that account, prayer must be futile.
Er, I am not defending the idea of one-way relationships between magisteria. The point was meant to highlight that magisteria is very much the wrong term.
As far as the one-way relationship, the term was not used to mean communication, causality, or anything else in particular.
The easiest example is a write-only folder on my computer. I can drop a file in that folder but do not have any direct measurement of its success or what happens to it after I drop it there. This relationship is “one-way” in the same way that my original statement was using “one-way.” Likewise, a read-only file can be opened and viewed but not modified. This is also “one-way” in the same manner that I meant “one-way” in the original statement.
Both of these examples are not one-way in the manner that magisteria would describe one-way.
And again, I am not trying to defend this view. I am merely trying to describe why magisteria is the wrong term.
Prayer would be an example of dropping a file into a write-only folder. We do something and assume that something happens to it later. We don’t have access to whatever happens because we don’t have read access.
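To make the analogy concrete, here is a toy sketch in Python (my own addition, purely illustrative; the class names are hypothetical and nothing here models theology, only the access pattern). The point is simply that the interface itself offers no way to observe outcomes in one direction.

```python
# A toy illustration of the "one-way" access pattern described above.
# All names are hypothetical; this models file permissions, not theology.

class WriteOnlyFolder:
    """You can drop something in, but the interface gives you no way
    to read it back or observe what happens to it afterwards."""

    def __init__(self):
        self._contents = []  # hidden state, inaccessible to callers

    def drop(self, item):
        self._contents.append(item)  # succeeds silently, returns nothing

    # Deliberately no read(), list(), or status() method.


class ReadOnlyFile:
    """You can view the contents, but you cannot modify them."""

    def __init__(self, text):
        self._text = text

    def read(self):
        return self._text

    # Deliberately no write() method.


folder = WriteOnlyFolder()
folder.drop("prayer")  # we did something...
# folder.read()        # ...but there is no method to check the result
```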
This statement wouldn’t make any sense in the cosmology of a typical theist. That cosmology may be completely wrong but using this statement to tell them that prayer is futile would make you sound like a complete nut. The discussion needs to start somewhere else.
Interactions between the magisteria are contradictions for you, not necessarily to a dualist who believes it all works out, somehow. (For example, somehow we know about the second magisterium, and knowledge of it has significance for our interaction with the first.)
Also complicating matters is that each religious person has their own location on a scale of self-consistency. I find that most religious people fall well short of self-consistent, but not as short as claiming the dragon doesn’t breathe just so the CO2 detector won’t be used.
My point in the comments above is that when religious people claim that there is no evidence or counter-evidence for God, it’s not as often a desperate measure to protect their belief, but simply that their belief in God is not meant to be about an empirical fact like a dragon would be.
Contradictions are contradictions. If, in general, the magisteria don’t interact, but in some specific case, they do interact, that’s a contradiction. It’s a model that doesn’t meet the axioms. That is a matter of logic. You can say “The dualist asserts that no interaction is taking place”, but you can’t say, “for the dualist, that is not a contradiction”.
I can.
In another comment you wrote,
I challenge myself to show you a concept of “separate magisteria” that is conceptually coherent and consistent. Of course it requires relaxation of the initial assumptions of empiricism. Should I proceed, or do you already grant the conclusion if I am going to relax these assumptions, to save me the trouble?
I read your comment again.
The set of religious people I’m talking about don’t deny things in this magisterium. They believe that things in this magisterium would never be in conflict with their beliefs about the second.
The person who is speaking second in this exchange would have assumed that the parent has already done all the pragmatic things they should. It is always the case that the health of a child is outside the parent’s hands to some extent.
When I used the words “active denial” I did so deliberately.
If you asked me about my son’s health, and I had cause to worry, I’d say something like: “We’re arranging the best care we can given our situation; we’re aware there’s a limit to how much we can know about what’s the matter with him, and a limit to how much we can control it.”
What a phrase like “God’s will” conveys is quite different. The meaning I get from it is that my efforts are futile: if it is part of God’s plan that my son should die, he will, no matter how much I arrange for the best care. If it is part of God’s plan that he should live, he will live, even if all I do is feed him herbs.
Now of course, in Bayesian terms, I have no usable priors about God’s plan. I can’t ever reason from the evidence—my son lives or dies—back to the hypothesis, since the hypothesis can explain everything. And if everybody admitted as much, it would be admissible to call this “a matter of faith, distinct from matters of evidence,” and to say that everyone is free to form whatever bizarre beliefs they like.
The big issue, the elephant in the drawing room, is that faith is not just a private matter. There are people who do claim that they have privileged information about God’s plan—that matters of faith, for them, are matters of evidence. And this privileged access to God’s plan gives them a right to pass judgment on matters of worldly policy, for instance the current Pope’s recent proclamations on the use of condoms to fight the AIDS epidemic.
How is that not denying things in this magisterium?
If the “separate magisteria” hypothesis were tenable, we would have no reason to see so many people hold correlated beliefs about the non-physical magisterium. Each person would form their own private faith, and let each other person do the same. (The humor of Pastafarianism resides precisely in the ironic way they take this for granted.)
Correlated beliefs can only mean that the magisteria are not separate. To be one of the faithful is to claim—even indirectly, by association—some knowledge that outsiders lack.
The real issue, when you think of it that way, isn’t faith. It’s power—political power.
Just curious—are you a moral realist?
The correlation of beliefs (discounting bible literalists, etc.) is mainly over value judgements rather than empirical facts. For example, if you disagree with the Pope, you probably disagree with his ethics rather than any scientific statements he is making.
Yikes! NYtimes
I have no idea. My meta-ethics are in flux as a result of my readings here.
I have described myself as a “Rawlsian,” if that will help. It seems to me that most of our intuitions about ethics are intuitions about how people’s claims against each other are to be settled when a conflict arises.
I believe that there are discoverable regularities in what agreements we can converge on, under a range of processes for convergence, humanity’s checkered history being one such process. What convinced me of this was Axelrod’s book on cooperation and other readings in game theory, plus Rawls. The veil of ignorance is a brilliant abstraction of the processes for coming to agreements.
I think the Pope is being an ass when he says that condoms would worsen the AIDS epidemic rather than mitigate it. I don’t know much about his personal ethics. I don’t pay much attention to Popes in general.
I most emphatically do not believe that the Pope has “every right to express his opposition to the use of condoms on moral grounds”. Perhaps he has a right to a private opinion on the matter.
But when he makes such a claim, given his influence as pontiff, it is a fact that large numbers of people will act in accordance, and will suffer needlessly as a result—either by contracting the disease or by remaining celibate for no good reason. They are not acting under their own judgement: if the Pope said it was OK to wear rubber, they would gladly wear rubber.
You’ve brought up many different points in this comment. Am I wrong to feel phalanxed?
I don’t see anything here that has to do with the original point I was making, except possibly an admission that you refuse to consider the group of theists I was talking about.
I guess the problem is that we’re all talking about different belief sets—me, you, MrHen—and without pinpointing which belief culture we’re talking about or knowledge of their relative incidence, this is fruitless.
Agree entirely (I said as much in response to MrHen’s “outing” post); so I wanted this to be about things I’d heard first-hand.
“Phalanxed” is a word I wasn’t familiar with, but I hear your connotation of “mustering many arguments”, as in military muster. I’ll cop to having felt angry as I was writing the above; that isn’t directed at you.
I went back to your original comment, the nub of which I take to be this: you tolerate as self-consistent the belief of some groups of theists, on the grounds that their beliefs have no empirical consequences, and that is precisely what marks these beliefs as “faith”.
The nub of what I wanted to say is: you’re reading the exchange I quoted generously. The way I heard it, it had a different meaning. My understanding is that these women weren’t using “God” as a synonym for “luck/uncertainty”. They were referring to the personal God who takes an active interest in people’s lives, which is what they’ve been taught in their churches. (This is just something I overheard while passing them in the street, so I don’t know which church. But I picked this example to illustrate how common it is, even for an atheist, to hear this kind of thing.)
I have looked up Calvinists, if that’s who you mean by “the group of theists” in question. Their doctrines, such as “unconditional election,” refer (as best I can understand these things) to a personal God too, as you would expect for a subgroup of Christians. They take the Bible to provide privileged access to God’s plan.
A personal God who takes an active interest in events in this world, as outlined in the Bible, does not meet your criteria for tolerance. It is a belief which has empirical consequences, such as the condemnation of homosexuality. It doesn’t matter how “benign” or “moderate” these inferences are; they show up “separate magisteria” as a pretence.
I was responding to the idea that theists are involved in blatant double-think where they anticipate ways that their beliefs can be empirically refuted and find preemptive defenses. The idea of “separate magisteria” may have been one such defense, but it is the last: once they identify God as non-empirical they don’t have to worry about CO2 detectors or flour or X-ray machines—ever. I think that the continued insistence on thinking of God as a creature hidden in the garage that should leave some kind of empirical trace reveals the inferential distance between a world view which requires that beliefs meet empirical standards and one that does not.
You suggest that the “separate magisteria” is a pretense. This appears to be along the lines of what Eliezer is arguing as well; that it is a convenient ‘get out of jail free’ card. I think this is an interesting hypothesis—I don’t object to it, since it makes some sense.
We base our views about religion on our personal experiences with it. I feel like I encounter people with views much more reasonable than the ones described here fairly often. I thought that ‘separate magisteria’ described their thinking pretty well, since religion doesn’t affect their pragmatic, day-to-day decisions. (Moral/ethical behavior is a big exception, of course.) I’ve encountered people who insist that prayers have the power to change events, but I don’t think this is a reasonable view.
I thought people mostly prayed to focus intentions and unload anxiety. Some data on what people actually believe would be extremely useful.
I polled some theist friends who happened to be online, asking “What do you think the useful effects of prayer are, on you, the subject on which you pray, or anything else?” and followup questions to get clarification/elaboration.
Episcopalian: “The most concrete effect of prayer is to help me calm down about stressful situations. The subject is generally not directly affected. It is my outlook that is most often changed. Psychologically, it helps me to let go of the cause of stress. It’s like a spiritual form of delegation.”
Irish Catholic With Jesuit Tendencies: “The Ignatian spirituality system has a lot to do with using prayer to focus and distance yourself from the emotions and petty concerns surrounding the problem. Through detached analysis, one can gain better perspective on the correct choice. … Well, ostensibly, it’s akin to meditation, and other forms of calming reflection possible in other religions. For me, though, I tend to pray, a) in times of crisis, b) in Mass, and c) for other people who I think can use whatever karmic juju my clicking of proverbial chicken lips can muster.” (On being asked whether the “karmic juju” affects the people prayed for:) “It’s one of those things. ‘I think I can, I think I can.’” (I said: “So it helps you help them?”) “I guess. Often, there’s little else you can do for folks.”
Mormon: “Well, I think it depends on the need of the person involved and the ability of that person to take care of him/herself. I have heard stories from people I trust where miraculous things have occurred as the result of prayer. But I find that, for me personally, prayer gives me comfort, courage, and sometimes, through prayer, my thoughts are oriented in ways that allow me to see a problem from an angle I couldn’t before and therefore solve it. Is it beneficial? Yes. Is it divine intervention? Hard to say. Even if it’s just someone feeling more positive as a result of a prayer, I think it’s a benefit—particularly for the sick. Positive attitudes seem to help a lot there.”
So it looks like byrnema is right!
(All quotes taken with permission)
Even better, a study. (Upshot: Praying for someone has a significant effect on the praying individual’s inclination to be selfless and forgiving toward that person.)
Thank you, Alicorn. It’s very helpful to have any data. Even data from people that are educated and comfortable with atheism, like myself, is better than atheists just speculating about what theists think.
This was a sample of friends of an atheistic philosopher who were answering a question by that same philosopher. The sample, unfortunately, tells me almost nothing about the general population.
Believing in a specific God (who, for example, promises to answer prayers) is an unreasonable view. Believing in the same God but also believing that prayers are unable to change events is even more unreasonable. It’s just more practical.
I’d like to clarify that I’ve not stuck my neck out that any beliefs are reasonable. I said I often encounter beliefs that seem much more reasonable.
Even more? Why?
Roughly speaking, let A = “a specific God who promises to answer prayers exists.” Believing in that God while also believing that prayers cannot change events amounts to believing A && !A:
p(A) = 0.0001
p(A && !A) = 0
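The general rule underneath this (standard probability theory, my gloss, not the commenter’s): a conjunction can never be more probable than either of its conjuncts, and a contradiction gets probability zero.

$$p(A \wedge B) \le p(A) \ \text{ for any } B, \qquad p(A \wedge \neg A) = 0$$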
I think he may have had trouble avoiding them, even if you don’t count the mirror...
I’m going to test the efficacy of “prayer”… online communication with other minds. I would like at least 30 karma for this comment. If I get this much karma, it will verify the efficacy of this kind of communication. If I get less than that, I guess it will show that this kind of “prayer” isn’t efficacious.
Of course, it could just show that people don’t always do what you want them to do, especially when you explicitly admit that you are testing them.
The previous comment is now at −3… Result of the experiment: negative. This kind of prayer doesn’t work.
Possible conclusions from this study:
The Less Wrong community does not exist (because it can’t answer prayers).
The Less Wrong community doesn’t always do what you want, especially when you admit you are testing it.
And most likely of all: not many people read many comments, and those who do, are of the second type.
The Less Wrong community never claimed to answer prayers.
Still, if people here knew how to cure cancer, most of them would.
Just as a completely non-religious example, my girlfriend is a medical social worker and has recently been working with a patient who is absolutely convinced he has cancer.
There have apparently been some cases of tests that came back clean, and his excuses after the fact, but it has also turned into him anticipating that future tests won’t find the cancer, while still being certain he has it. I had originally suggested that she ask him ahead of time something like, “Well, if we do this test and it comes back clean, will you let it go?” but he wasn’t even open to the question.
Like what? Just curious.
I asked… he apparently doesn’t say much. She’s theorizing Münchausen syndrome, which means he’s not necessarily conscious that he’s feigning illness.
Do you ever pray? Have you ever asked God, even in the silence of your heart, for something you truly wanted?
Have you ever looked at the experimental results for intercessory prayer? What do you expect them to show?
There ya go.
Really? That’s it?
Should I continue this conversation? Is there anything to be had or learned here? The internet is a tricky place and it could be my mood tonight, but I feel that this seems to be headed down a volatile path. I don’t mind talking about it but I will get tired of dealing with comments that aren’t making much of an effort to do some quick thinking on their own.
The implication in this is more that this is a stupid, dead-horse argument circle that everyone has been around a few times. The implication is not that you aren’t thinking or that you should be writing small essays on prayer. This is just a big rabbit hole, and I get the feeling that no one really wants to go there again.
Talking anyway...
If I asked any Christian I know if they prayed for a brand new car in their driveway tomorrow morning they would probably give an answer like, “Well, it could happen, but it’s not very likely.” I don’t see this as dodging experimental evidence before the test has happened. More than likely they already tried something like that when they were kids and it obviously failed.
My expectation of the experiments on intercessory prayer is that it would do about as well as a placebo. (I am guessing we are talking about prayer for health issues.) Perhaps a little better than a placebo, but I don’t know the positive effects of visitors on patients. For some reason, I assume that being stuck alone in a hospital is bad for the average person’s mood, which would probably affect their recovery. What I would find more telling is the effect of prayer without the patient hearing about it. In this case, I would expect no significant difference.
But I still ask God for stuff. It makes me feel better and it helps me think about the stuff as a problem to solve. It is a way for me to acknowledge that stuff. “God” could be replaced by a different term, and if I reject Theism it probably will be, but for now prayer has a use. I don’t see it the same way everyone sees it, but even in the case of a Traditional Christian, I don’t think it is a good example of someone making up excuses for an invisible dragon before the experiment happens.
As much as I believe in the power of people to delude and lie to themselves, this is a hard stretch for me. Again, I am not saying it’s impossible. I just haven’t seen it. It could be all around me, I suppose, but I still don’t see it.
I think that you are interpreting your religious fellows with too much charity. Some of them might be like you. Others won’t be, unless you’re hanging out with an exclusively Unitarian crowd.
If you really want to see a straight-up case, http://lesswrong.com/lw/1lf/open_thread_january_2010/1es2
Yeah, I see everyone with charity. I rarely run into truly stupid people. Most of them have stupid beliefs and stupid habits but they are intelligent enough to still be alive. These people are smart enough to understand the concepts at LessWrong. At least, smart enough to understand everything I have read so far. Maybe not the mathy particulars, but the high level concepts, sure.
The astrologer in your example is not stupid. He dodges because he already knows it will fail; he has seen it fail. I suspect he latches onto his belief for other reasons. I really don’t see how he could be following astrology and not see it fail from time to time. The appropriate question isn’t in the form of an experiment, because you don’t know what the believer expects.
I get that this is belief in belief. What I don’t get is how this matches the dragon example. He isn’t dodging an experiment that he hasn’t seen yet. He sees it all the time. Trying to verify the accuracy of astrology is something these people do every day. You can watch, in the same video, as people ponder whether a saying applies to them. Half of them reject it without feeling like their whole system crashed.
I am not defending astrology. I am not defending the beliefs in it. I am confused as to how this looks like the dragon example.
The closest dragon-esque example I have seen posted here was this:
I am starting to think I am misunderstanding the example.
I would like to hear which part of this post gets downvotes. I could see either part deserving them.
I didn’t think it deserved a downvote, so I upvoted, FWIW.
If you want to know which part people are voting on, split it into two posts.
I think I found an example of Belief in Belief that makes sense to me. The other day I met someone and they talked about how World War 3 would take place by next October. They didn’t really go into details, and I didn’t press for details, but the basic source of reasoning was the Book of Revelations.
If I had pressed for information I am sure all sorts of soft reasons would be produced about the current state of international affairs and blah, blah, blah. If I asked if America would be invaded and if there would be terrible destruction the answer would probably be, “Yes.” Yet I am positive that they are doing nothing to prepare for invasion and won’t be surprised in the slightest when October rolls around with no World War. World War 3 will still be happening “soon” and “soon” is still 18 months out. There won’t be any change in behavior, no worries about the botched prediction, no wondering if perhaps the Book of Revelations isn’t the best source material.
This is certainly a Belief in Belief. They really do think they believe it. If I asked them if they believed in an October ’11 WW3, they would say, “Yes.” But there is no conviction and no action as a result of this belief. So… do they really believe?
I think now I understand.
Late to the game, but I’m precisely in this boat.
I don’t have faith—if I did, I’d have no qualms whatsoever about facts and arguments presented by atheists. I wouldn’t be nervously claiming that the dragon is invisible. (Some people who think the apocalypse is nigh actually do stockpile canned food. That’s faith; they believe in Revelations the same way I believe in physics.) I don’t have faith, because I’m actually frightened that some archaeologist will find evidence that there wasn’t any Exodus, for instance. And the fear is really that changing my religious beliefs will make me a worse person. Less grateful? Less reverent? Less respectful? That’s the basic idea but I’m not sure if those words convey it.
To give a non-religious analogy, take the question of whether men have evolved to be irresponsible fathers. That’s an empirical question. But a man can be afraid of believing that he is, indeed, biologically designed to be an irresponsible father, because he fears that such a belief will make him actually treat his children poorly. A rational man, we’d hope, would decide “I’ll be a good father, whatever the evolutionary biologists say.” But he can only do that if he has some independent reason to be a good father, and if he’s aware he does.
A religious person wants to be a good person, and wants to have the right sort of attitude to the world. But all his reasons and motivations come from God. He could fear not believing in God because he fears not being good. Presumably, he has some other, non-God motivations for wanting to be good; but let’s say that he doesn’t know what they are. Then his fear might be justified. With no God and no principles, his behavior might actually change.
If I may extend your hypothetical frightened father metaphor: the man is worried that he is biologically designed to be an irresponsible father, but he is mistaken to worry that he will find out that he is biologically designed to be irresponsible. What he wants is to be responsible, not to think that he is responsible, so the mere fact of whether or not he knows some specific fact is not going to affect that.
Whatever the truth is, the hypothetical frightened father—and the very real frightened theists, such as yourself—already are living under whatever conditions actually hold. If the father is a responsible one, he already wins, whatever his biological predisposition was. If a theist is a good person, that theist already is a good person, whether God is real or not.
That is the first of two essential points. The second is this: if you would rather be good than not, then you are already on the right path, even if you can’t see where you are going. Others have walked this way before, and escaped into clear air.
The relevant question is whether the good person would remain good after they discover God is not real. My hunch is that most people who are good would stay that way.
But I like this point:
And I will take it with me.
It’s better than a hunch—it’s backed up experimentally.
I think it actually comes down to the same logical idea of The Bottom Line, the modus tollens: if the bottom line is formed through good processes, then it often remains strong even when the text above it is created through other means. You (or, I suppose, I) could write an essay on all the cases where it is what was written above the bottom line that was garbage.
I heard that here, ascribed to Eugene Gendlin. It is a valuable insight, I think.
Well, I’m kind of … done.
It occurred to me that nothing I actually revere could object to me responding to the evidence of my eyes and mind. I can’t help doing that. It can’t possibly be blameworthy.
I don’t feel that I’m losing anything right now. What I always took seriously was a sense of justice or truth. Not just mine, you understand, and maybe not a bunch of platonic forms out in the Eagle Nebula either, but something worth taking seriously. A little white light. That’s what I was afraid would go away. But I don’t think it will, now, and all the rest is just window dressing. Maybe I can even pay better attention to it without the window dressing.
I couldn’t believe I’d ever be happy like this, and maybe I’ll see my error soon enough… for so long this was something I promised myself I’d never do, a failure of will. But right now this seems … better. Actually better. Less phony. Truer to what I actually did revere all along.
*hugs*
Would you say that you were expending a lot of effort trying to believe things when it didn’t feel natural to believe them, and now you feel happy because that burden is lifted? Are there any other reasons for the happiness, and if so, what are they?
Yes! It’s that. It’s also that I’m starting to think it’s not so terrible; that I’m not a traitor to anything worth my loyalty.
Also. For a long, long time I felt that God had given up on me… that any deity would have long ago decided I was no good and put me in the reject file. God’s love was an unknown, but it seemed very, very unlikely. A more cheerful thought—but not, I think, a false one—is that there is no distance between Justice and the Judge. If I do right, there’s no additional question, “But is it good enough for God?” I’ve done right. If I learn from my mistakes and make restitution, there’s no additional “But will God forgive me?” If I’ve paid my debts, then I’ve paid my debts.
All I have to do is do justice, love mercy, and walk humbly with reality. Suddenly this seems feasible as it never did before.
Just curious: based on your phrasing I would guess that you’re Jewish, and possibly Orthodox (there is some precedent for that here). I pushed the big unsubscribe button in the sky two months ago myself and have gone through some of the same emotions.
Jewish, yes; Orthodox, no (and I always wondered about the consistency of that: if you’re already choosing not to be strictly observant, what can you then conclude?)
Yeah… I should know well enough by now that there are lots of atheists floating around, but it’s nice to have extra data points, especially if they’re friendly!
I know this is an old comment, but… Having gone through a similar process, I just want to give you a big warm hug.
Followed your recent post here and thought I’d add my support as well. I went through something very similar last Christmas (some of that story HERE) and it’s more or less ongoing. I really love how you’ve put things, especially these:
(MrHen)
(RobinZ)
I guess it’s backed up, for example, by Europe: a 2005 poll found that “18% do not believe there is any sort of spirit, god or life force,” and life in Europe is ticking along fine.
Well, my issue is that people act based on their beliefs. A father will do things for his children because he thinks he can, and thinks he should. If he reads an article in Psychology Today and doesn’t see the point any more, because baboon fathers don’t raise children, well, then, his behavior is likely to change.
The worst case scenario is believing incorrectly that it’s okay to do wrong. Believing incorrectly that it’s wrong to do something okay is not as bad; you’re mistaken, but you’re not destructive. The loss-averse strategy is to be very suspicious of claims that tell you “Relax, don’t worry, it’s all right to do X.”
A little knowledge is a dangerous thing. The solution to this dilemma is to learn more. It really isn’t so bad on the other side if you just keep walking and don’t look down.
I wouldn’t be here if I weren’t seriously open to changing my mind. I’ll give it some thought.
That’s why a lot of atheist organisations exist that promote ideas like “you can be good without God”. If people can get over the belief that morality flows only from God, there aren’t so many worries about people acting worse for not being religious.
It’s kind of silly, really. Socrates did a reductio on Divine Command Theory in the Euthyphro, and the Catholic Church has rejected it for related reasons for a long time now.
It may help to consider the question what would you do without morality? (also see the follow up: The Moral Void).
Seriously, if there were no morality, I would still have tastes, and they would still involve being fairly nice to people, but I’d generally put myself first and not worry about it. I’d sometimes be a freeloader or slacker, but not to such extremes that I could see how my actions hurt other people. I’d generally be a productive, sympathetic person, but not terribly heroic or altruistic. I would … not be much different from the way I am now. But without the guilt. Without the sense that that can’t possibly be enough.
For some of us, ‘ethics’ (here read as equivalent to ‘morality’) is an answer to the question “What should I do (or want)?”, which is equivalent to “What do I have most reason to do (or want)?”. If you care about the answers to questions like “Should I order a hamburger or a hot dog?” and “Should I drink this bottle of drain cleaner?” and “Should I put myself first?” then you care about ethics.
If I offer you a bottle of drain cleaner to drink and you refuse it, and I ask you, “What reason did you have for refusing it?” and you give me any answer, then you’re not an ethical nihilist; you think there is something to ethical questions.
Of course, some would not cast such a broad net in their definition of ‘ethics’, but I don’t tend to find such theories of ethics very useful.
I believe that for the most part, people make their religions in their own image.
You have declared B(X) and B(~X), as is often done (cf. P = P, Q = ~P),
yet you have not proven (or examined) that X is properly (and only) divisible into X and ~X for all cases of “X.” For practice: “this sentence is not true” is easily handled once one realises that it assumes the only possible values of the sentence are covered by X or ~X (i.e., that B(X) = TRUE or B(~X) = TRUE). When one realises that “a square circle” looks exactly like ‘a square circle’, and thus can be “real,” then one starts to understand the a priori assumptions one has created when treating the conditions tested by B(X) and B(~X) as proof tests.
:) Not believing in belief (or faith) is a belief.
Mr. Hen: As I understand your notation, B(B(X)) would mean “I believe that I believe X.” Lemmon’s fourth axiom for doxastic modal logic is B(X) implies B(B(X)). This is sometimes called the positive introspection axiom. I’m pretty sure it applies in any reasonable theory of “rationality.”
But this is apparently not what the post says that “Belief in belief” is. In this thread, “Belief in belief” seems to be something like “I ought to believe X, therefore I want to believe X, therefore I will myself to believe X, and I believe that I have succeeded, therefore I believe that I believe X (even though an objective observer can see that I don’t really believe X deep down)” This kind of belief in belief is irrational.
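To put the contrast compactly (my own notation, merely restating the two paragraphs above in standard doxastic-logic form):

$$B(X) \rightarrow B(B(X)) \qquad \text{(Lemmon's axiom 4: positive introspection)}$$

$$B(B(X)) \wedge \neg B(X) \qquad \text{(belief in belief: the converse, } B(B(X)) \rightarrow B(X)\text{, fails)}$$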
I believe that I believe that such a notable philosopher as Dennett has not completely messed up the meaning of the word “belief” in his zeal to deprecate the continued existence of non-atheists.
I also liked this essay very much. Many times it has occurred to me that “There are people who may claim to believe in Y, but deep down they know just as well as everyone else that Y is not true.”
I would add that our conversation with the hypothetical dragon claimant is not likely to get to the point where you are discussing bags of flour. Because by that point he will probably be extremely angry with you and one way or another the conversation will be concluded.
As the old joke goes, “Shut up, she explained.”
Actually I think it was Steve Sailer who pointed out that with any highly controversial issue, there is likely to be some truth floating around which people do not want to face. Arguably it’s the same concept.
I would also note that if your conversation with the dragon claimant took place on Less Wrong, he would probably just downvote you. ;)
Hi, I’m new to this blog. I haven’t read all the essays and whatnot, so if you read something I say that you think is bull, I’d like it if you linked me to the essay.
Anyway… Regarding the dragon and the garage scenario: what would the dude say if I were to ask, “How do you know the dragon is there if it’s impossible to know if he’s there?”
It’s an interesting question. Judging by Eliezer Yudkowsky’s story in Is That Your True Rejection?, they would be likely to say something that sounded good even if it’s not their real reason.
What would sound good?
“My ancestors thousands of years ago were aided by the provably omnibenevolent dragon, who then assured them he would forever live invisibly in my garage.”
Just off the cuff
The question would have an answer for some actual believer in belief—not for a hypothetical character in a thought exercise.
Keeping in mind that the thought exercise has limited isomorphism to belief in God. No one believes in an invisible dragon in their garage … because there isn’t any reason to think there is a dragon there. Theists have reasons to believe in God, atheists just don’t agree with those reasons.
Might belief in belief occasionally be valuable when overcoming bias? It would be better to correct my beliefs, but sometimes those beliefs come from bias. I might be convinced in my head that standing on the glass floor of an airplane and looking down is totally safe—this specially-modified-for-cool-views airplane has flown hundreds of flights—yet in my heart deeply believe that if I step onto it I will fall through. I might then choose to “believe in the belief that it is safe to take a step”, while all my instinctual reactions are based on a false model. The cognitive dissonance is due to my inability to integrate something so foreign to the evolutionary environment into my belief structure.
See The Mystery of the Haunted Rationalist.
Belief in disbelief:
— Niels Bohr
(Note: This is often retold with Bohr himself as the one with the horseshoe, but this quote appears to be the authentic one.)
I wonder how common that is, believing that you don’t believe something but acting in a way that implies more belief in it than you acknowledge. One other example I experienced recently: For whatever reason, my mom had a homeopathic cold remedy lying around. (I think a friend gave it to her.) She and I both had colds recently, so she suggested I try some of it. The thing is, she gives full assent to my explanations of why homeopathy is both experimentally falsified and physical nonsense; she even appeared to believe me when I looked at the ingredients and dilution factors and determined that the bottle essentially contained water, sugar, and purple food colouring. But even after that, she still said we may as well try it because it couldn’t hurt. True, it couldn’t hurt… but “it can’t hurt” doesn’t sound like really understanding that the bottle you’re holding consists of water, sugar, and purple.
Another instance may be former theists who still act in some ways as though they believe in God (an interesting mirror image of current theists who don’t act as though they really believe what they profess to believe), though in my experience many of them consider it to be bad habit they’re trying to break, so I’d be less inclined to call it belief in [dis]belief, I’d take that as something more akin to akrasia.
I once took cough drops that really helped with the sore throat from a cold I had, and actually tasted good too. It was only after a day or two that I looked at the packaging and realized they were homeopathic. I didn’t think too hard about it and kept taking them, because I wanted the placebo benefits and all the other brands of cough drop I own taste terrible.
The placebo effect is weakened but doesn’t disappear if you know it’s a placebo.
Citation needed :)
[This citation is a placebo. Pretend it’s a real citation.]
Here’s a study (honestly labeled placebo vs nothing) for irritable bowel syndrome.
I originally got it from a Science et Vie article on a study with four conditions (labeled as placebo vs as treatment; placebo vs treatment), can’t remember what for.
I remember this from earlier, see my response in that thread, and my links to Silberman and Lipson.
The study may well be measuring patients’ tendency to want to fulfill doctors’ expectations rather than any effect on the actual symptoms.
I agree this study is a bit silly. I’ll try to dig up the one I saw, but promise nothing.
Agree that the placebo effect may contain lying to doctors. There may also be some regression to the mean—people who are too healthy are excluded from the study, so when everyone moves at random the ones sick enough to be selected get healthier.
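For what it’s worth, here is a minimal simulation of that selection effect (my own sketch; all numbers invented): with no treatment at all, a group selected for being unusually sick at enrollment looks healthier at follow-up.

```python
# A minimal simulation of the regression-to-the-mean point above:
# nobody is treated, yet a group selected for being unusually sick at
# enrollment looks healthier at follow-up. All numbers are invented.
import random

random.seed(0)
N = 100_000
THRESHOLD = 1.5  # only people this far above average symptom level enroll

# Symptom score = stable component + transient noise, measured twice.
stable = [random.gauss(0, 1) for _ in range(N)]
before = [s + random.gauss(0, 1) for s in stable]
after = [s + random.gauss(0, 1) for s in stable]  # fresh noise, no treatment

enrolled = [i for i in range(N) if before[i] > THRESHOLD]
mean_before = sum(before[i] for i in enrolled) / len(enrolled)
mean_after = sum(after[i] for i in enrolled) / len(enrolled)

print(f"enrolled: {len(enrolled)}")
print(f"mean score at enrollment: {mean_before:.2f}")
print(f"mean score at follow-up:  {mean_after:.2f}")  # noticeably lower
```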
My understanding is that the studies establishing a placebo effect were controlled in a way that’d rule out regression to the mean as a cause of the perceived improvements. Lying to doctors does sound plausible, though.
What’s weird about this is that if this theory works, anything forms an acceptable substitute.
So you don’t need to buy any actual homeopathic “medication”, you can save lots of money by just eating some sugar. (The homeopathic markup on sugar is just unbelievable.)
Even sugar isn’t necessary, since you’re stipulating that “what works” isn’t any particular mechanism of action but just the action of treating yourself. You could as well choose to believe that taking a deep breath three times in succession is a good remedy against the cold (or whatever else ails you).
When I feel the first signs suggesting an incipient cold, I decide THIS IS NOT GOING TO HAPPEN, and it nearly always goes away.
So far, I’ve only been able to make this work for colds, not any other malady.
Must remember to try this.
Which incipient signs do you look for?
A roughness in the throat is usually the first thing I notice. Unchecked, it develops into a cough, sore throat, sneezing, and at the peak a couple of days of being completely unable to function.
This happened about once a year on average before I discovered I could banish them by willpower, since when it’s been more like once in five years, generally from extreme circumstances like being caught in the rain on a bike ride without adequate clothing.
Just to chuck in a little more anecdotal evidence, my husband applied this belief in the placebo effect, and so long as he can get an early night, he never suffers the little bugs and headaches.
It works in all instances where homeopathy has worked… ;)
The placebo effect rocks!
Tends to work pretty well on my own mental state, but very short term. Complicated (expensive?) impressive rituals help, though.
Wow. So, I’m basically brand new to this site. I’ve never taken a logic class and I’ve never read extensively on the subjects discussed here. So if I say something unbearably unsophisticated or naive, please direct me somewhere useful. But I do have a couple comments/questions about this post and some of the replies.
I don’t think it’s fair to completely discount prayer. When I was a young child, I asked my grandmother why I should bother praying, when God supposedly loved everyone the same and people praying for much more important things didn’t get what they wanted all the time.
She told me that the idea is not to pray for things to happen or not happen. If I pray for my basketball team to win our game (or for my son to get well, or to win the lottery, or whatever) then based on how I interpret the results of my prayer I would be holding God accountable for me getting or not getting what I wanted. The point of praying, as she explained it, was to develop a relationship with God so I would be able to handle whatever situation I found myself in with grace. Even though we often structure our prayers as requests for things to happen, the important thing to keep in mind was how Jesus prayed in the garden before he was crucified. Even though he was scared of what was going to happen to him and he didn’t want to go through with it, his prayer was “your will, not mine”. He didn’t pray for things to go his way, although he acknowledged in his prayers that he did have certain things that he wanted. The point of the prayer was not to avoid trials or fix their outcome, but to communicate with God for the strength and courage to hold fast to faith through trials.
Now, I’m certainly not citing my grandmother as a religious or theological expert. But that explanation made sense to me at the time, partially because I think you could probably argue that it would have the same benefit for people regardless of whether or not there was actually a God to correspond to the prayers, which jibes well with how I believe in God.
Maybe I’m misunderstanding the post, but I think I have something like believing that I ought to believe in God, although I’ve always phrased it as choosing to believe in God. Even though I was raised Catholic, I never felt like I really “believed” it. For as long as I can remember, the idea of “belief” has made me incredibly uncomfortable. Every time a TV show character asked “didn’t you ever just believe something” I would cringe and wonder how anyone could possibly find such an experience valid when anyone else could have an alternate experience.
Secretly, I’m glad that I’ve never felt any kind of religious conviction. If I did, then I would have to prize my subjective experience over someone else’s subjective experience. I’m quite aware that there are a multitude of people who have had very profound experiences that make them believe in one doctrine or another to the exclusion of all others, and that’s something I can’t really understand. Knowing that other people feel equal conviction about different ideas of God, given the same objective evidence, makes it impossible for me to have any sort of belief in a specific God or scripture, at least not with the kind of conviction that would keep me from being perfectly comfortable with the idea that I’m wrong.
That said, I consider myself Catholic. I don’t agree with all the doctrine and I don’t think I could honestly say I think my religion is correct and other religions are wrong in any way that corresponds to an objective reality. But I choose to believe in this religion because what I do really believe deep down is that there is some higher order that gives meaningfulness to human life.
I consider it to be rather like the way I love my family: I don’t objectively think that my family is the best family in the world, the particular subset of people most deserving of my love and affection. But they’re my family, and I’ll have no other. I can love them while still acknowledging that your love for your family is just as real as mine. Just because they’re different experiences doesn’t make them more or less valid, and just because it isn’t tangible or falsifiable doesn’t make it any less potent. Even so, I’m always curious whether I’m really an atheist, or maybe an agnostic, since I don’t really believe it beyond my conscious choice to believe it (and a bit of emotional attachment to my personal history with this specific religion).
Whew. That was a lot of words. Anyways, I’m sure that I’ve got plenty of logical and rational flaws and holes. Like I said, I’m basically brand new to all the ideas presented here, so I’m going to try and thrash my way through them and see what beliefs I still hold at the end.
Hey, welcome to Less Wrong! You might want to take a moment to introduce yourself at the welcome thread. Hope you find LW enjoyable and educational!
Hi there, nice to know I’m not the only one absolutely new and quaking in my slippers here.
I don’t think you’re quite making the mistake of believing in belief. I can’t model your brain accurately just by reading a few paragraphs, of course, but you don’t seem to show much flinching-away from admitting that the Judeo-Christian God and the Catholic interpretation of it are wrong. I think you’re more identifying the religion of your family and peers as your ‘group’ (tribe, nation, whatever wording you prefer) and shying away from dropping it as part of your identity, for the same reason a strong patriot would hate the feeling of betraying their country.
I remember reading a thing about this by… some famous secularist writer, Dawkins or Harris I think. About a million years ago, for all the good my memory is serving me on the matter. I’ll try and find it for you.
As for being attracted to a higher order of things, well… I agree with you. I just happen to think that higher order is quite physical in nature, hidden from us by the mundanity of its appearance. I think you might really want to read the sequences:
http://wiki.lesswrong.com/wiki/Reductionism_(sequence) and http://wiki.lesswrong.com/wiki/Joy_in_the_Merely_Real
Funny, it’s the second time this past week or so that I encounter a Lesswronger that identifies as Catholic.
(Welcome to LessWrong by the way!)
Surprised not to find Pascal’s Wager linked in this discussion, since he faced the same crisis of belief. It’s well known that he chose to believe because of the enormous (infinite?) rewards if that turned out to be right, so he was arguably hedging his bets.
It’s less well known that he understood it (coerced belief for expediency’s sake) to be something that would be obvious to an omniscient God, so it wasn’t enough to choose to believe; rather, he actually Had To believe. To this end he hoped that practice would make perfect, and I think he died worrying about it. This is described in the Wikipedia article in an evasive third person, but a philosophy podcast I heard attributed the dilemma of insincere belief to Pascal directly.
Fun stuff.
Very interesting. I have transhumanist beliefs that I claim to hold. My actions imply that I believe that I believe, if I understand this properly.
A prime example would be how I tend to my health. There are simple rational steps I can take to increase my odds of living long enough to hit pay dirt. I take okay care of myself, but could do better. Much better.
Cryonics may be another example. More research is required on my part, but a non-zero last stab is arguably better than nothing. I am not enrolled. It feels a bit like Pascal’s Wager to me. Perhaps it is a more valid form of the argument, though. Hoping for a scientific miracle seems essentially different than hoping for a magical miracle. Scientific miracles abound. Artificial hearts, cochlear implants, understanding our origins, providing succor to imbalanced minds, the list goes on. Magical miracles… not so much.
Heck, I could stop forgetting to floss daily! (There seem to be strong correlations between gum disease and heart disease.)
I anticipate as if there will be no radical life extension available within my lifetime, but I will argue for its possibility and even likelihood. Do I have this right as a type of belief in belief?
Pretty much. Though it might just be a case of urges not lining up with goals.
In both cases, you profess “I should floss every day” and do not actually floss every day. If it’s belief in belief, you might not even acknowledge the incongruence. If it’s merely akrasia, you almost certainly will.
It can be even simpler than that. You can sincerely desire to change such that you floss every day, and express that desire with your mouth, “I should floss every day,” and yet find yourself unable to establish the new habit in your routine. You know you should, and yet you have human failings that prevent you from achieving what you want. And yet, if you had a button that said “Edit my mind such that I am compelled to floss daily as part of my morning routine unless interrupted by a serious emergency, and not by mere inconvenience or forgetfulness,” you would be pushing that button.
On the other hand, I may or may not want to live forever, depending on how Fun Theory resolves. I am more interested in accruing maximum hedons over my lifespan. Living to 2000 eating gruel as an ascetic and accruing only 50 hedons in those 2000 years is not a gain for me over an Elvis Presley-style crash-and-burn in 50 years ending with 2000 hedons. The only way you can tempt me into immortality is a strong promise of a massive hedon payoff, with enough of an acceleration curve to pave the way with tangible returns at each tradeoff you’d have me make. I’m willing to eat healthier if you make the hedons accrue as I do it, rather than only incrementally after the fact. If living increasingly longer requires sacrificing increasingly many hedons, I’m going to have to estimate the integral of hedons per year over time to see how it pays out. And if I can’t see tangible returns on my efforts, I probably won’t be willing to put in the work. A local maximum feels satisfying if you can’t taste the curve to the higher local maximum, and I’m not all that interested in climbing down the hill while satisfied.
Give me a second-order derivative I can feel increasing quickly, though, and I will climb down that hill.
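To put that tradeoff in symbols (my own toy formalization, not anything from the post): let $h(t)$ be hedons accrued per year at age $t$ and $T$ the lifespan, so the quantity being maximized is

$$H(T) = \int_0^T h(t)\,dt.$$

The ascetic path has $T = 2000$ but $H = 50$; the crash-and-burn path has $T = 50$ but $H = 2000$. A longer life only wins if it raises the integral, not merely $T$, and “a second-order derivative I can feel” amounts to demanding visible evidence that $h(t)$ is accelerating before I trade away present hedons.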
That’s helpful input, thanks. After reading the link and searching the wiki I suspect that it is more likely an akrasia/urges v. goals sort of thing based upon my reaction to noticing the inconsistency. I felt a need to bring my actions in line with my professed beliefs.
This is fascinating. When I look back at the thought patterns of my younger self, I can see so much of this belief-in-belief. Despite being raised religious, I came to an agnostic conclusion at around age ten, and it terrified me, because I very much wanted to believe. To my mind, people with faith had a far greater sense of morality than those without, and I didn’t want to fall into that latter category.
So I proceeded as if I believed, and eventually came to make justifications along the lines of ‘ritual X accomplishes outcome Y’ where Y was something psychologically valuable, for example a sense of community. That made X a good idea even if I didn’t truly believe the theology involved.
When I was first told I had a second-order relationship to my belief, I was very insulted. It was as if I’d defined a good person as a religious one, and by challenging my belief that person was challenging my intrinsic worth (despite the fact that they were an atheist themselves and clearly thought nothing of the sort). Cognitive distortion at its finest.
It took a profound shift in my thinking about the role of religion in morality before I could accept that it was alright not to believe. The rest followed nicely.
As for Santa Claus? I pretended I believed (despite knowing the absolute impossibility of it being true) for three whole years. The idea that there was such a great conspiracy that every adult seemed to be complicit in really worried me, at six years old, and made me afraid to speak the truth. I knew they wanted me to believe, so I let them think I did.
If I ever have children of my own, needless to say, Santa will be introduced as an enjoyable fiction and nothing more.
It doesn’t seem to me that this post actually makes any coherent argument. It spends a fair amount of words using seemingly metaphysical terms without actually saying anything. But that’s not even the important thing.
Is this post supposed to increase my happiness or lifespan, or even that of someone else?
Well, for one, “beliefs in beliefs” are embodied in patterns of neurons in human brains: they’re a real phenomenon, not a “metaphysical” one, and they can influence people’s thoughts, words, and actions. Someone who donates money to their church every week, because they go to a church, because they “believe in God,” may not really believe in God, in the sense that they never let it determine any important decision. But the belief is still there, floating around interacting with the rest of their value system, combining with social pressure, pulling their personal opinions over towards the beliefs endorsed by that church, and of course costing them $x every week, which, based on how churches usually spend money, is probably mostly spent on installing the belief in belief in God into other people’s heads. On an individual level, it’s hard to evaluate whether that person is more or less happy or will live longer, but on a societal level, there are definite effects.
If you think of belief as something like “representing the world as being a certain way,” then a belief in Real Beliefs might have followed from profound ignorance of neuroscience. But there are plenty of other ways of getting there. For instance, if one thinks of belief as explaining actions, then the Real Belief is the one attested by action. Someone who does not give to charity does not Really Believe in charity, even if they say they do. Actions either occur or do not: one cannot perform contradictory actions, so one cannot entertain contradictory Real Beliefs. For reasons that have nothing to do with the number of neurons in the brain.
Yet again, a Philosophers are Idiots claim turns out to be poorly founded.
This post taught me a lot, but now “There is no invisible dragon in my garage” will be popping into my head whenever I see a garage.
I noticed that I was confused by your dragon analogy. 1) Why did this guy believe in this dragon when there was absolutely no evidence that it exists? 2) Why do I find the analogy so satisfying when its premise is so absurd?
Observation 1) Religious people have evidence:
The thing about religion is that a given religion’s effects on people tend to be predictable. When Christians tell you to accept Jesus into your heart, some of the less effective missionaries talk about heaven, but the better ones talk about positive changes to their emotional states. Often, they will imply that those positive life changes will happen for you if you join, and as a prediction that tends to be a very good one.
As a rationalist, I know the emotional benefits of paying attention when something nice happens, and I recognize that feeling gratitude boosts my altruism. I know I can get high on hypoxia if I ever want to see visions or speak in tongues. I know that spending at least an hour every week building ethical responses into my cached behavior is a good practice for keeping positive people in my life. I recognize the historical edifice of morality that allowed us to build the society we currently live in. This whole suite of tools is built into religion, and the means of achieving the benefits it provides is non-obvious enough that a mystical explanation makes sense. Questioning those beliefs without that additional knowledge means you lose access to the benefits of the beliefs.
Observation 2) We expect people to discard falsifiable parts of their beliefs without discarding all of that belief.
The dragon analogy is nice and uncomplicated. There are no benefits to believing in the dragon, so the person in the analogy can make no predictions with it. I’ve never seen that happen in the real world. Usually religious people have tested their beliefs, and found that the predictions they’ve made come true. The fact that those beliefs can’t predict things in certain areas doesn’t change the fact that they do work in others, and most people don’t expect generality from their beliefs. When that guy says that the dragon is permeable to flour, that isn’t him making an excuse for the lack of a dragon. That’s him indicating a section of reality where he doesn’t use the dragon to inform his decisions. Religious people don’t apply their belief in their dragon in categories where believing has not provided them with positive results. Disproved hypotheses don’t disprove the belief, but rather disprove the belief for that category of experience. And that’s pretty normal. The fact that I don’t know everything, and the fact that I can be right about some things and wrong about others means that I pretty much have to be categorizing my knowledge.
Thinking about this article has led me to the conclusion that “belief in belief” is more accurately visualized as compartmentalization of belief, that it’s common to everyone, and that it indicates a belief of mine is providing the right answer for the wrong reasons. I predict that if I train myself to say out loud “this belief is not fully general” whenever I notice myself expecting the world to behave strangely in order to avoid violating a hypothesis, I will find that more often than not the statement is correct.
“Belief in belief” exists as a phenomenon but is neither necessary nor sufficient to explain the claims of the Dragonist (if I may name his espoused metaphysics thus) in Sagan’s parable.
My most recent encounter with someone who believed in belief was someone who did not in addition believe. He had believed once, but he lost his faith (in this case, in God, not dragons) and he wished he could have it back. He believed in belief—that it was a good thing—but alas, he did not believe.
In the above article, Eliezer (if I may so call him) was invoking the concept of belief in belief to explain something—that is, it was a hypothesis of a sort. The phenomenon in question was this Dragonist who claimed to believe but gave some evidence that he did not, in that he rejected the most obvious consequences of a dragon being in the garage. Our hypothesis was that he didn’t really believe but thought he should, and was, in effect, trying to convince himself and others that it was so, but (in the case of himself) not so overtly that he’d have to admit to himself he wasn’t how he hoped he’d be. If our hypothesis were true, what would we anticipate? If we confronted this guy, that he’d break down and admit his lack of belief? Someone whose belief system runs to invisible dragons is too crazy to let that happen so easily. Maybe what we anticipate is that, given sufficient anti-psychotic meds and associated treatments and time, he would recant? What if he didn’t? Would we so believe in our hypothesis that we would have faith that, given infinite time (say, the amount of time necessary to search all the integers until we identified the last twin prime or the first perfect number that didn’t end in 6 or 8), he would recant in principle? Worse still, maybe he would recant to get us off his back but continue to believe in secret.
In short, since our Dragonist’s subjective mental state is invisible to us, even were we to sprinkle flour over his head, we are ultimately forced to rely on faith that belief in belief is what is behind this phenomenon.
If his mental state is invisible to us, that means we can’t prove what his mental state is, but it should still be possible to have evidence for his mental state and to know it to some degree of certainty that isn’t 100%. Which is no different from what science does to “prove” anything else.
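Concretely, that’s just ordinary Bayesian updating (a schematic, with hypothetical numbers):

$$P(\text{belief in belief} \mid \text{advance excuses}) = \frac{P(\text{advance excuses} \mid \text{belief in belief}) \cdot P(\text{belief in belief})}{P(\text{advance excuses})}.$$

If, say, advance excuses are ten times likelier from someone who merely believes in belief than from someone who sincerely anticipates a dragon, each independent excuse multiplies the odds in favor of belief in belief by ten, and we can become quite confident about an “invisible” mental state without ever reaching 100%.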
It would be difficult to say what this evidence would be. As one who has spent some time with people who would generally be called deluded, I can assure you that finding an understandable explanation for their delusions is non-trivial.
My main takeaway: there is a difference between the conscious and the subconscious. If you accuse somebody with “You do not believe X,” you will get denial, because he consciously believes it. The problem is that he subconsciously does not believe it, and thus comes up with excuses in advance.
I’m not disagreeing with any of the content above, but a note about terminology:
LessWrong keeps using the word “rationalism” to mean something like “reason” or possibly even “scientific methodology”. In philosophy, however, “rationalism” is not allied to “empiricism”, but diametrically opposed to it. What we call science was a gradual development, over a few centuries, of methodologies that harnessed the powers both of rationalism and empiricism, which had previously been thought to be incompatible.
But if you talk to a modernist or post-modernist today, when they use the term “rational”, they mean old-school Greek, Platonic-Aristotelian rationalism. They, like us, think so much in this old Greek way that they may use the term “reason” when they mean “Aristotelian logic”. All post-modernism is based on the assumption that scientific methodology is essentially the combination of Platonic essences, Aristotelian physics, and Aristotelian logic, which is rationalism. They are completely ignorant of what science is and how it works. But this is partly our fault, because they hear us talking about science and using the term “rationality” as if science were rationalism!
(Inb4 somebody says Plato was a rationalist and Aristotle was an empiricist: Really, really not. Aristotle couldn’t measure things, and very likely couldn’t do arithmetic. In any case the most important Aristotelian writings to post-modernists are the Physics, which aren’t empirical in the slightest. No time to go into it here, though.)
Went back to re-read some Lacan and Zizek after this, with regards to Dennett’s ‘belief in belief.’ Very similar to the ‘displaced belief’ they talk about. The common example they give is Santa Claus: children probably don’t believe it but they say they do for the presents, because they understand that the adults expect them to believe, etc. The parents don’t believe it but they continue the ruse for the benefit of the children, other people’s children, or whatever they tell themselves. Thus people often *do* admit to themselves that they don’t believe but they say “but nonetheless other people believe.” They displace the belief onto someone else, and they continue going through the motions—and the ‘belief’ functions anyway. Even if nobody actually believes, they believe by proxy by trusting the apparent belief of those around them. Emperor’s New Clothes comes to mind also.
“No, this invisibility business is a symptom of something much worse.”
Indeed. If only it *were* as simple as all that… There often is some fundamental Thing preventing people from recognizing the truth and then acting in accordance with it, though. Often their entire worldview would be shattered, and they just Can’t Have That—it is ideological, in other words. Others know something is charlatanism, but they are the charlatan benefiting, so they’ll keep making up reasons for why there really is a dragon in their garage (maybe they’re selling magical dragon breath for $100/jar). Others use false beliefs merely as a way to signal propaganda and attract followers—they know it’s fakery, but they don’t care about debating in good faith to begin with.
Anyway I’m slowly making my way through these after a re-read of your HP fanfic. Just wanted to say that even if “what do I know and how do I know it” is the only thing my brain can hold on to it’s already been well worth it (although I did pound Bayes’ Theorem in there, too). Thanks!
I agree with gfarb that the claimant does not necessarily believe in a belief; taking the parable literally, it is far more likely that the claimant suffers from a delusional disorder and actually genuinely believes that there is a dragon in his garage. As to why he anticipates that no one else will be able to detect the dragon, that would most likely be explained by his past experiences, where other people denied his experiences and failed to find evidence for them.
The recent conception of the hostile telepaths problem goes a long way towards explaining why people believe in belief in the first place.