Making Beliefs Pay Rent (in Anticipated Experiences)
Thus begins the ancient parable:
If a tree falls in a forest and no one hears it, does it make a sound? One says, “Yes it does, for it makes vibrations in the air.” Another says, “No it does not, for there is no auditory processing in any brain.”
If there’s a foundational skill in the martial art of rationality, a mental stance on which all other technique rests, it might be this one: the ability to spot, inside your own head, psychological signs that you have a mental map of something, and signs that you don’t.
Suppose that, after a tree falls, the two arguers walk into the forest together. Will one expect to see the tree fallen to the right, and the other expect to see the tree fallen to the left? Suppose that before the tree falls, the two leave a sound recorder next to the tree. Would one, playing back the recorder, expect to hear something different from the other? Suppose they attach an electroencephalograph to any brain in the world; would one expect to see a different trace than the other?
Though the two argue, one saying “No,” and the other saying “Yes,” they do not anticipate any different experiences. The two think they have different models of the world, but they have no difference with respect to what they expect will happen to them; their maps of the world do not diverge in any sensory detail.
It’s tempting to try to eliminate this mistake class by insisting that the only legitimate kind of belief is an anticipation of sensory experience. But the world does, in fact, contain much that is not sensed directly. We don’t see the atoms underlying the brick, but the atoms are in fact there. There is a floor beneath your feet, but you don’t experience the floor directly; you see the light reflected from the floor, or rather, you see what your retina and visual cortex have processed of that light. To infer the floor from seeing the floor is to step back into the unseen causes of experience. It may seem like a very short and direct step, but it is still a step.
You stand on top of a tall building, next to a grandfather clock with an hour, minute, and ticking second hand. In your hand is a bowling ball, and you drop it off the roof. On which tick of the clock will you hear the crash of the bowling ball hitting the ground?
To answer precisely, you must use beliefs like Earth’s gravity is 9.8 meters per second per second, and This building is around 120 meters tall. These beliefs are not wordless anticipations of a sensory experience; they are verbal-ish, propositional. It probably does not exaggerate much to describe these two beliefs as sentences made out of words. But these two beliefs have an inferential consequence that is a direct sensory anticipation—if the clock’s second hand is on the 12 numeral when you drop the ball, you anticipate seeing it on the 1 numeral when you hear the crash five seconds later. To anticipate sensory experiences as precisely as possible, we must process beliefs that are not anticipations of sensory experience.
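The five-second figure follows from elementary kinematics. As a sketch (ignoring air resistance, which would slow a real bowling ball slightly), the fall time from rest is t = sqrt(2h/g):

```python
import math

def fall_time(height_m: float, g: float = 9.8) -> float:
    """Time for an object dropped from rest to fall height_m meters,
    ignoring air resistance: h = (1/2) * g * t**2, so t = sqrt(2h/g)."""
    return math.sqrt(2 * height_m / g)

# The building is "around 120 meters tall"; gravity is 9.8 m/s^2.
t = fall_time(120)
print(round(t, 2))  # ~4.95 seconds, so the second hand advances
                    # from the 12 numeral to the 1 numeral
```

The two propositional beliefs (the building's height, the strength of gravity) enter as inputs, and the output is the anticipated sensory experience: roughly five ticks of the clock.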
It is a great strength of Homo sapiens that we can, better than any other species in the world, learn to model the unseen. It is also one of our great weak points. Humans often believe in things that are not only unseen but unreal.
The same brain that builds a network of inferred causes behind sensory experience can also build a network of causes that is not connected to sensory experience, or poorly connected. Alchemists believed that phlogiston caused fire—we could simplistically model their minds by drawing a little node labeled “Phlogiston,” and an arrow from this node to their sensory experience of a crackling campfire—but this belief yielded no advance predictions; the link from phlogiston to experience was always configured after the experience, rather than constraining the experience in advance.
Or suppose your English professor teaches you that the famous writer Wulky Wilkinsen is actually a “retropositional author,” which you can tell because his books exhibit “alienated resublimation.” And perhaps your professor knows all this because their professor told them; but all they’re able to say about resublimation is that it’s characteristic of retropositional thought, and of retropositionality that it’s marked by alienated resublimation. What does this mean you should expect from Wulky Wilkinsen’s books?
Nothing. The belief, if you can call it that, doesn’t connect to sensory experience at all. But you had better remember the propositional assertions that “Wulky Wilkinsen” has the “retropositionality” attribute and also the “alienated resublimation” attribute, so you can regurgitate them on the upcoming quiz. The two beliefs are connected to each other, though still not connected to any anticipated experience.
We can build up whole networks of beliefs that are connected only to each other—call these “floating” beliefs. It is a uniquely human flaw among animal species, a perversion of Homo sapiens’s ability to build more general and flexible belief networks.
The rationalist virtue of empiricism consists of constantly asking which experiences our beliefs predict—or better yet, prohibit. Do you believe that phlogiston is the cause of fire? Then what do you expect to see happen, because of that? Do you believe that Wulky Wilkinsen is a retropositional author? Then what do you expect to see because of that? No, not “alienated resublimation”; what experience will happen to you? Do you believe that if a tree falls in the forest, and no one hears it, it still makes a sound? Then what experience must therefore befall you?
It is even better to ask: what experience must not happen to you? Do you believe that élan vital explains the mysterious aliveness of living beings? Then what does this belief not allow to happen—what would definitely falsify this belief? A null answer means that your belief does not constrain experience; it permits anything to happen to you. It floats.
When you argue a seemingly factual question, always keep in mind which difference of anticipation you are arguing about. If you can’t find the difference of anticipation, you’re probably arguing about labels in your belief network—or even worse, floating beliefs, barnacles on your network. If you don’t know what experiences are implied by Wulky Wilkinsen’s writing being retropositional, you can go on arguing forever.
Above all, don’t ask what to believe—ask what to anticipate. Every question of belief should flow from a question of anticipation, and that question of anticipation should be the center of the inquiry. Every guess of belief should begin by flowing to a specific guess of anticipation, and should continue to pay rent in future anticipations. If a belief turns deadbeat, evict it.
- 24 Aug 2011 2:43 UTC; 0 points) 's comment on A Sketch of an Anti-Realist Metaethics by (
- 1 May 2018 18:21 UTC; 0 points) 's comment on Origin of Morality by (
- 22 Feb 2012 20:06 UTC; 0 points) 's comment on I believe it’s doublethink by (
- 17 Jan 2011 0:24 UTC; 0 points) 's comment on Welcome to Less Wrong! by (
- 8 Mar 2012 22:54 UTC; 0 points) 's comment on How to Fix Science by (
- 29 Jun 2013 13:55 UTC; 0 points) 's comment on Have no heroes, and no villains by (
- The Reality of Emergence by 4 Oct 2017 8:11 UTC; 0 points) (
- 29 Apr 2011 3:50 UTC; 0 points) 's comment on Being Wrong about Your Own Subjective Experience by (
- 8 Dec 2010 2:49 UTC; 0 points) 's comment on Suspended Animation Inc. accused of incompetence by (
- 4 Feb 2011 8:36 UTC; 0 points) 's comment on Is Atheism a failure to distinguish Near and Far? by (
- 18 Oct 2011 13:35 UTC; 0 points) 's comment on 0 And 1 Are Not Probabilities by (
- 9 Jul 2009 3:02 UTC; 0 points) 's comment on Rationality Quotes—July 2009 by (
- 28 Oct 2013 16:53 UTC; 0 points) 's comment on What Can We Learn About Human Psychology from Christian Apologetics? by (
- 15 Apr 2009 18:49 UTC; 0 points) 's comment on It’s okay to be (at least a little) irrational by (
- 30 Oct 2010 7:42 UTC; 0 points) 's comment on What is the Archimedean point of morality? by (
- 30 Dec 2014 9:46 UTC; 0 points) 's comment on Open thread, Dec. 22 - Dec. 28, 2014 by (
- 20 Jan 2013 3:43 UTC; 0 points) 's comment on Welcome to Less Wrong! (July 2012) by (
- 10 Jun 2010 4:03 UTC; 0 points) 's comment on UDT agents as deontologists by (
- 22 Apr 2015 6:11 UTC; 0 points) 's comment on Happiness and Goodness as Universal Terminal Virtues by (
- 15 Dec 2013 7:29 UTC; 0 points) 's comment on ‘Effective Altruism’ as utilitarian equivocation. by (
- 11 Oct 2007 4:45 UTC; 0 points) 's comment on A Priori by (
- 30 Mar 2012 5:06 UTC; 0 points) 's comment on New front page by (
- Rethinking Education by 15 Feb 2014 5:22 UTC; -1 points) (
- 24 Dec 2012 22:18 UTC; -2 points) 's comment on New censorship: against hypothetical violence against identifiable people by (
- 8 Mar 2011 20:12 UTC; -2 points) 's comment on How best to show dying is bad by (
- A brief guide to not getting downvoted by 30 Oct 2010 2:32 UTC; -3 points) (
- 22 May 2011 17:24 UTC; -5 points) 's comment on Newcomb’s Problem and Regret of Rationality by (
- 17 Aug 2010 7:19 UTC; -7 points) 's comment on Desirable Dispositions and Rational Actions by (
Great post. As always.
I assume that most of math is being ignored for simplicity’s sake?
I think his point isn't so much that what you're saying WILL have a practical impact on your sensory experiences, just that it has the potential to do so: what you would "expect" to experience as a result. In real life we can't weld a pair of trillion-pound bars of gold to each other and then see how much they weigh, but because of mathematics, we know that if we were to place them on an accurate scale we would see a weight of two trillion pounds.
What good is math if people don’t know what to connect it to?
All math pays rent.
For all mathematical theorems can be restated in the form:
If the axioms A, B, and C and the conditions X, Y and Z are satisfied, then the statement Q is also true.
Therefore, in any situation where the statements A, B, C and X, Y, Z are true, you will expect Q to be verified as well.
In other words, mathematical statements automatically pay rent in terms of changing what you expect. (Which is) the very thing it was required to show. ■
In practice:
If you demonstrate Pythagoras’s Theorem, and you calculate that 3^2+4^2=5^2, you will expect a certain method of getting right angles to work.
If you exhibit the aperiodic Penrose Tiling, you will expect Quasicrystals to exist.
If you demonstrate the impossibility of solving to the Halting Problem, you will not expect even a hypothetical hyperintelligence to be able to solve it.
If you understand why you can’t trisect an angle with an unmarked ruler and a compass (not both used at the same time), you will know immediately that certain proofs are going to be wrong.
and so on and so forth.
Yes, we might not immediately know where a given mathematical fact will come in handy when observing the world, but by their nature, mathematical facts tell us exactly when to expect them.
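The Pythagorean example above can be checked mechanically. A minimal sketch (variable names are mine) confirming both the arithmetic fact and the experience it licenses you to anticipate:

```python
import math

# The arithmetic fact: 3^2 + 4^2 = 5^2.
a, b, c = 3, 4, 5
assert a**2 + b**2 == c**2

# The anticipated experience: a triangle with sides 3, 4, 5 has a
# right angle opposite its longest side (by the law of cosines),
# which is why the 3-4-5 rope trick gives builders square corners.
angle = math.degrees(math.acos((a**2 + b**2 - c**2) / (2 * a * b)))
print(round(angle, 3))  # 90.0
```

The theorem constrains anticipation exactly as VKS describes: given the side lengths, you expect the measured corner to come out square.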
Is this to say that one of the purposes of mathematics is to prove something new, even without knowing what it might be used for, with the awareness that it might be useful at a later point? Or that it might form part of a proof for something else that is also currently unknown?
Yes, there are numerous cases where mathematicians in "pure" mathematics proved interesting theorems simply because of their challenging and elegant nature, and those theorems were later found to be practically useful, at which point they get called "applied" mathematics. Frankly, the distinction is blurred: pure mathematics is so useful (see Eugene Wigner's "The Unreasonable Effectiveness of Mathematics in the Natural Sciences") that its abstract nature gives it huge extensibility and general application across domains. For instance, Einstein's GR was based on the pure mathematics of Riemannian manifolds, an abstract geometric structure not initially tied to reality in any way. Or consider how algebraic topology is used for data mining, number theory for cryptography, linear algebra for machine learning, group theory for particle physics... and even how Bayesian probability theory is used for LW rationality.
Stephen Wolfram has great resources on rulial spaces and the nature of computation for the universe’s fundamental ontology (the territory not the map) in which these networks of theorems can correspond to our empirical reality. (psst I am a very new LW user, and I am deciding if I should do a Sequence for this idea of “rulial cover” which is how rulial deduction can be applied to Solomonoff induction and Bayesian abduction, would be great if someone thinks this is interesting to explore so I can be motivated)
To link back to Eliezer’s post, “floating beliefs” in a Bayesian net can be connected through adjusting the “weights” of the edges that connects that belief using Bayesian inference, and mathematics make these robust inferences from axioms (deductively validity as 100% in weight and 0% in prior). Therefore, anticipation becomes certain under a set of idealized axioms.
‘I am deciding if I should do a Sequence for this idea of “rulial cover” which is how rulial deduction can be applied to Solomonoff induction and Bayesian abduction’
I don’t really know what you mean, but if it’s something unseen you can expect it to be useful!
Is it not the purpose of math to tell us “how” to connect things? At the bottom, there are some axioms that we accept as basis of the model, and using another formal model we can infer what to expect from anything whose behavior matches our axioms.
Math makes it very hard to reason about models incorrectly. That’s why it’s good. Even parts of math that seem particularly outlandish and disconnected just build a higher-level framework on top of more basic concepts that have been successfully utilized over and over again.
That gives us a solid framework on which we can base our reasoning about abstract ideas. Just a few decades ago most people believed the theory of probability was just a useless mathematical game, disconnected from any empirical reality. Now people like you and me use it every day to quantify uncertainty and make better decisions. The connections are not always obvious.
http://abstrusegoose.com/504 :-)
That's exactly how I felt in high school. I'm glad I changed that, because algebra wouldn't be useful to me now if I'd never learned it. The first part of the class is hard to use and discouraging to new students.
Is pure math a set of beliefs that should be evicted?
No, for reasons expressed above by VKS.
Note the word “pure”. By definition, pure maths doesn’t pay off in experience. If it did, it would be applied.
IMO the distinction between pure and applied math is artificial, or at least contingent; today’s pure math may be tomorrow’s applied math. This point was made in VKS’s comment referenced above:
The question is whether anyone should believe pure maths now. If you are allowed to believe things that might possibly pay off, then the criterion excludes nothing.
Metabeliefs! Math concepts that seemed useless have, in the past, become useful. Therefore, the belief that "believing in math concepts pays rent in experience" itself pays rent in experience, so you should believe it.
If you believe in applied math, what are the grounds for excluding “pure” math? Most of the time “pure” just means that the mathematician makes no explicit reference to real-world applications and that the theorems are formulated in an abstract setting. Abstraction usually just boils down to figuring out exactly which hypotheses are necessary to get the conclusion you want and then dispensing with the rest.
Let’s take the theory of probability as an example. There’s nothing in the general theory that contradicts everyday, real-world probability applications. Most of the time the general theory does little other than make precise our intuitive notions and avoid the paradoxes that plague a naive approach. This is an artifact of our insistence on logic. A thorough, logical examination of just about any piece of mathematics will quickly lead to the domain “pure” math.
I am not making the statement “exclude pure math”, I am posing the question “if pure math stays, what else stays?”
Maybe post-utopianism is an abstract idealisation that makes certain concepts precise.
There are beliefs that directly pay rent, and then there are beliefs that are logical consequences of rent-paying beliefs. The same basic principles that give you applied math will also lead to pure math. We can justify spending effort on pure math on the grounds that it may pay off in the future. However, our belief in pure math is tied to our belief in logic.
If you asked whether this can be applied to something like astrology, I’d ask whether astrology was a logical consequence of beliefs that do pay rent.
Unlike scientific knowledge or other beliefs about the material world, a mathematical fact (e.g. that z follows from X1, X2,..., Xn), once proven, is beyond dispute; there is no chance that such a fact will be contradicted by future observations. One is allowed to believe mathematical facts (once proven) because they are indisputably true; that these facts pay rent is supported by VKS’s argument.
Truths of pure maths don't pay rent in terms of expected experience. EY has put forward a criterion of truth (correspondence) and a criterion of believability (expected experience), and pure maths fits neither. He didn't want that to happen, and the problem remains, here and elsewhere, of how to include abstract maths while still excluding the things you don't like. This is old ground, which the logical positivists went over in the mid 20th century.
I think I see where you are going with this.
My initial interpretation of EY’s original post is that he was explicating a scientific standard of belief that would make sense in many situations, including in reasoning about the physical world (EY’s initial examples were physical phenomena—trees falling, bowling balls dropping, phlogiston, etc.). I did not really think he was proposing the only standard of belief. This is why I was baffled by your insistence that unless a mathematical fact had made successful predictions about physical, observable phenomena, it should be evicted.
However, later in the original post EY used an example out of literary criticism, and here he appears to be applying the standard to mathematics. So, you may be on to something—perhaps EY did intend the standard to be universally applied.
It seems to me that applying EY's standard too broadly is tantamount to scientism (which I suspect is more or less the point you were making).
Here is a truth of pure mathematics: every positive integer can be expressed as a sum of four squares.
Expected experiences: there will be proofs of this theorem, proofs that I can follow through myself to check their correctness.
Et voilà!
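That four-square claim is easy to spot-check by brute force. A minimal sketch (the helper name is mine), verifying Lagrange's theorem for the first couple hundred positive integers:

```python
from itertools import product

def four_square(n):
    """Find a, b, c, d with a^2 + b^2 + c^2 + d^2 == n, by brute force."""
    limit = int(n**0.5) + 1
    for a, b, c, d in product(range(limit), repeat=4):
        if a*a + b*b + c*c + d*d == n:
            return a, b, c, d
    return None  # Lagrange's theorem predicts this never happens

# The anticipated experience: a decomposition exists for every n checked.
for n in range(1, 200):
    assert four_square(n) is not None
print(four_square(7))  # (1, 1, 1, 2)
```

Of course, a finite check is weaker than the proof, which is the commenter's point: the proof licenses the anticipation for every positive integer at once.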
Truth of astrology: Mars in conjunction with Jupiter is dangerous for Leos.
Expected experience: there will be astrology articles saying Leos are in danger when Mars is in conjunction with Jupiter.
Of course astrological claims pay rent. The problem with astrology is not that it’s meaningless but that it’s false, and the problem with astrologers is that they don’t pay the epistemological rent.
Also, a proof is a different thing from a mathematician saying so. The rent that is being paid there is not merely that the theorem will be asserted but that there will be a proof.
Try telling Eliezer
The original post does not mention astrology. If you want to spy out some place where Eliezer has said that astrological claims are meaningless, go right ahead. I am not particularly concerned with whether he has or not.
Here and now, you are talking to me, and as I pointed out, the belief can pay rent, but astrologers are not making it do so. Those who have seriously looked for evidence, have, so I understand, generally found the beliefs false.
I think this is both right and not in contradiction with the post.
The belief that pays the rent here is that there is going to be a high correlation between Mars being in conjunction with Jupiter and astrology believers born around August experiencing heightened feelings of being in danger.
That does not say anything on the “truth” of astrology itself.
Same applies to the article’s example on Wulky Wilkinsen. The belief that alienated resublimation justifies the fictional author’s retropositionality does not pay rent. The belief that failing to mention retropositionality correlates with higher chances of failing a literature test on Wilkinsen does probably pay rent.
From that belief, the expected experience should be Leo people being less fortunate during those days.
That was the point. It's a cheat to expect astrological truths to produce experiences of reading written materials about astrology, so it's a cheat to expect pure-maths truths …
Let me complete the ellipsis with what I actually said. A mathematical assertion leads me to expect a proof. Not merely experiences of reading written materials repeating the assertion.
And a proof still isn't an experience in the relevant sense. It's not like predicting an eclipse.
What’s the difference between behaviours of non-sentient objects and behaviours of sentient people that makes one an experience and the other not?
Allow me to answer your question with a question: What good is music?
Lacking a point of reference, the word ‘music’ is interchangeable with ‘noise’. Consider your query as if read by one who had been deaf right out of the womb.
Forgive the presumption (I'm new to this whole thing in many ways), but I have a feeling you either did not read or did not understand the "Map and Territory" sequence. Perhaps it would help you answer your own question.
http://wiki.lesswrong.com/wiki/Map_and_Territory
That’s not the right question. In order for facts to be useful, they must connect to something. Music is not a fact, except in the sense that “Music exists, therefore I expect to hear it under this circumstance.”
Math is an expression of what is constantly happening around us. Music is a thing, not a belief. “What good is Music?” Well, what good are trees? What good is love?
You put music into the wrong category. Music is a thing, not a belief.
Define music.
’e probably means the set of sounds that have certain structural properties, and that humans find enjoyable to listen to.
The question is, what definition of music could change the meaning of my explanation?
(Not implying that there isn’t one, just wondering, and making you wonder too.)
The what good is love thing wasn’t meant to be philosophical, it was just an example.
In practice, most of the time people figure out what to connect it to later. More precisely, most of it probably doesn’t connect to anything, but what does connect to stuff usually isn’t found to do so until much later than it is invented/discovered.
Some ungrounded concepts can produce your own behavior, which in itself can be experienced, so it's difficult to draw the line just by requiring concepts to be grounded. You believe that you believe in something, because you experience yourself acting in a way consistent with believing in it. It can define an intrinsic goal system, a point in mind design space as you call it. So one can't abolish all such concepts, only resist acquiring them.
For any instrumental activity, done to achieve some other end, it makes sense to check that specific examples are in fact achieving the intended end.
Most beliefs may have as their end the refinement of personal decisions. For such beliefs it makes sense to check not only whether they affect your personal experience, but also whether they affect any decisions you might make; beliefs could affect experience without mattering for decisions.
On the other hand, some beliefs may have as their end affecting the experiences or decisions of other creatures, such as those in the far future. And you may care about effects that are not experienced by any creature.
Only if you have reason to believe your naive pattern-matching of expectations to observations isn't already updating your expectations about instrumental activity.
Otherwise, you're "privileging the hypothesis" that you are in fact wrong.
It’s kind of like smoothing in machine learning. It will have costs and benefits.
Eliezer, your post above strikes me, at least, as a restatement of verificationism: roughly, the view that the meaning of a claim is the set of observations that it predicts. While this view enjoyed considerable popularity in the first part of the last century (and has notable antecedents going back to the early 18th century), it faces considerable conceptual hurdles, all of which have been extensively discussed in philosophical circles. One of the most prominent (and noteworthy in light of some of your other views) is the conflict between verificationism and scientific realism: that is, the presumption that science is more than mere data-predictive modeling, but the discovery of how the world really is. See also here and here.
Maybe I’m inferring from too little data, but I suspect that most readers at this site aren’t too interested in sceintific realism.
Our favourite mantra (“the map is not the territory”) acknowledges and then gracefully side-steps the issues that you’re raising.
(I just realized that Eliezer answers this below. Comment retracted. Is there some way for me to delete this?)
Rooney, as discussed in The Simple Truth I follow a correspondence theory of truth. I am also a Bayesian and a believer in Occam’s Razor. If a belief has no empirical consequences then it could receive no Bayesian confirmation and could not rise to my subjective attention. In principle there are many true beliefs for which I have no evidence, but in practice I can never know what these true beliefs are, or even focus on them enough to think them explicitly, because they are so vastly outnumbered by false beliefs for which I can find no evidence.
I, too, am nervous about having anticipated experience as the only criterion for truth and meaning. It seems to me that a statement can get its meaning either from the class of prior actions which make it true or from the class of future observations which its truth makes inevitable. We can’t do quantum mechanics with kets, but no bras. We can’t do Gentzen natural deduction with rules of elimination, but no rules of introduction. We can’t do Bayesian updating with observations, but no priors. And I claim that you can’t have a theory of meaning which deals only with consequences of statements being true but not with what actions put the universe into a state in which the statement becomes true.
This position of mine comes from my interpretation of the dissertation of Noam Zeilberger of CMU (2005, I think). Zeilberger's main concern lies in logic and computer science, but along the way he discusses theories of truth implicit in the work of Martin-Löf and Dummett.
Perplexed, I’m not sure I understood what you meant by
Or if I agree with it at all. Wouldn't statements about what actions make certain statements true simply be part of the first category? I don't see a problem with only having statements and their consequences. I see you made this comment 12 years ago, so I don't know where you stand on this today.
That seems obviously correct. However, unless you pursue knowledge for its own sake, you should probably not be overly concerned with preserving past truths—unless they are going to impact on future decisions.
Of course, the decisions of a future superintelligence might depend on all kinds of historical minutiae that we don't regard as important. So maybe we should preserve, for its sake, those truths we regard as insignificant. However, today, probably relatively few are enslaved to future superintelligences, and even then, it isn't clear that this is what they would want us to do.
An explicit belief that you would not allow yourself to hold under these conditions would be that the tree which falls in the forest makes a sound—because no one heard it, and because we can’t sense it afterwards, whether it made sound or not had no empirical consequence.
Every time I have seen this philosophical question posed on lesswrong, the two sophists that were arguing about it were in agreement that a sound would be produced (under the physical definition of the word), so I’d be really surprised if you could let go of that belief.
Hm, yeah. The trouble is how the doctrine handles deductive logic—for example, the belief that a falling tree makes vibrations in the air when the laws of physics say so is really a direct consequence of part of physics. The correct answer definitely appears to be that you can apply logic, and so the doctrine should be not to believe in something when there is no Bayesian evidence that differentiates it from some alternative.
While I fully agree with the principle of the article, something stuck out to me about your comment:
What I noticed was that you were basically defining a universal prior for beliefs, as much more likely false than true. From what I’ve read about Bayesian analysis, a universal prior is nearly undefinable, so after thinking about it a while, I came up with this basic counterargument:
You say that true beliefs are vastly outnumbered by false beliefs, but I say, how could you know of the existence of all these false beliefs, unless each one had a converse, a true belief opposing it that you first had some evidence for? For otherwise, you wouldn’t know whether it was true or false.
You may then say that most true beliefs don’t just have a converse. They also have many related false beliefs opposing them. But I would say, those are merely the converses that spring from the connections of that true belief with its many related true beliefs.
By this, I hope I’ve offered evidence that a fifty-fifty universal T/F prior is at least as likely as one considering most unconsidered ideas to be false. (And I would describe my further thoughts if I thought they would be useful here, but, silly me, I’m replying to a post from almost 8 years ago.)
If you have an arbitrary proposition—a random sequence of symbols constrained only by the grammar of whatever language you’re using—then perhaps it’s about equally likely to be true or false, since for each proposition p there’s a corresponding proposition not p of similar complexity.
But the “beliefs” people are mostly interested in are things like these:
There is exactly one god, who created the universe and watches over us; he likes forgiveness, incense-burning, and choral music, and hates murder, atheism and same-sex marriage.
Two nearby large objects, whatever they are, will exert an attractive force on one another proportional to the mass of each and inversely proportional to the square of the distance between them.
and the negations of these are much less interesting because they say so much less:
Either there is no god or there are multiple gods, or else there is one god but it either didn’t create the universe or doesn’t watch over us—or else there is one god, who created the universe and watches over us, but its preferences are not exactly the ones stated above.
If you have two nearby objects, whatever force there may be between them is not perfectly accurately described by saying it’s proportional to their masses, inversely proportional to the square of the distance, and unaffected by exactly what they’re made of.
So: yeah, sure, there are ways to pick a “random” belief and be pretty sure it’s correct (just say “it isn’t the case that” followed by something very specific) but if what you’re picking are things like scientific theories or religious doctrines or political parties then I think it’s reasonable to say that the great majority of possible beliefs are wrong, because the only beliefs we’re actually interested in are the quite specific ones.
I don’t think “converse” is the word you’re looking for here—possibly “complement” or “negation” in the sense that (A || ~A) is true for all A—but I get what you’re saying. Converse might even be the right word for that; vocabulary is not my forte.
If you take the statement “most beliefs are false” as given, then “the negation of most beliefs is true” is trivially true but adds no new information. You’re treating positive and negative beliefs as though they’re the same, and that’s absolutely not true. In the words of this post, a positive belief provides enough information to anticipate an experience. A negative belief does not (assuming there are more than two possible beliefs). If you define “anything except that one specific experience” as “an experience”, then you can define a negative belief as a belief, but at that point I think you’re actually falling into exactly the trap expressed here.
If you replace “belief” with “statement that is mutually incompatible with all other possible statements that provide the same amount of information about its category” (which is a possibly-too-narrow alternative; unpacking words is hard sometimes), then “true statements that are mutually incompatible with all other possible statements that provide the same amount of information about their category are vastly outnumbered by false statements that are mutually incompatible with all other possible statements that provide the same amount of information about their category” is something that I anticipate you would find true. You and Eliezer do not anticipate a different percentage of possible “statements that are mutually incompatible with all other possible statements that provide the same amount of information about their category” being true.
As for universal priors, the existence of many incompatible possible (positive) beliefs in one space (such that only one can be true) gives a strong prior that any given such belief is false. If I have only two possible beliefs and no other information about them, then it only takes one bit of evidence—enough to rule out half the options—to decide which belief is likely true. If I have 1024 possible beliefs and no other evidence, it takes 10 bits of evidence to decide which is true. If I conduct an experiment that finds that belief 216 +/- 16 is true, I’ve narrowed my range of options from 1024 to 33, a gain of just less than 5 bits of evidence. Ruling out one more option gives the last of that 5th bit. You might think that eliminating ~96.8% of the possible options sounds good, but it’s only half of the necessary evidence. I’d need to perform another experiment that can eliminate just as large a percentage of the remaining values to determine the correct belief.
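The bit-counting in that last paragraph can be verified directly; a quick sketch of the same arithmetic:

```python
import math

# One bit of evidence halves a space of equally likely beliefs,
# so singling out one of N beliefs takes log2(N) bits.
assert math.log2(2) == 1.0      # two beliefs: one bit
assert math.log2(1024) == 10.0  # 1024 beliefs: ten bits

# An experiment finding belief 216 +/- 16 true leaves 33 candidates,
# eliminating ~96.8% of the options but gaining only about half
# of the required ten bits of evidence.
gained = math.log2(1024) - math.log2(33)
print(round(gained, 3))  # 4.956 -- "just less than 5 bits"
```

Ruling out one more candidate (33 down to 32) completes the fifth bit, exactly as the comment says.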
It’s amazing how many forms of irrationality failure to see the map-territory distinction, and the resulting reification of categories (like ‘sound’) that exist in the mind, causes: stupid arguments, phlogiston, the Mind Projection Fallacy, correspondence bias, and probably also monotheism, substance dualism, the illusion of the self, the use of the correspondence theory of truth in moral questions… how many more?
I think you’re being too hard on the English professor, though. I suspect literary labels do have something to do with the contents of a book, no matter how much nonsense might be attached to them. But I’ve never experienced a college English class; perhaps my innocent fantasies will be shaken then.
Michael V, you could say that mathematical propositions are really predictions about the behavior of physical systems like adding machines and mathematicians. I don’t find that view very satisfying, because math seems to so fundamentally underly everything else—mathematical truths can’t be changed by changing anything physical, for instance—but it’s one way to make math compatible with anticipation.
I think Eliezer’s point was about the student. “Wulky Wilkinsen is a ‘post-utopian’” could be meaningful, if you know what a post-utopian is and is not (I don’t, and don’t care). The student who learns just the statement, however, has formed a floating belief.
We might even initially use propositional beliefs as indicators of meaningful beliefs about the world. But if we then discuss these highly compressed beliefs without referencing their meaning, we often feel like we are reasoning when really we have ceased to speak about the world. That is, grounded beliefs can become “floaty” and spawn further “floaty” beliefs.
In my sociology class, we talk about how “Man in his natural state has liberty because everyone is equal”. “Natural state”, “liberty”, and “equal” could conceivably be linked to descriptions of social interaction or something. However, class after class we refrain from talking about specific behaviors. Concepts float away from their referents without much resistance—it’s all the same to the student, who only needs to make a few unremarkable remarks to get his B+ for class participation. Compare:
“Man in his natural state has liberty because everyone is equal”
“Man in his natural state is equal because everyone has liberty”
“When everyone has liberty and is equal, man is in his natural state”
These statements should express very different beliefs about the world, but to the student they sound equally clever coming out of the professor’s mouth.
I agree with those who say it’s okay to figure things out later. If my music professor says a certain composer favors the Aeolian mode, I may not be able to visualize that on the spot, but who cares? I can remember the statement and think about it later. Likewise with phlogiston: I have a vague concept of what it is, and someday the alchemists will discover more precisely what’s going on there.
Too much cognitive effort would be spent if, every time I thought about linear algebra, I had to visualize the myriad concrete instances in which it will be applied. I bet thinking in abstractions results in way more economical use of thinking time and thinking-matter.
In what way is the belief that beliefs should be grounded not a free-floating belief itself?
I anticipate expressing free-floating beliefs would get me negative karma on Less Wrong.
More seriously:
I do not anticipate free-floating beliefs being useful in the same sense that maps of reality are useful. A map can turn out to be accurate or inaccurate, and insofar as it is accurate it can help me navigate and manipulate reality. My belief that “a proper belief should not be free-floating” prohibits free-floating beliefs from doing any of that.
Or one might as well see it as not a belief, but as a definition. There’s BeliefType1 which is grounded in reality, and BeliefType2 which is not, and we happen to call BeliefType1 a “proper belief”. (Of course we still do it for a reason, because we care about our sheep, or rather, we care about our beliefs being true and thus useful.)
Not sure which approach makes more sense.
The ability to anticipate experiences is one of our maximands because we have goals that are optimally achieved with this ability. To believe that beliefs should allow us to anticipate experiences is grounded in the desire to achieve our goals.
One way of answering might be to say that there is no separate “belief” that beliefs should be grounded. But I’m not sure.
All I know is that the question annoys me, but I can’t quite put my finger on it. It reminds me of questions like (1) the accusation that you can’t justify the use of logic logically, or (2) the accusation that tolerance is actually intolerant—because it’s intolerant of intolerance. There might be a level distinction that needs to be made here, as in (2) - and maybe in (1) though I think that’s different.
(1) has come out of my mouth on a few occasions, albeit not in those exact words. It’s normally after a few beers, when I feel like playing the extreme skeptic à la David Hume, just to annoy everyone. I think the best way around it is to resort to the empirical argument and say that, in our experience, it is always right: essentially the same thing Yudkowsky does with PA arithmetic here. Trying to find an argument against it which is truly “rationalist” in the continental sense has been a dead end in my experience.
(2) sort of depends on the pragmatics and what “tolerance” actually means to the persons involved in a given context. If you define tolerance as simply being tolerant of other viewpoints, then you can still be tolerant of the intolerant viewpoints. However, if you define it as freedom from bigotry, then that could indeed be called “intolerant” by the standards of the first definition.
I hope I’m making sense here.
Mark: Believing that beliefs should be grounded anticipates that there would be absolutely no change in anticipation if one were to change these free-floating ideas. Of course this doesn’t really answer your question, because it just restates the definition of ‘free-floating beliefs’ in different words. This belief actually follows from Eliezer’s belief in Occam’s Razor, which predicts that, when faced with unexplained events, if one creates a set of theories explaining those events, any predictions made by the simple theories are more likely to actually happen than predictions made by the complex ones. I’m not quite sure whether Occam’s Razor is an axiom of science or just yet another belief. At least there is quite a bit of support for it, if you look into the history of science.
Another point: I think phlogiston is a bit of a poor example. Phlogiston actually corresponds very closely with something currently believed to be real: phlogiston is the absence of oxygen. Seen this way, it’s very well possible to build a theory of phlogiston explaining and predicting nearly all observations of fire, e.g., fire releases phlogiston, and if you burn something in a confined space the air gets saturated with phlogiston and cannot take in any more, so the fire goes out. A very important argument in the debate between phlogistonians and oxygenists came when experiments were done to measure the weight of phlogiston and oxygen, and phlogiston turned out to have a negative weight.
Jan: Occam’s razor is not so much a rule of science but an operating guideline for doing science. It could be reduced to “test simple theories first”. In the past this has been very useful in keeping scientific effort productive, the ‘belief’ is that it will continue to be useful in this way.
This led to a fun read of the “Occam’s razor” Wikipedia entry. Hickam’s dictum in particular was a great find (generalized beyond medicine, it could be that explanations for unexplained events can be as complex as they damn well please). As a practical corrective, it seems to me that probability theory suggests that the best explanation accessible to us for unexplained events is in the set of simpler theories, but is probably not one of the absolute simplest.
Eliezer once wrote that “We can build up whole networks of beliefs that are connected only to each other—call these “floating” beliefs. It is a uniquely human flaw among animal species, a perversion of Homo sapiens’s ability to build more general and flexible belief networks.
The rationalist virtue of empiricism consists of constantly asking which experiences our beliefs predict—or better yet, prohibit.”
I can’t see how nearly all of the beliefs expressed in this post predict or prohibit any experience.
“Alchemists believed that phlogiston caused fire”
How is that different than our current belief that oxygen causes fire?
Because one of these allows you to make predictions, and the other doesn’t. Saying “fire has a cause, and I’m going to call it ‘phlogiston’!” doesn’t tell you anything about fire; it’s just a relabeling. Now, if you make enough observations, maybe you’ll eventually conclude that “phlogiston is the absence of oxygen” (even though this isn’t really correct), but at that point you can throw out the label “phlogiston”. Contrariwise, if you say “oxidation causes fire”, where “oxygen” is a previously known thing with known properties, then this allows you to actually make predictions about fire: e.g., the fact that a candle in a sufficiently small closed space will go out before it melts, but not necessarily if there’s a plant in there too. One pays rent, the other doesn’t.
The hypothesis went a little deeper than that. “Flammable things contain a substance, and its release is fire” lets you make many predictions — e.g., that things will burn in vacuum, or that things burned in open air will always lose mass (this is how it was falsified).
Ah, true.
Always gain mass, once they realized it was negative mass.
The fact that burned things don’t always gain mass doesn’t falsify phlogiston any more than it falsifies oxygen, for the same reason.
Also, people didn’t find the change in weight particularly useful, so this wasn’t that big a problem.
Again, the vacuum thing isn’t much of a problem either. It’s not necessarily possible to purify phlogiston.
I’m not sure I follow, oxidization doesn’t predict gaining or losing mass (on any scale like phlogiston would, that is), it predicts an interaction of materials forming a new composite substance. Oxidation doesn’t prevent material from being lost or changed in other ways which could cause an overall greater or lesser mass than the original object. What it does predict, however, is that the total mass of all molecules in the equation, once accounted for, will be the same. This is consistent with observation.
If phlogiston has a negative mass, then anything that can burn must gain mass. I don’t see any way around it. The theory states that it is a release of negative material, and there is no way to account for it once released.
One thing you would expect to find with phlogiston is an object that was primarily made up of phlogiston, giving it a negative mass. Explosives, for example, clearly have so much phlogiston that it literally rips the object (and anything nearby) apart when released. You would therefore expect all explosives to be relatively light in spite of the original weight of their components.
You could test this with black powder: saltpeter, charcoal, and sulfur each release a certain amount of phlogiston when burned. Combine them and significantly more phlogiston is clearly released. You would therefore expect more phlogiston to have flowed into the material when the three ingredients were combined into gunpowder. However, the weights actually stay very much the same. The observation doesn’t bear out the prediction, so the prediction is clearly wrong. If the prediction is wrong, the theory that made it is either wrong outright or flawed in some way. Since the only prediction phlogiston can make is wrong, the theory is at the very least flawed in some crippling way, and needs to be completely reworked.
Its inability to predict observations is what killed it. You can predict what will happen when you add oxygen to a reaction. You cannot predict what will add phlogiston to a material, thereby allowing it to burn.
A huge example is the sun. It is a giant ball of fire—therefore, a giant ball of phlogiston, or at least a very significant portion of its mass must be made up of phlogiston in order for it to burn that intensely for that long. So it should have a low mass, possibly even a negative mass. Yet this giant ball of mostly phlogiston is actually the heaviest thing in the solar system by a massive margin.
Phlogiston is incompatible with many, many theories that have been independently verified. Also, oxygen causing fire is not the theory. The theory is molecules and their chemical interactions, of which oxygen is just one type; the prediction that oxygen drives most exothermic reactions is consistent with all other chemical reactions, and follows from rules that hold whether a reaction is exothermic or endothermic, among a great many other things. It also predicts which objects will burn and which will not. This same chemical theory leads to atomic theory, which predicts fusion, which has absolutely nothing at all to do with oxygen, yet describes the behavior of the sun very accurately before you even start to measure the sun’s output.
The way to test a theory is to predict first, then observe. This is basic science. Phlogiston cannot pass this test, chemical theory can.
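The mass-conservation reasoning above can be made concrete with a worked check; methane combustion and the atomic-mass values are my own illustrative choices, not from the thread:

```python
# Conservation of mass for methane combustion:
#   CH4 + 2 O2 -> CO2 + 2 H2O
# Standard atomic masses in g/mol.
ATOMIC_MASS = {"C": 12.011, "H": 1.008, "O": 15.999}

def molar_mass(atoms):
    """Molar mass of a molecule given as {element: count}."""
    return sum(ATOMIC_MASS[el] * n for el, n in atoms.items())

reactants = molar_mass({"C": 1, "H": 4}) + 2 * molar_mass({"O": 2})
products = molar_mass({"C": 1, "O": 2}) + 2 * molar_mass({"H": 2, "O": 1})

# Chemical theory predicts the two totals match; they do, to rounding error.
assert abs(reactants - products) < 1e-6
```

Both sides come to about 80.04 g/mol: exactly the kind of predict-first, observe-second test that phlogiston, with its unmeasurable negative-mass substance, cannot pass.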
You can make exactly the same predictions with phlogiston. If you burn coal next to iron, it will refine it. You could predict this with oxygen (oxygen is moving from the iron to the coal) or with phlogiston (phlogiston is moving from the coal to the iron).
It’s like with electric charge. If you think of it as positive charge moving around, it has almost exactly the same predictive power as thinking of it as electrons moving around.
But you can only predict it if you already know that a gain of phlogiston refines iron; if you don’t, you can only observe it afterward and write it down as a property of phlogiston.
If you don’t know anything about oxygen or phlogiston beforehand, then, sure, they’re pretty much equally predictive, i.e., not very much. But if “oxygen” is not in fact just an arbitrary label as “phlogiston” is, but in fact something you’re already working with in other ways, then they’re not symmetric.
Also as Nick Tarleton points out below there are other asymmetries, though those are not so much in the predictive power.
“But you can only predict it if you already know that a gain of phlogiston refines iron”
Same goes for oxygen.
That’s what I just said.
Sorry. Too used to defending my position to realize you’re not attacking it.
Okay, I admit that that’s not really a prediction, but until then, they couldn’t even explain it.
If you’re going to do it like this, what’s one thing oxygen predicted?
By the way, I’m responding to the fact that I lost two karma points on that, not any actual post.
In this specific example and at that level of precision, yes; but only one of these models can be (easily) refined to make precise, correct quantitative predictions. Even at that qualitative level, though, they make different predictions about burning things in vacuum or in non-oxygen atmospheres.
Just because I haven’t seen it in this particular discussion, here’s a link to some more defense of phlogiston.
Uhhh… oxygen exists?
And so does the absence of oxygen, or, as they called it, phlogiston.
You’re giving phlogiston qualities no one who held that theory gave it. If you want to call the absence of oxygen phlogiston, okay, but you aren’t talking about the same phlogiston everyone else is talking about. Moreover, thinking about fire this way is clumsy and incompatible with the rest of our knowledge about physics and chemistry.
We already had a conception of matter when phlogiston was invented… and phlogiston was understood as a kind of matter. To say that phlogiston is really this other kind of thing, which isn’t matter but a particular kind of absence of matter, is both unhelpful and a distortion of phlogiston theory. The whole point of the phlogiston theory was that its proponents thought there was a kind of matter responsible for fire! But there isn’t matter like that.
Now by defining phlogiston as the absence of oxygen you might be able to model combustion in a narrow set of circumstances—but you couldn’t fit that model with any of your other knowledge about physics and chemistry.
In short neither the original kind nor your kind of phlogiston exist.
It was at one point theorized to have negative mass. If it’s matter, and you make everything else weigh more, it works out the same.
I fail to see why you think it can’t fit with other knowledge of physics and chemistry. You can think of electricity as positively charged particles moving around with virtually zero loss of predictive power.
For example, you can’t use phlogiston in any model that also includes oxygen. Nor can you do any work at the molecular or sub-molecular level.
Similarly, thinking of electricity in terms of positively charged particles would be incompatible with atomic theory.
“you can’t use phlogiston in any model that also includes oxygen”
You also can’t use oxygen in any model that also includes phlogiston. Oxygen and phlogiston both describe the same phenomena. People were looking at it from both ends, and found out they were the same thing. Oxygen was slightly more accurate than phlogiston, but they were both about the same accuracy.
“Nor can you do any work at the molecular or sub-molecular level.”
It’s also incompatible with much of quantum physics.
Every physical theory we’ve come up with, when examined close enough, is completely and utterly wrong. If we’re going to have any useful definition of accuracy, you can’t just throw it out of the window because of that.
It worked perfectly for almost everything they did at the time. For that matter, it works perfectly for almost everything we’re doing now.
Sigh. Oxygen has properties that have nothing to do with fire. You need it to properly model cellular respiration, water electrolysis, air currents, buoyancy, the properties of compounds of which the element is a part, etc. Give me a coherent periodic table of elements that includes phlogiston instead of oxygen and we can talk.
Some theories are less wrong. So yes, you absolutely can throw a physical theory out the window if it is wrong. You might save the equations so you can make quick, approximate calculations (i.e. Newtonian mechanics) but that doesn’t mean you include all the entities in the theory in your ontology.
This is essentially a truism for all outdated scientific theories.
Sure unless you want to make sense of combustion and anything that requires knowledge of modern chemistry or atomic theory at the same time!
The absence of oxygen isn’t much like a substance whose release is fire:
it doesn’t have any consistent physical or chemical properties;
many things not containing oxygen fail to burn in air, and none burn in vacuum;
on the other hand, things do burn under oxidizers other than oxygen;
oxidized substances are very poorly modeled by mixtures of the original substance and oxygen;
things burned in open air can either gain or lose weight;
etc.
“it doesn’t have any consistent physical or chemical properties;”
And oxides do? Or are you referring to pure phlogiston? It’s not that big a deal that you can’t get pure phlogiston. It’s nigh impossible to purify fluorine. I think that under our current understanding of physics, it’s totally impossible to isolate a single quark.
It moves because it’s attracted to some things more than others. It’s still attracted to everything more than itself.
“many things not containing oxygen fail to burn in air”
Hurts both theories equally. Presumably, it’s strongly bonded to the phlogiston/it doesn’t strongly bond to oxygen.
″...and none burn in vacuum;”
As I said, you can’t get pure phlogiston.
“on the other hand, things do burn under oxidizers other than oxygen;”
Hurts both theories equally. The only way to solve it to my knowledge is that there are things that cause fire other than phlogiston/oxygen.
“things burned in open air can either gain or lose weight;”
Hurts both theories equally. Presumably, some of the matter escapes into the air sometimes.
Everything you listed either is only a very minor problem or is exactly as bad for the idea of oxygen.
I loved this post, but I have to be a worthless pedant.
If you drop a ball off a 120-m tall building, you expect impact in t = sqrt(2H/g) ≈ 5 s. But that would be when the second hand is on the 1 numeral.
Heh. I got this right originally, then reread it just recently while working on the book, saw what I thought was an error (1 numeral? just one second? why?) and “fixed” it.
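For reference, the commenter’s arithmetic checks out; a quick sketch, assuming g ≈ 9.8 m/s²:

```python
import math

# Time for a ball dropped from rest to fall H meters: t = sqrt(2*H/g).
g = 9.8    # m/s^2, approximate surface gravity
H = 120.0  # m, the building height from the comment

t = math.sqrt(2 * H / g)  # about 4.95 s, i.e. roughly 5 seconds
```

On a clock face each numeral spans five seconds, so a roughly five-second fall does put the second hand near the “1”.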
What about knowledge for the sake of knowledge? For instance I don’t anticipate that my belief that The Crusades took place will ever directly affect my sensory experiences in any way. Does that then mean that this belief is completely worthless and on the same level as the belief in ghosts, psychics, phlogiston, etc.?
Wouldn’t taking your chain of reasoning to its logical conclusion require one to “evict” all beliefs in everything that one has not, and does not anticipate to, personally see, hear, smell, taste, or touch? After all, how much personal sensory experience do you have that confirms the existence of atoms, for example?
DP
I think Eliezer’s point is less strong than you think: for one thing, reading a history book is a sensory experience, and fewer history books would proclaim that The Crusades occurred in worlds where they had not than in worlds where they had.
I was going to write a more detailed reply, but then realized that any continued discussion will require us to debate what exactly the OP meant to say in his post, which is pointless since neither of us can read his mind. So let’s just call it a day.
DP
This is something of a fallacy of gray. Of course we can read his mind, through the power of human telepathy, by reading more on the same topic. We can’t read minds perfectly, but perfect knowledge is never available anyway, and unless you can point out the specific uncertainty you have that decides the discussion, there is no sense in requiring more detail. You might want to stop the discussion for other reasons, but the reason you stated rings false.
I was expecting the link to be Mundane Magic.
The point is not that the ability is “magical”, but that it’s real, that we do have an ability to read minds, in exactly the same sense as Dpar appealed to the impossibility of.
First of all, calling speech “human telepathy” strikes me as a little pretentious, as well as inaccurate, since the word “telepathy” is generally accepted to have supernatural connotations. Speech is speech; no need to complicate the concept.
Secondly, the article you linked seemed a little rambling and without a clear point. All I was able to take away from it is that the meaning of words is relative. If that’s the case then I respond with “well, duh!”; if I missed a deeper point, please enlighten me.
Finally, when you take it upon yourself to question another person’s purely subjective reasoning, you’re treading very close to completely indefensible territory. If I say that I wanted to stop the discussion because I believe that the author’s intended meaning is ambiguous, it’s a tall order to question that that is indeed what I believe. Unless you can come up with clear evidence of how my behavior contradicts my stated subjective opinion, you more or less have to take my word that that really is what I think.
DP
You misunderstand. Vladimir Nesov was not claiming that you don’t believe that the author’s intended meaning is ambiguous. Rather, he was claiming that your belief that “the author’s intended meaning is ambiguous” is false, or at least not enough to constitute a good reason for stopping the discussion.
The point of calling speech ‘human telepathy’ in this instance is that you claimed there’s no way to know what the author was thinking since we “can’t read his mind”. But there is a way to know what the author was thinking to some extent, so by reading your own reasoning backwards we therefore indeed can read minds.
I stated that taking the OP’s reasoning to its logical conclusion requires one to “evict” all beliefs in everything that one has not, and does not anticipate to, personally see, hear, smell, taste, or touch. RobinZ responded by saying that the OP’s point is less strong than I think. Since two (presumably) reasonable people can disagree on what the OP meant, his point, as it is written, is by definition ambiguous.
Where do we go from here other than debate what he really meant? What is the point of such debate since neither of us has any special insight into his thought process that would allow us to settle this difference of subjective interpretations? I believe that to be sufficient reason for stopping the discussion. I’m not sure what specifically Vladimir takes issue with here.
As to your point of human telepathy—comparing reading what someone wrote to reading his mind is a very big stretch. I can see how you could make that argument if you get really technical with word definitions, but I think that it is generally accepted that reading what a person wrote on a computer screen and reading his mind are two very different things.
DP
Right, but RobinZ was not arguing against this claim (depending on what you mean by ‘personally’ here) but rather pointing out that your reasoning was flawed.
RobinZ pointed out that your belief that the crusades took place affects your sensory experience; if you believe they happened, then you should anticipate having the sensory experience of seeing them in the appropriate place in a history book, if you were to check.
If you thought that your belief that the crusades happened did not imply any such anticipated experiences, then yes, it would be worthless and on the same level as belief in an invisible dragon in your garage.
So reading about something in a book is a sensory experience now? I beg to differ. A sensory experience of The Crusades would be witnessing them first hand. The sensory experience of reading about them is perceiving patterns of ink on a piece of paper.
DP
Edit: Also, I think that RobinZ didn’t state that as something that she believed; she stated it as something that she believed the OP meant. It’s that subjective interpretation of his position that I didn’t want to debate. If you wish to adopt that position as your own and debate its substance, we certainly can.
What’s important isn’t the number of degrees of removal, but that the belief’s being true corresponds to different expected sensory experiences of any kind at all than its being false. The sensory experience of perceiving patterns of ink on a piece of paper counts.
Now you could say: “reading about the Crusades in history books is strong evidence that ‘the Crusades happened’ is the current academic consensus,” and you could hypothesize that the academic consensus was wrong. This further hypothesis would lead to further expected sensory data—for instance, examining the documents cited by historians and finding that they must have been forgeries, or whatever.
If you adopt that position, then the belief in ghosts, for instance, will result in the sensory experience of reading or hearing about them, no? Can you then point to ANY belief that doesn’t result in a sensory experience, other than something that you make up yourself out of thin air?
If the concept of sensory experience is to have any meaning at all, you can’t just extrapolate it as you see fit. If you can’t see, hear, smell, taste, or touch an object directly, you have not had sensory experience with that object. That does not mean that that object does not exist though.
DP
Yes, ghost stories are evidence for the existence of ghosts. Just not very strong evidence.
There can be indirect sensory evidence as well as direct.
You are disputing definitions. Reading something in a book is the sort of thing you’d change expectation about depending on your model of the world, as are any other observations. If your beliefs influence your expectation about observations, they are part of your model of reality. On the other hand, if they don’t, they are sometimes part of your model of reality anyway, but that’s a more subtle point.
And returning to your earlier concerns, consider me having a special insight into the intended meaning, and providing a counterexample to the impossibility of continuing the discussion. Reading something in a history book definitely counts as anticipated experience.
Very interesting read on disputing definitions. While the solution proposed there is very clever and elegant, this particular discussion is complicated by the fact that we’re discussing the statements of a person who is not currently participating. Coming up with alternate words to describe our ideas of what “sensory experience” means does nothing to help us understand what he meant by it. Incidentally this is why I didn’t want to get drawn into this debate to begin with.
Also—“consider me having a special insight into the intended meaning”—on what grounds shall I consider your having such special insight?
I’ve closely followed Yudkowsky’s work for a while, and have a pretty good model of what he believes on topics he publicly discusses.
Fair enough. So if, on your authority, the OP believes that reading about something is anticipated experience, does that not then cover every rumor, fairy tale, and flat out non-sense that has ever been written? What then would be an example of a belief that CANNOT be connected to an “anticipated experience”?
See this comment on the first part of your question and this page on the second (but, again, there are valid beliefs that don’t translate into anticipated experience).
I agree wholeheartedly that there are valid beliefs that don’t translate into anticipated experience. As a matter of fact what’s written there was pretty much the exact point that I was trying to make with my very first response in this topic.
Does that not, however, contradict the OP’s assertion that “Every guess of belief should begin by flowing to a specific guess of anticipation, and should continue to pay rent in future anticipations. If a belief turns deadbeat, evict it.”? That’s what I took issue with to begin with.
It does contradict that assertion, but not at first approximation, and not in the sense in which you took issue with it. You have to be very careful if a belief doesn’t translate into anticipated experience. Beliefs about historical facts that don’t translate into anticipated experience (or don’t follow from past experience, that is, observations) are usually invalid.
You seem to place a good deal of value on the concept of anticipated experience, but you give it a definition that’s so broad that the overwhelming majority of beliefs will meet the criteria. If the belief in ghosts for instance can lead to the anticipated experience of reading about them in a book, what validity does the notion have as a means of evaluating beliefs?
When a belief (hypothesis) is about reality, it responds to new evidence, or arguments about previously known evidence. It’s reasonable to expect that as a result, some beliefs will turn out incorrect, and some certainly correct. Either way it’s not a problem: you do learn things about the world as a result, whatever the conclusion. You learn that there are no ghosts, but there are rainbows.
The problem is the beliefs that purport to be speaking about reality but really don’t, so that you become deceived by them. Not being connected to reality through anticipated experience, they take your attention where there is no use for them, influence your decisions for no good reason, and protect themselves by ignoring any knowledge about the world you obtain.
It is a great heuristic to treat any beliefs that don’t translate into anticipated experience with utmost suspicion, or even to run away from them in horror.
How would you learn that there are no ghosts? You form the belief “there are ghosts”, which leads to the anticipated experience (by your definition of such) that “I will read about ghosts in a book”; you go and read about ghosts in a book. Criteria met, belief validated. Same goes for UFOs, psychics, astrology, etc. What value does the concept of anticipated experience have if it fails to filter out even the most common fallacious beliefs?
That there are books about ghosts is evidence for ghosts existing (but also for lots of other things). There are also arguments against this hypothesis, both a priori and observational. A good model/theory also explains why you’d read about ghosts even though there is no such thing.
You’re not addressing my core point, though. If the criterion of anticipated experience, as you define it, is as likely to be satisfied by fallacious beliefs as by valid ones, what purpose does it serve?
I addressed that question in this comment; if something is unclear, ask away. The difference is between a belief that is incorrect, and a belief that is not even wrong.
Alright, I think I see what you’re getting at, but I still can’t help but think that your definition of sensory experience is too broad to be really useful. I mean, the only type of belief it seems to filter out is absolute nonsense like “I have a third leg that I can never see or feel”. Did I get that about right?
Yes. It happens all the time. It’s one way nonsense protects itself, to persist for a long time in minds of individual people and cultures.
(More generally, see anti-epistemology.)
So essentially what you and Eliezer are referring to as “anticipated experience” is just basic falsifiability then?
With a Bayesian twist: things don’t actually get falsified, don’t become wrong with absolute certainty; rather, observations can adjust your level of belief.
Ok, I understand what you mean now. Now that you’ve clarified what Eliezer meant by anticipated experience my original objection to it is no longer applicable. Thank you for an interesting and thought provoking discussion.
Slightly OT, but this relates to something that really bugs me. People often bring up the importance of statistical analysis and the possibility of flukes/lab error, in order to prove that, “Popper was totally wrong, we get to completely ignore him and this out-dated, long-refuted notion of falsifiability.”
But the way I see it, this doesn’t refute Popper, or the notion of falsifiability: it just means we’ve generalized the notion to probabilistic cases, instead of just the binary categorization of “unfalsified” vs. “falsified”. This seems like an extension of Popper/falsifiability rather than a refutation of it. Go fig.
I reached a much clearer understanding once I peeled away the structure of probability measure and got down to mathematically crisp events on sample spaces (classes of possible worlds). From this perspective, there are falsifiable concepts, but they usually don’t constitute useful statements, so we work with the ones that can’t be completely falsified, even though parts of them (some of the possible worlds they include) do get falsified all the time, whenever you observe something.
Isn’t that like saying we’ve generalized the theory that “all is fire” to cases where the universe is only part fire? If falsification is absolute then Popper’s insight that “all is falsification” is just plain wrong; if falsification is probabilistic then surely the relevant ideas existed before Popper as probability theory. It’s not like Popper invented the notion that if a hypothesis is falsified we shouldn’t believe it.
Falsifiability can be quantified, in bits. If the only test you have for whether something’s true or not is something lame like whether it appears in stories or not, then you have a tiny amount of falsifiability. If there is a large supply of experiments you can do, each of which provides good evidence, then it has lots of falsifiability.
(This really deserves to be formalized, in terms of something along the lines of expected bits of net evidence, but I’m not sure how to do so, exactly. Expected bits of evidence does not work, because of scenarios where there is a small chance of lots of evidence being available, but a large chance of no evidence being available.)
Just a note about terminology: “expected bits of evidence” also goes by the name of entropy, and is a good thing to maximize in designing an experiment. (My previous comment on the issue.)
And if I understand you correctly, you’re saying that the problem with entropy as a measure of falsifiability, is that someone can come up with a crank theory that gives the same predictions in every single case, except one that is near impossible to observe, but which, if it happened, would completely vindicate them?
If so, the problem with such theories is that they have to provide a lot of bits to specify that improbable event, which would be penalized under the MML formalism because it lengthens the hypothesis significantly. That may be what you want to work into a measure of falsifiability.
But then, at that point, I’m not sure if you’re measuring falsifiability per se, or just general “epistemic goodness”. It’s okay to have those characteristics you want as a separate desideratum from falsifiability.
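The idea of quantifying falsifiability in bits can be sketched numerically. The helper below is a hypothetical formalization (expected absolute log-likelihood ratio over outcomes, weighted by their marginal probabilities), not a standard definition; it just illustrates how a “lame” test yields few expected bits while a sharply discriminating test yields many:

```python
import math

def expected_bits(p_outcomes_h1, p_outcomes_h0, prior_h1=0.5):
    """Expected absolute log-likelihood ratio, in bits, over an experiment's
    outcomes, weighting each outcome by its marginal probability."""
    prior_h0 = 1 - prior_h1
    total = 0.0
    for p1, p0 in zip(p_outcomes_h1, p_outcomes_h0):
        marginal = prior_h1 * p1 + prior_h0 * p0
        if p1 > 0 and p0 > 0:
            total += marginal * abs(math.log2(p1 / p0))
    return total

# A "lame" test: outcome probabilities barely differ between hypotheses.
weak = expected_bits([0.55, 0.45], [0.50, 0.50])

# A sharp test: the outcomes strongly discriminate between hypotheses.
strong = expected_bits([0.95, 0.05], [0.10, 0.90])

print(weak, strong)
```

Note this simple average still has the problem mentioned above: a small chance of enormous evidence can inflate the expectation even when the typical case provides none.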
Isn’t it an essential criterion of falsifiability to be able to design an experiment that can DEFINITIVELY prove the theory false?
That is the criterion which the Bayesian idea of evidence lets you relax. Instead of saying that “you need to be able to define experiments where at least one result would be completely impossible by the theory”, a Bayesian will tell you that “you need to be able to define experiments where the probability of one result under the theory is significantly different from the probability of another result”.
Look at, say, the theory that a coin is weighted towards heads. If you want to be pedantic, no result can “definitely prove” that it is not (unusual events can happen), but an even split of heads and tails (or a weighting towards tails) is much more unusual given that theory than a weighting towards heads.
Edit PS: I am totally stealing the meme that “Bayes is a generalization of Popper” from SilasBarta.
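The coin example can be made concrete. A minimal sketch (hypothetical numbers: a fair coin versus one weighted 0.8 towards heads), showing that an even split sharply lowers the weighted-coin theory’s posterior without ever driving it to exactly zero:

```python
from math import comb

def likelihood(p_heads, heads, tails):
    # Binomial probability of seeing this exact count of heads and tails.
    return comb(heads + tails, heads) * p_heads**heads * (1 - p_heads)**tails

def posterior_weighted(heads, tails, prior=0.5, p_weighted=0.8):
    # Posterior probability of "the coin is weighted towards heads"
    # against "the coin is fair", by Bayes' theorem.
    l_w = likelihood(p_weighted, heads, tails)
    l_f = likelihood(0.5, heads, tails)
    return prior * l_w / (prior * l_w + (1 - prior) * l_f)

p_even = posterior_weighted(10, 10)   # even split: strong evidence against
p_heavy = posterior_weighted(17, 3)   # heads-heavy run: strong evidence for
print(p_even, p_heavy)
```

The even split leaves the weighted-coin hypothesis at a small but nonzero probability, which is exactly the relaxation of “definitely prove false” being described.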
I’m pretty sure that was handily discussed in An Intuitive Explanation of Bayes’s Theorem and A Technical Explanation of Technical Explanation.
Fair point, and it was EY’s essay that showed me the connection. But keep in mind, the point of the essay is, “Bayesian inference is right, look how Popper is a crippled version of it.”
My point in saying “my” meme is different: “Popper and falsificationism are on the right track—don’t shy away from the concepts entirely just because they’re not sufficiently general.” It’s a warning against taking the failures of Popper to mean that any version of falsificationism is severely flawed.
Ehhcks-cellent!
Steal the meme, and spread it as far and as wide as you possibly can! The sooner it beats out “Popper is so 70 years ago”, the better. (Kind of ironic that Bayes long predated Popper, though the formalization of [what we now call] Bayesian inference did not.)
Example of my academically-respected arch-nemesis arguing the exact anti-falsificationist view I was criticizing.
As Robin’s explained below Bayesianism doesn’t do that. You should also see the works of Lakatos and Quine where they discuss the idea that falsification is flawed because all claims have auxiliary hypotheses and one can’t falsify any hypothesis in isolation even if you are trying to construct a neo-Popperian framework.
Yes, but that still doesn’t show falsificationism to be wrong, as opposed to “narrow” or “insufficiently generalized”. Lakatos and Quine have also failed to show how it’s a problem that you can’t rigidly falsifiy a hypothesis in isolation: Just as you can generalize Popper’s binary “falsified vs. unfalsified” to probabilistic cases, you can construct a Bayes net that shows how your various beliefs (including the auxiliary hypotheses) imply particular observations.
The relative likelihoods they place on the observations allow you to know the relative amount by which those various beliefs are attenuated or amplified by any particular observation. This method gives you the functional equivalent of testing hypotheses in isolation, since some of them will be attenuated the most.
Right, I was speaking in a non-Bayesian context.
If I remember rightly, that’s where poor old Popper came unstuck: having thought of the falsifiability criterion, he couldn’t work out how to rigorously make it flexible. And as no experiment’s exactly 100% uppercase-D Definitive, that led to some philosophers piling on the idea of falsifiability, as JoshuaZ said.
But more recent work in philosophy of science suggests a more sophisticated way to talk about how falsifiability can work in the real world.
The key idea is “severe testing”, where a “severe test” is a test likely to expose a specific error in a model, if such an error is present. Those models that pass more, and more severe, tests can be regarded as more useful than those that don’t. This approach also disarms the “auxiliary hypotheses” objection JoshuaZ paraphrased; one can just submit those hypotheses to severe testing too. (I wouldn’t be surprised to find out that’s roughly equivalent to the Bayes net approach SilasBarta mentioned.)
At the bottom of the sidebar, you will find a list of top contributors; Vladimir Nesov is on the list.
Belatedly: Welcome to Less Wrong! Please feel free to introduce yourself.
A belated thanks! :)
The LessWrong FAQ says that there is value in replying to old content, so I’m commenting in hopes that it is useful to someone in the future, and just for the sake of organizing my thoughts.
I would have phrased this differently than Yudkowsky, but I think I understand the concept he was getting at when he gave this example:
His point is that this is just semantics. It makes no difference to the world whether we label something “post-utopian” or “aegffsdfa eereraksrfa” or anything else. The words you read in the book will be the same. The reason I don’t like this example is that, if I actually knew some literary jargon, I might get some real verifiable information that does actually mean I should expect a specific kind of sensory experience. It’s just that the classification scheme is arbitrary, and so is my belief that one classification scheme is “correct”.
The label is just a label, so arguing about classification schemes is just semantics. Using this definition, your belief that the crusades took place would affect what sorts of things you would expect to read, and what sorts of archeological finds you would expect to find if you went looking for them. However, if you believe that the crusades marked the beginning of the high middle ages, that would just be semantics. We could say that the middle ages started at the sacking of Rome, or we could make a label like “dark ages” to describe the intermediary period. What we call it and how we classify it makes no difference in the actual reality of history. It’s just semantics.
Semantic labels are part of the structure of an explicit model. For instance, Chinese uses the same word for both “rat” and “mouse”. A model with a ratmouse vertex will behave differently from a model with separate rat and mouse vertices. The structure and function of a model affect what it predicts, what its users can notice, and how they behave. Agents do not passively receive a stream of predetermined experiences; they interact with the world, and the experiences they can expect depend on the structure and function of their models...
..and more besides. Models contain evaluative weightings as well as neutral structure. For instance, in the English speaking world, mice have the connotation of being cute, rats of being vermin. The professor might not be failing to specify an empirical confirmable concept when describing the writer as a post utopian: she might rather be succeeding in tweaking her students’ evaluative model. She might be aiming at making a social or political point.
There is a long history of the political influence of language, ranging from the Greek rhetoricians to Orwell’s essays. A STEM type might consider it pointless to focus on such issues rather than on what can be proved objectively. A humanities type might equally consider it pointless to focus on objective, empirical claims with no social or political upshot. Neither complaint is really about meaningfulness or semantics, in the sense of the meaningfulness of the words; rather, both are about the subjectively evaluated pointfulness of an activity.
By a convoluted meta-level irony, the way the term “semantics” is often used is itself a way of funneling the reader towards a conclusion. We have seen that there are circumstances where a semantic change would make a difference: where it makes a structural/functional change, and where it makes an evaluative/connotational difference. Since these circumstances don’t always apply, there are circumstances where a semantic change really is trivial, really “just semantics”. For instance, if the word “cat” were replaced by the word “zeb” in a connotationally neutral way, that would be semantics of a pointless kind that doesn’t change anything. But that situation is atypical. Although the standard rhetoric about what is “just semantics” suggests the opposite, most rewordings make a difference. Indeed, it is likely that people object to rewordings because they do make a difference, not because they don’t.
Consider:

A: So you’re pro-abortion?
B: I’m pro-choice.
A: That’s just semantics.
A has spotted that B’s rewording has strengthened his argument, by introducing a phrasing with a positive connotation, and so she objects to it… using the common apprehension that rewordings are just semantics, and don’t change anything!
Thanks for breaching that topic. I considered pointing out that my “aegffsdfa eereraksrfa” example might be more difficult to pronounce than “post-utopian”, and so actually would have an impact on the world in general. On reflection, I decided to make the assertion that it “makes no difference”, since that would spare a lot of confusion. It’s a good first order approximation. When introducing a topic, it’s important to take the Bohr model view of the world before trying to explain quarks and leptons.
The entanglement of semantic language with our interpretation of reality clouds things. Scientific language is precise, but often dry and hard to understand. However, by de-coupling the two worlds, we study the underlying reality without those (or perhaps with only minimal) distorting effects from our language. That’s what we are doing when we talk about Map and Territory here on LW. We get a better map from this, but if we also compare the collective maps of societies to the best maps of reality, we can look for systematic differences. Some of these are cognitive biases, which we tend to concentrate on here on LW. However, there are also many other interesting or useful things that we can learn about ourselves as mapmakers. For example, the Bouba/kiki effect might help us choose more intuitive vocabulary as we build a more and more extensive set of jargon.
Just studying the way languages evolve can be informative, whether it’s rigorously using Computational Linguistics or informally by an author or artist. The mere existence of a formal scientific understanding of reality allows a poet or philosopher, if they are familiar only with the answers but not the underlying explanations, to look at some facet of human nature and ask “isn’t it odd when people...”. A great deal of social commentary is built from that one question.
You write, “suppose your postmodern English professor teaches you that the famous writer Wulky Wilkinsen is actually a ‘post-utopian’. What does this mean you should expect from his books? Nothing.”
I’m sympathetic to your general argument in this article, but this particular jibe is overstating your case.
There may be nothing particularly profound in the idea of ‘post-utopianism’, but it’s not meaningless. Let me see if I can persuade you.
Utopianism is the belief that an ideal society (or at least one that’s much better than ours) can be constructed, for example by the application of a particular political ideology. It’s an idea that has been considered and criticized here on LessWrong. Utopian fiction explores this belief, often by portraying such an ideal society, or the process that leads to one. In utopian fiction one expects to see characters who are perfectible, conflicts resolved successfully or peacefully, and some kind of argument in favour of utopianism. Post-utopian fiction is written in reaction to this, from a skeptical or critical viewpoint about the perfectibility of people and the possibility of improving society. One expects to see irretrievably flawed characters, idealistic projects turn to failure, conflicts that are destructive and unresolved, portrayals of dystopian societies and argument against utopianism (not necessarily all of these at once, of course, but much more often than chance).
Literary categories are vague, of course, and one can argue about their boundaries, but they do make sense. H. G. Wells’ “A Modern Utopia” is a utopian novel, and Aldous Huxley’s “Brave New World” is post-utopian.
Indeed. Some rationalists have a fondness for using straw postmodernists to illustrate irrationality. (Note that Alan Sokal deliberately chose a very poor journal, not even peer-reviewed, to send his fake paper to.) It’s really not all incomprehensible Frenchmen. While there may be a small number of postmodernists who literally do not believe objective reality exists, and some more who try to deconstruct actual science and not just the scientists doing it, it remains the case that the human cultural realm is inherently squishy and much more relative than people commonly assume, and postmodernism is a useful critical technique to get through the layers of obfuscation motivating many human cultural activities. Any writer of fiction who is any good, for instance, needs to know postmodernist techniques, whether they call them that or not.
Yes.
That said, it’s not too surprising that postmodernists are often the straw opponent of choice.
The idea that the categories we experience as “in the world” are actually in our heads is something postmodernists share with cognitive scientists; many of the topics discussed here (especially those explicitly concerned with cognitive bias) are part of that same enterprise.
I suspect this leads to a kind of uncanny valley effect, where something similar-but-different creates more revulsion than something genuinely opposed would.
Of course, knowing that does not make me any less frustrated with the sort of soi-disant postmodernist for whom category deconstruction is just a verbal formula, rather than the end result of actual thought.
I also weakly suspect that postmodernists get a particularly bad rap simply because of the oxymoronic name.
Oh yeah. While it’s far from a worthless field, and straw postmodernists are a sign of lazy thinking, it is also the case that postmodernism contains staggering quantities of complete BS.
Thankfully, these are also susceptible to postmodernist analysis, if not by those who wish to keep their status …
Would you consider Le Guin’s The Dispossessed to be post-utopian? I think she intends her Anarres to be a good place on the whole, and a decent partial attempt at achieving a utopia, but still to have plausible problems.
Not to go off on a tangent, but I’d say it’s more utopian than critical of utopia—I don’t think we can require utopias to be perfect to deserve the name, and Anarres is pretty (perhaps unrealistically) good, with radical (though not complete) changes in human nature for the better.
Brave New World is definitely dystopian, not post-utopian. Nancy’s suggestion for post-utopian is exactly right. I definitely agree that we can meaningfully classify cultural production, though.
I think it’s both. “Brave New World” portrays a dystopia (Huxley called it a “negative utopia”) but it’s also post-utopian because it displays skepticism towards utopian ideals (Huxley wrote it in reaction to H. G. Wells’ “Men Like Gods”).
I don’t claim any expertise on this subject: in fact, I hadn’t heard of post-utopianism at all until I read the word in this article. It just seemed to me to be overstating the case to claim that a term like this is meaningless. Vague, certainly. Not very profound, yes. But meaningless, no.
The meaning is easily deducible: in the history of ideas “post-” is often used to mean “after; in consequence of; in reaction to” (and “utopian” is straightforward). I checked my understanding by searching Google Scholar and Books: there seems to be only one book on the subject (The post-utopian imagination: American culture in the long 1950s by M. Keith Booker) but from reading the preview it seems to be using the word in the way that I described above.
The fact that the literature on the subject is small makes post-utopianism an easier target for this kind of attack: few people are likely to be familiar with the idea, or motivated to defend it, and it’s harder to establish what the consensus on the subject is. By contrast, imagine trying to claim that “hard science fiction” was a meaningless term.
I played a mental game trying to make predictions based on the information that Wulky Wilkinsen is post-utopian and shows colonial alienation—I’d never heard of either term before :-).

Wulky Wilkinsen is post-utopian: I expect to find a bunch of critically acclaimed authors who wrote their most famous books before Wulky wrote his (5-15 years earlier?), lived in the same general area as Wulky, and portrayed people who were more altruistic and prone to serve the general good than we normally see in real life. It does not say too much about Wulky’s actual writing style—he could have written in a similar way to “the bunch” (the utopians), or just the opposite—he could have been fed up with the utopians’ style and portrayed people more evil than we normally see in everyday life. So my prediction does not tell what Wulky’s books feel like, but it is still a prediction, right?

Colonial alienation: the book contains characters who have lived in a colony (e.g. India) for a long time (although they might have just arrived in the “maternal” colonial country, e.g. Britain). These characters are confronted with other characters who have lived in the “maternal” colonial country for a long time (although they might have just arrived in the colony :-) ). There are conflicts between these two groups of people, based on their backgrounds: they have different preferences when making decisions, probably involving other people. Thus they are alienated.

Do not tell me this was not the point of Eliezer’s post, let me just have some fun!
How is this not just a simple argument about semantics (on which I believe a vast majority of arguments are based)?
They both accept that the tree causes vibrations in the air as it falls, and they both accept that no human ear will ever hear it. The argument appears to be based solely on the definition, and surrounding implications, of the word “sound” (or “noise”, as it becomes in the article)—and is therefore no argument at all.
I think that may have been the point:
You can define a thing based on any criteria you like. It simply has to allow your expectations to agree with reality in order for it to be true.
One says “it is sound because it vibrates regardless of whether anyone hears it.” This person believes that sound is the vibrations.
The other says “it is not sound because it is never processed in a mind.” This person does not deny that the vibrations exist, he simply believes it isn’t sound until someone hears it.
These two have different definitions of “sound”, but within their definitions both allow expectations that are completely consistent with reality. The point is to make sure your beliefs “pay rent”—that they allow you to have expectations that match up with reality. If the second person had the same belief of what sound was as the first (i.e. vibrations in the air), yet also believed that vibrations in the air do not occur when there is nobody to hear them, that belief would not pay rent. When they recorded the sound with nobody around he would expect there to be nothing at all on the tape, yet there would be something on the tape. The only way to resolve this is to adjust your belief after the fact, which means your belief couldn’t pay its rent.
See also the movie version of this post.
This video has sound problems which immediately turned me off wanting to try and parse what he’s saying. I suggest using a microphone and properly syncing the sound if they intend to do many more of these.
“Or suppose your postmodern English professor teaches you that the famous Wulky Wilkinsen is actually a “post-utopian”. What does this mean you should expect from his book? Nothing.”
When I first read this I thought, “Huh? Surely it tells you something, because I already have beliefs about what ‘utopian’ probably means, and what the ‘post’ part of it probably means, and what context these types of terms are usually used in… That sounds like a whole bag of reasons to expect certain things/themes/ideas in his book!”
But I think this missed the point Eliezer is making; a point I suggest would be more clear if he said:
“Or suppose your postmodern English professor teaches you that the famous Wulky Wilkinsen is actually a “barnbeanbaggle”. What does this mean you should expect from his book? Nothing.”
Darn right. I have no idea what a “barnbeanbaggle” is. It creates no anticipations about what I’ll find in his book; it’s free-floating.
Free-floating beliefs have to at least feel like beliefs. You can’t even think you have a belief about whether Wulky Wilkinsen is a barnbeanbaggle unless you think you have some idea of what “barnbeanbaggle” is being used to mean. The thing about using a made-up word is that it’s too easy to notice that you don’t know what to anticipate from it. The thing about “post-utopian” is that, even if you have some idea of what “post-utopian” is supposed to mean, being told (by someone you perceive as sufficiently authoritative) that a certain author is “post-utopian” is quite likely to just make you selectively interpret that author’s works to fit that schema. Similar to how you can make professional wine tasters describe a white wine the way they usually describe red wines by dying it red.
The made-up word being too easy to notice is a good point.
(1) “I believe Wulky is a post-utopian.”
(2) “The professor says Wulky is a post-utopian, and I expect to figure out what the term means and confirm or disconfirm this claim by reading his book.”
When I first read this post I thought (2), and if I understand it right, the post is attacking (1).
I may be getting too tied-up with the labels being used...
You originally misunderstood Eliezer’s point, and now understand it.
If many people will similarly misunderstand it, that is a reason for Eliezer to change it on lesswrong or if/when it appears in his book. If you are relatively unusual, it is only a weak reason.
Reasons not to change it would be a lack of viable alternatives. Can we think of an alternative better than “post-utopian” or “barnbeanbaggle”? For example, a less meaningful term from literary theory or another field?
My boyfriend just suggested “metaspontaneity” !
The Mighty Handful?
But why do beliefs need to pay rent in anticipated experiences? Why can’t they pay rent in utility?
If some average Joe believes he’s smart and beautiful, and that gives him utility, is that necessarily a bad thing? Joe approaches a girl in a bar, dips his sweaty fingers in her iced drink, cracks a piece of ice in his teeth, pulls it out of his mouth, shoves it in her face for demonstration, and says, “Now that I’ve broken the ice—”
She thinks: “What a butt-ugly idiot!” and gets the hell away from him.
Joe goes on happily believing that he’s smart and beautiful.
For myself, the answer is obvious: my beliefs are means to an end, not ends in themselves. They’re utility producers only insofar as they help me accomplish utility-producing operations. If I were to buy stock believing that its price would go up, I better hope my belief paid its rent in correct anticipation, or else it goes out the door.
But for Joe? If he has utility-pumping beliefs, then why not? It’s not like he would get any smarter or prettier by figuring out he’s been a butt-ugly idiot this whole time.
They can. They just do so very rarely, and since accepting some inaccurate beliefs makes it harder to determine which beliefs are and aren’t beneficial, in practice we get the highest utility from favoring accuracy. It’s very hard to keep the negative effects of a false belief contained; they tend to have subtle downsides. In the example you gave, Joe’s belief that he’s already smart and beautiful might be stopping him from pursuing self-improvements. But there definitely are cases where accurate beliefs are definitely detrimental; Nick Bostrom’s Information Hazards has a partial taxonomy of them.
I don’t think it’s possible for a reflectively consistent decision-maker to gain utility from self-deception, at least if you’re using an updateless decision theory. Hiding an unpleasant fact F from yourself is equivalent to deciding never to know whether F is true or false, which means fixing your belief in F at your prior probability for it. But a consistent decision-maker who loses 10 utilons from believing F with probability ~1 must lose p*10 utilons for believing F with probability p.
No, this is not true. Many of the reasons why true beliefs can be bad for you arise because information about your beliefs can leak out to other agents in ways other than through your actions, and there is no particular reason for this effect to be linear. For example, blocking communications from a potential blackmailer is good because knowing with probability 1.0 that you’re being blackmailed is more than 5 times worse than knowing with probability 0.2 that you will be blackmailed in the future if you don’t.
Oh, sure. By “gain utility” I meant “gain utility directly,” as in the average Joe story.
I don’t think it’s linear in the average Joe story, either; if there’s one threshold level of belief which changes his behavior, then utility is constant for levels of belief on either side of that threshold and discontinuous in between.
A rational agent can have its behavior depend on a threshold crossing of belief, but if there’s some belief that grants it utility in itself (e.g. Joe likes to believe he is attractive), the utility it gains from that belief has to be linear with the level of belief. Otherwise, Joe can get dutch-booked by a Monte Carlo plastic surgeon.
This doesn’t sound right. Could you describe the Dutch-booking procedure explicitly? Assume that believing P with probability p gives me utility U(p)=p^2+C.
An additive constant seems meaningless here: if Joe gets C utilons no matter what p is, then those utilons are unrelated to p or to P—Joe’s behavior should be identical if U(p)=p^2, so for simplicity I’ll ignore the C.
Now, suppose Joe currently believes he is not attractive. A surgery has a .5 chance of making him attractive and a .5 chance of doing nothing. This surgery is worth U(.5)-U(0)=.25 utilons to Joe; he’ll pay up to that amount for it.
Suppose instead the surgeon promises to try again, once, if the first surgery fails. Then Joe’s overall chance of becoming attractive is .75, so he’ll pay U(.75)-U(0)=.75^2=0.5625 for the deal.
Suppose Joe has taken the first deal, and the surgeon offers to upgrade it to the second. Joe is willing to pay up to the difference in prices for the upgrade, so he’ll pay .5625-.25=.3125 for the upgrade.
Joe buys the upgrade. The surgeon performs the first surgery. Joe wakes up and learns that the surgery failed. Joe is entitled to a second surgery, thanks to that .3125-utility purchase of the upgrade. But the second surgery is now worth only .25 utility to him! The surgeon offers to buy that second surgery back from him at a cost of .26 utility. Joe accepts. Joe has spent a net of .0525 utility on an upgrade that gave him no benefit.
As a sanity check, let’s look at how it would go if Joe’s U(p)=p. The single surgery is worth .5. The double surgery is worth .75. Joe will pay up to .25 utility for the upgrade. After the first surgery fails, the upgrade is worth .5 utility. Joe does not regret his purchase.
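For what it’s worth, the arithmetic above checks out mechanically. A sketch of both cases (the .26 buyback price is the hypothetical figure from the scenario, written here as the retry’s current value plus .01):

```python
# Net loss on the upgrade for a given utility-of-belief function U(p),
# following the surgery scenario above.
def dutch_book_loss(U):
    single = U(0.5) - U(0.0)            # value of one surgery attempt
    double = U(0.75) - U(0.0)           # value of the retry-once deal
    upgrade = double - single           # price Joe pays for the upgrade
    # First surgery fails; the remaining guaranteed retry is now worth:
    retry_after_failure = U(0.5) - U(0.0)
    # Surgeon buys the retry back for slightly more than its current value:
    buyback = retry_after_failure + 0.01
    return upgrade - buyback            # positive means Joe lost utility

quadratic_loss = dutch_book_loss(lambda p: p * p)   # .3125 - .26 = .0525
linear_loss = dutch_book_loss(lambda p: p)          # .25 - .51 < 0: no loss
print(quadratic_loss, linear_loss)
```

The quadratic (superlinear-in-evidence) believer pays for an upgrade that nets him nothing; the linear believer does not regret the purchase.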
You’re missing the fact that how much Joe values the surgery depends on whether or not he expects to be told whether it worked afterward. If Joe expects to have the surgery but to never find out whether or not it worked, then its value is U(0.5)-U(0)=0.25. On the other hand, if he expects to be told whether it worked or not, then he ends up with a belief-score or either 0 or 1, not 0.5, so its value is (0.5*U(1.0) + 0.5*U(0)) - U(0) = 0.5.
Suppose Joe is uncertain whether he’s attractive or not—he assigns it a probability of 1⁄3. Someone offers to tell him the true answer. If Joe’s utility-of-belief function is U(p)=p^2, then being told the answer is worth ((1/3)*U(1) + (2/3)*U(0)) - U(1/3) = ((1/3)*1 + (2/3)*0) - (1/9) = 2⁄9, so he takes the offer. If on the other hand his utility-of-belief function were U(p)=sqrt(p), then being told the information would be worth ((1/3)*sqrt(1) + (2/3)*sqrt(0)) - sqrt(1/3) = −0.244, so he plugs his ears.
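The two cases in that comment can be verified in a few lines. A sketch of the value-of-information computation under a direct utility-of-belief function U (with U(0)=0, U(1)=1 as in the examples):

```python
import math

def value_of_answer(U, p):
    # Value of learning the truth: after hearing the answer, belief jumps
    # to 1 (with probability p) or 0 (with probability 1-p).
    return p * U(1.0) + (1 - p) * U(0.0) - U(p)

p = 1 / 3
convex = value_of_answer(lambda x: x ** 2, p)   # 1/3 - 1/9 = 2/9: wants to know
concave = value_of_answer(math.sqrt, p)         # 1/3 - sqrt(1/3): plugs his ears
print(convex, concave)
```

The convex U values the answer positively; the concave U assigns it negative value, so that agent prefers ignorance.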
Okay, here we go. I’ve possibly reinvented the wheel here, but maybe I’ve come up with a simple, original result. That’d be cool. Or I’m interestingly wrong.
We wish to show that superlinear utility-of-belief functions, or equivalently ones that would cause an agent to prefer ignorance, lead to inconsistency.
Suppose Joe equally wants to believe each of two propositions, P and Q, to be true, with U(x) > x*U(1) for all probabilities x, and U(x) strictly increasing with x. Without loss of generality, we set U(0) to 0 and U(1) to 1. Both propositions concern events that will invisibly occur at some known future time.
Joe anticipates that he will eventually be given the following choice, which will completely determine P and Q:
Option 1: P xor Q. Joe won’t know which one is true, so he believes each of them is true with probability 1⁄2. So he has U(1/2)+U(1/2)=2*U(1/2) utility. By assumption this is greater than 1. So let 2*U(1/2) − 1 = k.
Option 2: One proposition will become definitely true. The other will become true with probability p, where p is chosen to be greater than 0 but less than U-inverse(k). Joe will know which proposition is which. Joe’s utility would be less than U(1) + U(U-inverse(k)), or less than 1 + 2*U(1/2) − 1, or less than 2*U(1/2).
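To make the two options concrete (my illustration, not part of the original argument), take the superlinear example U(x) = sqrt(x), which satisfies U(x) > x*U(1) on (0, 1) with U(0)=0 and U(1)=1:

```python
import math

# A concrete superlinear utility-of-belief function: sqrt(x) > x on (0, 1).
U = math.sqrt

def U_inv(y):
    return y ** 2

option1 = 2 * U(0.5)    # roughly 1.414, greater than 1 as the argument claims
k = option1 - 1         # roughly 0.414
p = 0.9 * U_inv(k)      # any p strictly between 0 and U-inverse(k) works
option2 = U(1.0) + U(p) # strictly less than 1 + k = option1
```

Here option2 comes out around 1.393, below option1's roughly 1.414, so this Joe does prefer Option 1.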
Joe prefers Option 1. Therefore he anticipates that he will choose Option 1. Therefore, his current utility is 2*U(1/2). But what if he anticipated that he would choose Option 2? Then his current utility would be 2*U(1/2+p/2). So he wishes his k were smaller than U-inverse(k), meaning he wishes his U(x) were closer to x*U(1). If he were to modify his utility function such that U’(x) = x*U(1) for all x, the new Joe would not regret this decision since it strictly increases his expected utility under the new function.
Thus we can say that all superlinear utility functions are inherently unstable, in that an agent with U(x) > x*U(1) for all probabilities x, and U(x) strictly increasing with x, may increase its expected U by modifying to U’(x) = x*U(1) for all x.
The strongest possible constraint we can give for inherent stability of a utility-of-belief function is that, with utility-of-belief function U, an agent can never improve its U-utility by switching to any other utility function, except under cases wherein it anticipates being modeled by an outside entity. If we removed this exception, no non-degenerate utility-of-belief function could be called stable because we could always posit an outside entity that punishes agents modeled to have specific utility functions. The linear utility-of-belief function satisfies this condition, since it behaves identically whether it is maximizing the probability of P or its U(p(P)), so it always anticipates itself maximizing its own utility function. We have just shown that no superlinear function satisfies this constraint.
But by conservation of expected evidence, no agent with a linear or sublinear utility-of-belief function can increase its expected utility-of-belief by hiding evidence from itself.
Therefore, a rational agent with a stable utility function cannot make itself happier by hiding evidence from itself, unless it is being modeled by an outside entity.
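A sketch of the conservation-of-expected-evidence point above (my own numbers): the expected posterior equals the prior, so by Jensen's inequality a linear or sublinear (in this thread's sense, U(x) <= x*U(1)) function never loses by looking at evidence, while a superlinear one prefers ignorance.

```python
# Binary evidence moves a prior of 0.5 to a posterior of 0.9 or 0.1, each
# with probability 0.5, so the expected posterior equals the prior.
prior = 0.5
posteriors = [0.9, 0.1]

def gain_from_looking(U):
    return sum(0.5 * U(q) for q in posteriors) - U(prior)

linear = gain_from_looking(lambda x: x)              # exactly 0: indifferent
sublinear = gain_from_looking(lambda x: x ** 2)      # positive: wants to look
superlinear = gain_from_looking(lambda x: x ** 0.5)  # negative: prefers ignorance
```

Only the superlinear agent can raise its expected utility-of-belief by hiding evidence from itself, which is the asymmetry the proof sketch turns on.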
Thanks for taking the time to try puzzling this out, but I suspect it’s just interestingly wrong. The magic seems to be happening in this paragraph:
I don’t see where U(1/2+p/2) comes from; should that be U(1)+U(p)? I’m also not sure it’s possible for the agent to anticipate choosing option 2, given the information it has. Finally, what does it matter whether a change increases expected utility under the new function? It’s only utility under the old function that matters—changing utility function to almost anything maximizes the new function, including degenerate utility functions like number of paperclips.
Joe doesn’t know yet which proposition would get 1 and which would get p, so he assigns the average to both. He anticipates learning which is which, at which point it would change to 1 and p.
Not sure what you mean here.
It just shows the asymmetry. Joe can maximize U by changing into Joe-with-U’, but Joe-with-U’ can’t maximize U’ by changing back to U.
That’s interesting. The one problem that I have is it’s rather unclear when a belief is evaluated for the purposes of utility. Which is to say, does Joe care about his belief at time t=now, or t=now+delta, or over all time? It seems obvious that most utility functions that care only about the present moment would have to be dynamically inconsistent, whether or not they mention belief.
Thanks, that’s a good point. In fact, it’s possible we can reduce the whole thing to the observation that it matters when the utility-of-belief function is evaluated if and only if it’s nonlinear.
Apologies; I realize this is both not very clearly written, and full of holes when considered as a formal proof. I have a decent excuse in that I had to rush out the door to go to the HPMOR meetup right after writing it. Rereading it now, it still looks like a sketch of a compelling proof, so if neither jimrandomh nor any lurkers see any obvious problems, I’ll write it up as a longer paper, with more rigorous math and better explanations.
Did you ever end up writing it up? I think I’d follow more easily if you went a little slower and gave some concrete examples.
Good point.
I agree here.
But I still suspect that if your U(p) is anything other than linear on p, you can get Dutch-booked. I’ll try to come back with a proof, or at least an argument.
It’s sort of taken for granted here that it is in general better to have correct beliefs (though there have been some discussions as to why this is the case). It may be that there are specific (perhaps contrived) situations where this is not the case, but in general, so far as we can tell, having the map that matches the territory is a big win in the utility department.
In Joe’s case, it may be that he is happier thinking he’s beautiful than he is thinking he is ugly. And it may be that, for you, correct beliefs are not themselves terminal values (ends in themselves). But in both cases, having correct beliefs can still produce utility. Joe for example might make a better effort to improve his appearance, might be more likely to approach girls who are in his league and at his intellectual level, thereby actually finding some sort of romantic fulfillment instead of just scaring away disinterested ladies. He might also not put all his eggs in the “underwear model” and “astrophysicist” baskets career-wise. You can further twist the example to remove these advantages, but then we’re just getting further and further from reality.
Overall, the consensus seems to be that wrong beliefs can often be locally optimal (meaning that giving them up might result in a temporary utility loss, or that you can lose utility by not shifting them far enough towards truth), but a maximally rational outlook will pay off in the long run.
I think you’ve hit on one of the conceptual weaknesses of many Rationalists. Beliefs can pay rent in many ways, but Rationalists tend to value only the predictive utility of beliefs, and pooh-pooh the other utilities of belief. Comfort utility—it makes me feel good to believe it. Social utility—people will like me for believing it. Efficacy utility—I can be more effective if I believe it.
Predictive truth is a means to value, and even if it is a value in itself, it’s surely not the only value. Instead of pooh-poohing other types of utility, to convince people you need to use that predictive utility to analyze how the other utilities can best be fulfilled.
The trouble is that this rationale leads directly to wireheading at the first chance you get—choosing to become a brain in a vat with your reward centers constantly stimulated. Many people don’t want that, so those people should make their beliefs only a means to an end.
However, there are some people who would be fine with wireheading themselves, and those people will be totally unswayed by this sort of argument. If Joe is one of them… yeah, sure, a sufficiently pleasant belief is better than facing reality. In this particular case, I might still recommend that Joe face the facts, since admitting that you have a problem is the first step. If he shapes up enough, he might even get married and live happily ever after.
I am going to try and sidetrack this a little bit.
Motivational speeches, pre-game speeches: these are real activities that serve to “get the blood flowing” as it were. Pumping up enthusiasm, confidence, courage and determination. These speeches are full of cheering lines, applause lights etc., but this doesn’t detract from their efficacy or utility. Bad morale is extremely detrimental to success.
I think that “Joe has utility-pumping beliefs,” in the sense that he actually believes the false fact that he is smart and beautiful, is the wrong way to think about this subject.
Joe can go in front of a mirror and proceed to tell/chant to himself 3-4 times: “I am smart! I am beautiful! Mom always said so!” Is he not, in fact, simply pumping himself up? Does it matter that he isn’t using any coherent or quantitative evaluation methods with respect to the terms “smart” or “beautiful”? Is he not simply trying to improve his own morale?
I think the right way to describe this situation is actually: “Joe delivers self-motivational mantras/speeches to himself” and believes that this is beneficial. This belief does pay in anticipated experiences. He does feel more confident afterwards, and it does make him more effective in presenting himself and his ideas in front of others. It’s a real effect, and it has little to do with a false belief that he is actually “smart and beautiful”.
Well, he might. Or, rather, there might be available ways of becoming smarter or prettier for which jettisoning his false beliefs is a necessary precondition.
But, admittedly, he might not.
Anyway, sure, if Joe “terminally” values his beliefs about the world, then he gets just as much utility out of operating within a VR simulation of his beliefs as out of operating in the world. Or more, if his beliefs turn out to be inconsistent with the world.
That said, I don’t actually know anyone for whom this is true.
I don’t know too many theist janitors, either. Doesn’t mean they don’t exist.
From my perspective, it sucks to be them. But once you’re them, all you can do is minimize your misery by finding some local utility maximum and staying there.
In this example, Joe’s belief that he’s smart and beautiful does pay rent in anticipated experience. He anticipates a favorable reaction if he approaches a girl with his gimmick and pickup line. As it happens, his inaccurate beliefs are paying rent in inaccurate anticipated experiences, and he goes wrong epistemically by not noticing that his actual experience differs from his anticipated experience, which should prompt him to update his beliefs accordingly.
The virtue of making beliefs pay rent in anticipated experience protects you from forming incoherent beliefs, maps not corresponding to any territory. Joe’s beliefs are coherent, correspond to a part of the territory, and are persistently wrong.
If my tenants paid rent with a piece of paper that said “moneeez” on it, I wouldn’t call it paying rent.
In your view, don’t all beliefs pay rent in some anticipated experience, no matter how bad that rent is?
Or they pay you with forged bills. You think you’ll be able to deposit them at the bank and spend them to buy stuff, but what actually happens is the bank freezes your account and the teller at the store calls the police on you.
No, for an example of beliefs that don’t pay rent in any anticipated experience, see the first 3 paragraphs of this article:
Two people have semantically different beliefs.
Both beliefs lead them to anticipate the same experience.
EDIT: In other words, two people might think they have different beliefs, but when it comes to anticipated experiences, they have similar enough beliefs about the properties of sound waves and the properties of falling trees and recorders and etc etc that they anticipate the same experience.
Taboo “semantically”.
See also the example of The Dragon in the Garage, as discussed in the followup article.
Taboo’ed. See edit.
Although I have a bone to pick with the whole “belief in belief” business, right now I’ll concede that people actually do carry beliefs around that don’t lead to anticipated experiences. Wulky Wilkinsen being a “post-utopian” (as interpreted from my current state of knowing 0 about Wulky Wilkinsen and post-utopians) is a belief that doesn’t pay any rent at all, not even a paper that says “moneeez.”
Is there a difference between utility and anticipated experiences? I can see a case that utility is probability of anticipated, desired experiences, but for purposes of this argument, I don’t think that makes for an important difference.
“Smart and beautiful” Joe is being Pascal’s-mugged by his own beliefs. His anticipated experiences lead to exorbitantly high utility. When failure costs (relatively) little, it subtracts little utility by comparison.
I suppose you could use the same argument for the lottery-playing Joe. And you would realize that people like Joe, on average, are worse off. You wouldn’t want to be Joe. But once you are Joe, his irrationality looks different from the inside.
This post probably changed the way I regulate my own thoughts more than any other. How many arguments I have heard never would have happened if everyone involved read this...
Based on this, I would very much like to make a variant of Monopoly, with beliefs/theories in place of properties, and evidence for money. Invest a large chunk to establish a belief, with its rent determined by sophistication and usefulness of prediction, ranging from Aristotelian physics to relativity, spermatists & ovists to Darwinian evolution, and so on. Other players would have to give you some credit when they land on your theories, and admit that they give results.
This would also be a great way to teach some history of science, if well designed.
Of course, the analogy becomes interesting when you consider what corresponds to the cutthroat capitalism...
I don’t understand how the examples given illustrate free-floating beliefs: they seem to have at least some predictive power, and thus shape anticipation (some comments by others below illustrate this better).
The phlogiston theory had predictive power (e.g. what kind of “air” could be expected to support combustion, and that substances would grow lighter when they burned), and it was falsifiable (and was eventually falsified). It had advantages over the theories it replaced and was replaced by another theory which represented a better understanding. (I base this reading on Jim Loy’s page on Phlogiston Theory.)
Literary genres don’t have much predictive power if you don’t know anything about them—if you do, then they do. Classifying a writer as producing “science fiction” or “fantasy” creates anticipations that are statistically meaningful. For another comparison, saying some band plays “Death Metal” will shape our anticipation; somewhat differently for those who can distinguish Death Metal from Speed Metal as compared to those who merely know that “Metal” means “noise”.
I can imagine beliefs leading to false anticipations, and they’re obviously inferior to beliefs leading to more correct ones. That doesn’t mean they’re free-floating.
One example for the free-floating belief is actually about the tree falling in the forest: to believe that it makes a sound does not anticipate any sensory experience, since the tree falls explicitly where nobody is around to hear it, and whether there is sound or no sound will not change how the forest looks when we enter it later. However, to let go of the belief that the tree makes a sound does not seem to me to be very useful. What am I missing?
I understand that many beliefs are held not because they have predictive power, but because they generalize experiences (or thoughts) we have had into a condensed form: a sort of “packing algorithm” for the mind when we detect something common. When we understand this commonality well enough, we reach the point where we can make predictions; if we don’t yet, we can’t, but we may be able to later. There is no belief or thought we can hold that we couldn’t trace back to experiences; beliefs are not anticipatory, but formed from hindsight. They organize past experience. Can you predict which of these beliefs is not going to be helpful in organizing future experiences? How?
I think that this is really a discussion of explanatory power, of which scientific causation is one example. All theories attempt to explain a set of examples. Scientific theories attempt to explain causation in natural phenomena, thus their “explanatory power” is proportional to their predictive power. A unified theory of forces at the planetary and subatomic levels would explain more examples than any do now, thus it would have great explanatory power.
Yet causation isn’t the only type of explanatory relationship. Causation implies time and events, whereas these are only one type of explanation. For example, the Pythagorean theorem explains why physical right triangles in reality have the lengths that they do. It doesn’t “cause” them to have the properties they do. It would be foolish to say that any property of physical triangles “explains” or “proves” the Pythagorean theorem, because mathematical truths exist independent of practicalities. Plato’s dialogue The Euthyphro beautifully explains why even if the set of things which are x and the set of things which are y are equivalent (in that case, the set of pious actions and the set of god-loved actions), they are not the same quality if one (god-loved) explains the other (piety) and not vice versa. Similarly, the total number of hydrogen atoms in a glass of water is always even, but it is the quality of evenness (any number which is a multiple of two must be even) that explains this, not any quality of hydrogen. The one “explains” (but does not “cause”) the other.
Thus, I think some parts of this post would be better understood if stated thus: any theory which provides no additional explanatory power should be ignored.
So, looking at the case of Phlogiston, the OP is not saying it is “wrong,” but that it lacks the explanatory power that justifies it as a useful theory. If I take the Niels Bohr model of the atom, and say that there are extra invisible subatomic particles, and that these particles are “god,” you would be hard pressed to prove me wrong. But this theory does not predict any new phenomena, nor is it falsifiable, nor, most importantly, does it have an explanatory relationship with any other known truth about atoms: none of them explain this theory, and it explains none of them. It exists completely independent from any other aspect of atomic theory, thus it lacks any explanatory power as a theory.
Yet there are theories which have great explanatory power but not empirical predictive power. Lets say I’m a simplistic deontologist who says that killing is wrong because human life is good. Along comes a utilitarian who says, I have a theory which explains, in all the cases where you’re right, why you are right, and in those cases where you aren’t, why you aren’t, according to your own first principle. In terms of my very simplistic ethical theory, the utilitarian would absolutely be “less wrong” than me, for he has provided a theory which better explains the hard cases my theory failed to (justified killings, kill 1 save 2 etc.)
In the case of the post-utopian author, I think that we again are getting wrapped up in “prediction” when we should concern ourselves with explanation.
What is a plumber? Is it a man who comes to your house, sits on your couch, eats your food, watches your TV, and flirts with your wife? Even if this is true of all plumbers, it is not the definition of plumber. Definitions should be prescriptive, such that they give you the means to determine what counts as an x, and what a good x is. If a plumber fixes pipes, anyone who fixes pipes is a plumber, a good plumber fixes them well, and no one who doesn’t fix pipes is a plumber.
Thus, hold literary labels to the same standard. Don’t ask, “is this label true?” Because as we saw earlier with the god particle example, many theories cannot be proven false but still have greater or lesser explanatory power (see economics, ethical theories, etc.). The better standard is explanatory power. Is there a definition of the quality “post-utopian” such that any book with quality x is post-utopian, x explains why it counts as post-utopian, and the more x it is, the more post-utopian it is? Saying post-utopian is a,b,c,d,e,f,g,h, but failing to provide a single explanation of the aforementioned form, is like calling the plumber a man who eats your food and flirts with your wife: it is a descriptive definition, not a prescriptive definition. It may be true of every plumber, but it is not the thing that makes plumbers count as plumbers.
I think the OP meant to say that literary labels like post-utopianism fail to meet this standard. Sure, you can come up with descriptive statements of the terms which may be true (post-utopian books do not portray utopian societies as possible) but this is not a definition because it is not this quality that a. makes post-utopian books count as post-utopian, b. without which a book cannot be post-utopian, and c. designates a clear set of books which either are, or are not, post-utopian. Textual analysis perhaps can be more wrong and “less wrong,” but literary theories are just not the sorts of truth-bearing statements that mathematical, scientific, or philosophical theories are.
Compare “post-utopian” to “even”. Even numbers are a set of specific numbers, but there is a single quality they have (being multiples of 2) which explains why they are in the set. Without that quality, they would, “by definition”, not be even. This is the standard we should be looking for in definitions and theories. Not just that they are “true” (plumbers do steal your food, watch your tv, and flirt with your wife) but that they have the sort of explanatory power we’ve isolated.
Thus, I think the larger point of the post stands. There are better theories and worse theories, and we should prefer the better ones.
Aaaaaaaaugh.
I’m not trying to define the terms, just posit a very very simple theory of the form killing is wrong because human life is good. Such a theory would be inferior on its own premises to a very very simple utilitarianism, regardless of whether either theory or the premise itself is true. As such I oversimplified utilitarianism just as much, but it doesn’t matter for the scope of the example.
Edit: in fact, for the purposes of the example it is better if the “deontologist” is wrong about deontology, because it better illustrates how one theory can have greater explanatory power than another only on the grounds of the former’s justification without reference to external verifiability. “human life is good” is a poor first principle, but if it is true, the utilitarian’s principle applies it better than the “deontologist’s” did.
Someone who believes that killing is wrong because human life is good is not a deontologist. See here.
Here the deontologist is arguing for the principle ‘killing is wrong regardless of the consequences’ (deontic) but uses a poor justification for which consequentialism is a more reasonable conclusion. So the ‘deontologist’ is wrong even though his principle cannot be externally verified. I was just (unclearly, I see) using this strawman to illustrate how theories could be better and worse at explaining what they attempt to explain without being the sorts of things which can be proven. I will attempt to be clearer in future.
Wonderful exposition of versificationism (I meant verificationism lol, but I won’t change it cause I like the reply below). I do have a question though. You said:
Well yes, we don’t directly observe atoms (actually we do now, but we didn’t have to). But it is still safe to say that if a belief doesn’t make predictions about future sensory experiences it is meaningless, or at least unverifiable. Those predictions may be about the shape of ink squiggles on a piece of paper after some rules are applied, or they may be a prediction about the pattern that a monitor’s many pixels will form after reacting to some instrument in an experiment. In either case, the hypothesis is always linked to the world by the senses, or are you claiming something different?
Versificationism is presumably the doctrine that the truth of a proposition should be evaluated on the basis of how easily it can be expressed in poetic form. Empirically, this seems to favour any number of probably-untrue beliefs, so I’m inclined to reject it. :-)
I have in fact seen something a little like this, in a more sophisticated form, maintained seriously. For instance, here’s Dorothy L Sayers (the context is her series of radio plays “The man born to be king”). “From the purely dramatic point of view the theology is enormously advantageous, because it locks the whole structure into a massive intellectual coherence. It is scarcely possible to build up anything lop-sided, trivial or unsound on that steely and gigantic framework. [...] there is no more searching test of a theology than to submit it to dramatic handling; nothing so glaringly exposes inconsistencies in a character, a story, or a philosophy as to put it upon the stage and allow it to speak for itself. [...] As I once made a character say in another context: ‘Right in art is right in practice’; and I can only affirm that at no point have I yet found artistic truth and theological truth at variance.”
And, though I disagree with her entirely on the truth of the sort of theology she’s writing about, I think she does actually have a point of sorts. But a professional writer of fiction like Sayers really ought to have known better than to suggest that truth can be distinguished from untruth by seeing how easily each can be formed into art.
A related epistemology that is popular in the business world is PowerPointificationism, which holds that the truth of a proposition should be evaluated by how easily it can be expressed in PowerPoint. Due to the nature of PowerPoint as a means of expression, this epistemology often produces results similar to those of Occam’s sand-blaster, which holds that the simplest explanation is the correct one (note that unlike Occam’s razor, Occam’s sand-blaster does not require that the explanation be consistent with observation).
...and I just spit coffee on my keyboard.
That’s marvelous… is that original with you?
I take it you’re familiar with Edward Tufte’s “The Cognitive Style of PowerPoint”?
Good article. Some thoughts:
I probably constrain my experiences in lots of ways that I don’t even know about, but I don’t think there’s always a way to know whether a belief will constrain your experiences, even if it is based on empirical (or even scientific) observation. Isaac Newton’s beliefs constrained all of our beliefs for centuries. Scholars were so unwilling to question classical mechanics that they came up with this “ether” stuff that could never be observed directly, and thus didn’t further constrain their experience, but had the nice side effect of resolving inconsistencies in their previously held theories. However, even though Einstein’s theory was more correct than Newton’s, without Newton’s theory mechanical engineering wouldn’t exist, and without Einstein’s, the Bomb wouldn’t exist. I mean this is obviously a gross oversimplification of the development of the Bomb, but I’m just saying there’s not much use for relativity outside of a classroom/particle accelerator.
Global Positioning System
I understand that having beliefs that are falsifiable in principle and make predictions about experience is incredibly important. But I have always wondered if my belief in falsifiability was itself falsifiable. In any possible universe I can imagine it seems that holding the principle of falsifiability for our beliefs would be a good idea. I can’t imagine a universe or an experience that would make me give this up.
How can I believe in the principle of falsifiability that is itself unfalsifiable?! I feel as though something has gone wrong in my thinking but I can’t tell what. Please help!
Excellent question!
Excellent, because it illustrates the problem with “believing in” the principle of falsifiability, as opposed to using it and understanding how it relates to the rest of my thinking.
Forget that the principle of falsifiability is itself incredibly important. What sorts of beliefs does the principle of falsifiability tell me to increase my confidence in? To decrease my confidence in?
What would the world have to be like for the former beliefs to be in general less likely than the latter?
Thanks for the reply Dave. Are you saying I should not look at falsifiability as a belief, but rather a tool of some sort? That distinction sounds interesting but is not 100% clear to me. Perhaps someone should do a larger post about why the principle should not be applied to itself.
I have also thought of putting the problem this way: Eliezer states that the only ideas worth having are the ones we would be willing to give up. Is he willing to give up that idea? I don’t think so..., and I would be really interested to know why he doesn’t believe this to be a contradiction.
What I’m saying is that the important thing is what I can do with my beliefs. If the “principle of falsifiability” does some valuable thing X, then in worlds where the PoF doesn’t do X, I should be willing to discard it. If the PoF doesn’t do any valuable thing X, then I should be willing to discard it in this world.
It seems we have empirical and non-empirical beliefs that can both be rational, but what we mean by “rational” has a different sense in each case. We call empirical beliefs “rational” when we have good evidence for them, we call non-empirical beliefs like the PoF “rational” when we find that they have a high utility value, meaning there is a lot we can do with the principle (it excludes maps that can’t conform to any territory).
To answer my original question, it seems a consequence of this is that the PoF doesn’t apply to itself, as it is a principle that is meant for empirical beliefs only. Because the PoF is a different kind of belief from an empirical belief, it need not be falsifiable, only more useful than our current alternatives. What do you think about that?
I think it depends on what the PoF actually is.
If it can be restated as “I will on average be more effective at achieving my goals if I only adopt falsifiable beliefs,” for example, then it is equivalent to an empirical belief (and is, incidentally, falsifiable).
If it can be restated as “I should only adopt falsifiable beliefs, whether doing so gets me anything I want or not,” then there exists no empirical belief to which it is equivalent (and it is, incidentally, worth discarding).
You have just refuted the contention that all warranted beliefs must be falsifiable in principle. Karl Popper, who introduced the falsifiability criterion and pushed it as far as, if not further than, it can go, never advocated that all beliefs should be falsifiable. Rather, he used falsifiability as the criterion of demarcation between science and non-science, while denying that all beliefs should be scientific. His contention that falsifiability demarcates science does imply, as he recognized, that the criterion of falsifiability is not itself a scientific hypothesis.
Rational beliefs are not necessarily scientific beliefs. Mathematics is rational without being falsifiable. The same is true of philosophical beliefs, such as the belief that scientific beliefs are falsifiable. But rational beliefs that are not scientific must be refutable, and falsifiable beliefs are a proper subset of refutable beliefs. Falsifiable beliefs are refutable in one particular way: they are refutable by observation statements, which I think are equivalent to EY’s anticipations. Science is special because it is 1) empirical (unlike mathematics) and 2) has an unusual capacity to grow human knowledge systematically (unlike philosophy). But that does not imply that we can make do with scientific beliefs exclusively, one reason being the one that you mention about criteria for the acceptance of scientific theories.
The broader criterion of refutability doesn’t necessarily involve refutation by observation statements. How would you refute the falsifiability criterion? It would be false if it were the case that scientists secured the advance of science by using some other criterion (such as verification).
It’s a mistake to conflate the questions of whether a theory is scientific and whether it’s corroborated (by attempted falsifications). Or to conflate whether it’s scientific with whether it’s rationally believable. Theories aren’t bad because they aren’t science. They’re bad because they’re set up so they resist any form of refutation. Rational thought involves making your thinking vulnerable to potential refutation, rather than protecting it from any refutation. In science, the mode of refutation is observation, direct connection to sensory data. But it won’t do (as you’ve realized by trying to apply falsifiability to itself) to limit one’s thinking entirely to that which is falsifiable.
You later ask (in effect) whether the refutability criterion is itself even refutable. Would EY be willing, ever, to give it up? He should be, were someone to show that sheer dogmatism conduces to the growth of knowledge. That I can’t conceive of a plausible argument to that end doesn’t obviate the refutability of the contention.
I think that resolves your confusion, but I don’t want to imply that Popper uttered the last word—there are problems with neglecting verification in favor of strict falsificationism.
Thank you for your thoughts.
What are the criteria that we use for accepting or refuting rational non-empirical beliefs? You mention that falsifiability would be refuted if some other criteria “secured the advance of science.” You also mention that we should give up the refutability criterion if “sheer dogmatism conduces to the growth of knowledge.” It sounds like our criteria for the refutability of non-empirical beliefs are mostly practical; we accept the epistemic assumptions that make things “work best.” Is there more to it than this?
To be pedantic and Popperian, I’d have to correct your use of “empirical beliefs.” The philosophical positions at issue aren’t scientific, but they are empirical. For a belief to be “empirical” in the strict sense (to serve as the basis for scientific observation statements), it must be expressible in low-level observation sentences that all competent scientists agree on.
The belief in question is that science’s crucial distinguishing feature, the one allowing it to advance, is the subjection of its claims to empirical testing, allowing strict falsification. We can’t run an experiment or otherwise record observation statements, so we resort to philosophical debate aimed at refutation. Refutation is obtained by plausible argument. For instance, in the discussion about demarcation, a potentially plausible argument goes: if we relied on falsification exclusively, we would never have evidence that a claim is true, only that it isn’t false. But we rely on scientific theories and consider them close to the truth (or at least probably so). Therefore, falsifiability can’t explain the distinctiveness of science.
This involves highly plausible claims, based on observation, about how we in fact use scientific theories. But although the result of observation, it can’t be reduced to something everyone agrees on that is closely tied to direct perception, as with an observation statement.
For me the principle of falsifiability is best understood as a way of distinguishing scientific theories about the world from other theories about the world. In other words, falsifiability is one way of defining what science is and is not. A theory that does not constrain experience (“God works in mysterious ways”) is not a scientific theory because it can explain any occurrence and is therefore not falsifiable.
Because falsifiability is a definition, not a theory about the world, there’s no reason to think it can be falsified. The definition could be wrong by failing to accurately or usefully define scientific theory, but that’s conceptually different.
Falsifiability is a very bad way to define science (or scientific theories). If falsifiability were all it took for a theory to be scientific, then all theories known to be false would be scientific (after all, if something is known to be false, it must be falsifiable). Do we really want a definition of science that says astrology is science because it’s false?
*shrug*
I don’t think the current line of enquiry is particularly useful.
“Astrology works” is a scientific theory to the degree that it is, in fact, acceptable science to do an experiment to see whether or not astrology has predictive power. It’s rhetorically inaccurate to say that means “astrology is science” though, because of course the practice of astrology is not. But sure, it’s probably a good idea to include other conditions. Excessively unlikely (or non-reductionist?) hypotheses could be classified as non-scientific, for the simple reason that even considering them in the first place would be a case of privileging the hypothesis.
None of this contradicts falsifiability being “a way of distinguishing scientific theories about the world from other theories about the world”, if we have other ways of distinguishing scientific from non-scientific, such as “reductionism”.
Astrology does seem to consist of scientific hypotheses.
I chose astrology because it has a reverse halo effect around here (and so would serve me rhetorically). Feel free to replace it with any other known to be false set of propositions.
I agree that falsifiability is not a complete definition. My point was only that falsifiability is not applicable to the principle of falsifiability, any more than it applies to mathematics.
That said, Newton’s physics and geocentric theories are false. Are they not science simply for that reason?
Yes. Falsifiability is a poor definition of science and is self-undermining in the sense that it can’t pass its own test.
Of course not. I’m not claiming a scientific theory must be true. I’m claiming that known falseness (which implies falsifiability) is not a sufficient condition for being scientific.
That statement does not itself constrain experience. That’s not a useful critique of the statement.
Known falseness is not really the same thing as falsifiability. Known falseness is useless in deciding whether a theory is scientific. Both the Greek pantheon and geocentric theories are known to be false.
Falsifiability is simply the requirement that a scientific theory list things that cannot happen under that theory. Falsifiability says scientific theories don’t look for evidence in support; they look for evidence to test the theory.
The fact that no false statements have appeared doesn’t mean the theory isn’t falsifiable, and the fact that every statement of a theory has so far been true doesn’t mean the theory is falsifiable.
That doesn’t seem true. The statement seems to perfectly constrain experience: you will not experience situations where theories which do not constrain experience will still be falsified.
And indeed, watching the world go by over the years, I see theories like ‘Christianity’ or ‘psychoanalysis’ which do not constrain experience at all have yet to be falsified—exactly as predicted.
Fine, you want to be contrary. What experience would falsify the partial definition of scientific theory that I have labelled “the principle of falsifiability”? If no such experience exists, does this call into doubt the usefulness of the principle?
Are you even trying here? Here’s what would falsify falsifiability: observing superior predictions being made by unfalsifiable theories, theories which have no reason to work but which do. Imagine a Christianity which came with texts loaded with prophetic symbolism which could be interpreted any way and is unfalsifiable, but which nevertheless keep turning out literally true (writes my hypothetical self, as he is tormented by Satanic wasps with the faces of humans prior to the sea turning into blood or something like that). In such a universe, falsifiability would be pretty useless.
Isn’t that essentially the best case for things like Nostradamus? Even assuming that the prophecies are accurate, they aren’t useful because they are so vague. The moment that the predictions are specific enough to be useful, they could be falsified.
What use is it to call that science? How could it possibly produce superior predictions in a world in which science works at all?
Yes, that is rather the question you should be answering if you want to criticize the desirability of falsifiability as being unfalsifiable itself...
I don’t understand where we disagree, so let me clarify my position: A prophecy that is so vague that it can’t be disproved is so vague that it doesn’t tell you what will happen ahead of time. Calling that a prediction abuses the term to the point of incoherency.
Yes, that’s almost entirely a definitional point. Definitions aren’t necessarily empirical statements. They are either useful or not useful in thinking carefully. Thus, the fact that they cannot be falsified is not a relevant thing to say, in the same way that it isn’t useful to object that the Pythagorean theorem can’t be falsified.
If you intend to invoke some other critique of Popper and his use of falsifiability to distinguish science from non-science, please be more explicit, because I don’t understand your argument.
Nothing in this reply contradicts anything I have asserted. I was merely claiming that if falsifiability is a sufficient condition for a hypothesis to be “scientific”, then all theories known to be false are scientific (because if we know they are false, then they must be falsifiable). I’m not being contrarian; I’m pointing out a deductive consequence of the very definition of falsifiability that you linked to. Hopefully this closes the inferential distance:
If a hypothesis is falsifiable, then it is scientific.
If a hypothesis is known to be false, then it is falsifiable.
Therefore, if a hypothesis is known to be false then it is scientific.
I am merely denying the first premise via reductio ad absurdum, because the conclusion is obviously false (and the second premise isn’t). If you took my claim to be something other than this, then you have simply misread me.
That’s much clearer. I didn’t intend to assert that falsifiability was a sufficient condition for a theory being scientific, only that it is a necessary condition. That’s what I mean by saying it was a partial definition.
Thus, I don’t intend to assert the first sentence of your syllogism. Instead, I would say, “If a hypothesis is not falsifiable, then it is not scientific.” Adding the second statement yields: “If a hypothesis is known to be false, then it might be scientific.” That’s a true statement, but I don’t claim it is very insightful.
I have read this post before and have agreed to it. But I read it again just now and have new doubts.
I still agree that beliefs should pay rent in anticipated experiences. But I am not sure any more that the examples stated here demonstrate it.
Consider the example of the tree falling in a forest. Both sides of the argument do have anticipated experiences connected to their beliefs. For the first person, the test of whether a tree makes a sound or not is to place an air vibration detector in the vicinity of the tree and check it later. If it did detect some vibration, the answer is yes. For the second person, the test is to monitor every person living on earth and see if their brains did the kind of auditory processing that the falling tree would make them do. Since the first person’s test has turned out to be positive and the second person’s test has turned out to be negative, they say “yes” and “no” respectively as answers to the question, “Did the tree make any sound?”
So the problem here doesn’t seem to be an absence of rent in anticipated experiences. There is some problem, true, because there is no single anticipated experience where the two people anticipate opposite outcomes even though one says that the tree makes a sound and the other one says it doesn’t. But it seems like that’s because of a different reason.
Say person A has a set of observations X, Y, and Z that he thinks are crucial for deciding whether the tree made any sound or not. For example, if X is positive, he concludes that the tree did make a sound, otherwise it didn’t; if Y is negative, he concludes it did not make a sound; and so on. Here, X could be “caused air vibration,” for example. For all other kinds of observations, A has a don’t-care protocol, i.e., the other observations do not say anything about the sound. Similarly, person B has a set X’, Y’, Z’ of crucial observations, and other observations lie in the set of don’t-cares. The problem here is just that X, Y, Z are completely disjoint from X’, Y’, Z’. Thus, even though A and B differ in their opinions about whether the tree made a sound, there is no single aspect where they would anticipate completely opposite experiences.
Suppose someone, on inspecting his own beliefs to date, discovers a certain sense of underlying structure; for instance, one may observe a recurring theme of evolutionary logic. Then while deciding on a new set of beliefs, would it not be considered reasonable for him to anticipate and test for similar structure, just as he would use other ‘external’ evidence? Here, we are not dealing with direct experience so much as the mere belief of an experience of coherence within one’s thoughts, which may be an illusion, for all we know. But then again, assuming that the existing thoughts came from previous ‘external’ evidence, could one say that the anticipated structure is indeed well-rooted in experience already?
I was reading those ‘what good is math?’ and ‘what good is music?’ comments. You can determine whether any ‘system’ is good or bad based on the understanding or misunderstanding of the variables involved.
I.e., one has no use for math if one does not understand any of the vast variables associated with the concepts of math. Math cannot be any good to a person who doesn’t understand it.
This principle applies to any ‘system’ whether it be math, music, love, life… etc.
This might be challenging because our beliefs tend to shape the world we live in thus masking their error. Does anyone have any practical tips for discovering erroneous beliefs?
The post you replied to is helpful advice for doing just that.
When what you specifically anticipate doesn’t line up with what happens, that’s discovering a possible erroneous belief.
Making predictions about the world based on your beliefs and seeing whether those predictions hold true.
If a belief encapsulates a value, if it’s about how you want the world to be, why shouldn’t it shape the world, and why should you evict it?
What about things I remember from long ago, which no one else remembers and for which I can find no present evidence or record of besides those memories themselves?
What if I had the belief that a certain coin was unfair, with a 51% chance of heads and only 49% chance of tails? Certainly I could observe an absurd amount of coin flips, and each bunch of them could nudge my belief—but short of an infinite number of flips, none would “definitely” falsify it. Certainly in this case, I could come to believe with an arbitrary level of certainty in the falsehood of the belief. But I don’t believe that would apply in general—what if to reach any arbitrary level of testing a belief, I’d need to think up and apply an indefinite number of unique tests? For example, a belief concerning the state of mind of another person—I can’t think of a definite test, nor can I repeat any test indefinitely to increase certainty.
On a related note, why abandon Bayes in this case for Popper, without any disclaimer? Eg falsificationism is useful because it fights magic explanations and positive bias, but it is still a predictive belief if observation causes you to slightly shift your probability for that belief.
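To make the coin example concrete under the Bayesian view just mentioned: each batch of flips shifts the posterior between “fair” and “biased 51/49” without any finite sample strictly falsifying either. A minimal sketch (the two hypotheses and the even prior are taken from the example above; the function name is my own):

```python
import math

def posterior_biased(heads, tails, p_biased=0.51, prior=0.5):
    """Posterior probability that the coin is biased (P(heads) = p_biased)
    rather than fair, after observing the given flip counts."""
    ll_biased = heads * math.log(p_biased) + tails * math.log(1 - p_biased)
    ll_fair = (heads + tails) * math.log(0.5)
    odds = (prior / (1 - prior)) * math.exp(ll_biased - ll_fair)
    return odds / (1 + odds)

# Evidence nudges the belief without ever "definitely" falsifying it:
print(posterior_biased(51, 49))      # slightly above the 0.5 prior
print(posterior_biased(5100, 4900))  # much stronger, still not certainty
```

Note that the posterior never reaches 0 or 1 for any finite number of flips, which is the point: the belief is predictive and testable without being strictly falsifiable.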
What caused you to believe a 51 % chance of heads versus 49 % chance of tails?
Another example of these types of questions: “If a man who cannot count finds a four-leaf clover, is he lucky?” (Stanisław Jerzy Lec)
Suppose you, an invisible man, overheard 1,000,000 distinct individual humans proclaim “I believe that Velma Valedo and Wulky Wilkinsen are post-utopians based on several thorough readings of their complete bibliographies!”
Must there be some correspondence (probably an extremely complex connection) between the writings, and, quite possibly, between some of the 1,000,000 brains that believe this? The subjectively defined “post-utopian” does not hold much evidential weight when simply mentioned by one informed English professor, but when the attribute “post-utopian” is used to describe two distinct authors by many blind and informed subjects, does this (even a little bit) allow us to anticipate any similarities between (some of) the subjects’ brains or between (some of) the authors’ writings?
What evidence is there for floating beliefs being uniquely human? As far as I know, neuroscience hasn’t advanced far enough to be able to tell whether other species have floating beliefs or not.
Edit: Then again, the question of whether floating beliefs are uniquely human is practically a floating belief itself.
Interesting post. However, I do not completely agree with the conclusions at the end.
I am a mathematics student, which puts me in an environment of researchers in this area. I can see that these people’s work is based on beliefs that ‘do not exist’; I mean, they work on abstract ideas that generally exist only in their minds. And now I wonder, do their efforts ‘not pay rent’? They live off structures and objects that, in most cases, cannot be found in ‘real life’, and so, according to the article’s conclusion, this would not be worth thinking about, since it does not flow from a question of anticipation (what were we anticipating if it does not exist?).
Maybe I’m misunderstanding the post, or maybe it is just focused on other life experiences.
You’re definitely right that there are some areas where it’s easier to make beliefs pay rent than others! I think there are two replies to your concern:
1) First, many theories from math DO pay rent (the ones I’m most aware of are statistics and computer-science related ones). For example, better algorithms in theory (say, Strassen’s algorithm for multiplying matrices) often correspond to better results in practice. Even more abstract stuff like number theory or recursion theory does yield testable predictions.
2) Even things that can’t pay rent directly can be logical implications of other things that pay rent. Eliezer wrote about this kind of reasoning here.
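To make point 1 concrete, here is Strassen’s trick at the 2×2 base case: seven scalar multiplications instead of the naive eight. Applied recursively to matrix blocks, this is what yields the theoretical speedup that practice can then confirm (the function name is mine; this is just an illustrative sketch):

```python
def strassen_2x2(A, B):
    # Multiply two 2x2 matrices using Strassen's seven products
    # (naive multiplication needs eight).
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    p1 = a * (f - h)
    p2 = (a + b) * h
    p3 = (c + d) * e
    p4 = d * (g - e)
    p5 = (a + d) * (e + h)
    p6 = (b - d) * (g + h)
    p7 = (a - c) * (e + f)
    return [[p5 + p4 - p2 + p6, p1 + p2],
            [p3 + p4, p1 + p5 - p3 - p7]]

print(strassen_2x2([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
```

The abstract belief “seven multiplications suffice” pays rent as a concrete anticipation: the rearranged sums must reproduce the ordinary matrix product, and they do.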
If we extend the concept of making beliefs pay rent to structures in computer memory, then AIs could better choose which structures are worth more than they cost when many objects are shared in an acyclic network. Each object at the bottom could cost 1, and any objects pointing at x equally share the cost of x, plus 1 for themselves. If beliefs are stored in these memory structures, then a belief would be evicted when its objective cost exceeds some measure of its value, and total value would be in units of memory available. When some beliefs are evicted, the structures they depended on become more expensive for the remaining beliefs that depend on them, because the number of beliefs sharing the cost decreases. On the other hand, if many beliefs depend on a certain structure in memory, the many sharing the cost each pay less.
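The cost-sharing scheme described above can be sketched as follows (the graph, names, and numbers are hypothetical illustrations, not anything specified in the comment):

```python
from collections import Counter

def count_dependents(graph):
    # graph maps each object to the list of objects it points at (its supports)
    deps = Counter()
    for supports in graph.values():
        for s in supports:
            deps[s] += 1
    return deps

def total_cost(node, graph, deps):
    # 1 for the object itself, plus an equal share of each support's cost
    cost = 1.0
    for s in graph.get(node, []):
        cost += total_cost(s, graph, deps) / deps[s]
    return cost

# Two beliefs share one underlying structure, so each pays half its cost:
graph = {"belief_a": ["shared"], "belief_b": ["shared"], "shared": []}
deps = count_dependents(graph)
print(total_cost("belief_a", graph, deps))  # 1 + 1/2 = 1.5
```

Evicting belief_b would leave belief_a paying the full cost of the shared structure (2.0 instead of 1.5), matching the dynamic described above.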
This is enlightening.
I don’t believe this is a good example. That information actually can change your anticipation.
By knowing that information you can expect the book to be set in a post-utopian world. By anticipating that, you can perhaps take better notice of the setting and of how exactly the world is post-utopian.
But a great article nevertheless.
I don’t get it. Any belief could be said to “pay rent” if you can conceive of a situation where it will be useful later on.
Here is a general situation that I made up:
Given any belief X that at least two people believe, I always have utility in believing X (I think it should be “knowing”), as it helps me predict the actions of the two other people who believe X.
Even in the example where the student regurgitates it onto the upcoming quiz, the belief had utility for him, as he could use it to improve his grades (constraining reality in a way he wants it to be).
I believe you should judge your beliefs based on their expected utility in the future (extremely hard to calculate).
P.S. This is my first comment/post. Forgive me if it is a bit rough.
Just so. And a belief that leads to correct predictions will (generally) be more useful than a belief that doesn’t.
I think I see a confusion with the term “eviction” here. There is a difference between believing X exists (knowing about X) and believing X is true (believing X). So “evicting X” should be understood as “no longer believing X”, rather than “erasing all knowledge of X” (which happens involuntarily anyway).
I hope this was helpful, as this is my first comment, too. Anyway, I’ve lurked awhile and I don’t think anyone here would begrudge you raising an honest question.
P.S. Welcome to less wrong :) !!!
Edit: formatting.
Yes! And another way to think about the arguments about beliefs that aren’t predicting anything is that they are really about definitions. When I listen to people talk and argue, I often find myself thinking “well, this depends on how you define X”. For example, is sound something that a living creature perceives, or is it vibrations in the air?
Why is ‘constraining anticipation’ the only acceptable form of rent?
What if a belief doesn’t modify the predictions generated by the map, but it does reduce the computational complexity of moving around the map in our imaginations? It hasn’t reduced anticipation in theory, but in practice it allows us to more cheaply collapse anticipation fields, because it lowers the computational complexity of reasoning about what to anticipate in a given scenario. I find concepts like the multiverse very useful here—you don’t ‘need’ them to reduce your anticipation as long as you’re willing to spend more time and computation to model a given situation, but the multiverse concept is very, very useful in quickly collapsing anticipation fields about spaces of possible outcomes.
Or, what if a belief just makes you feel really good and gives you a ton of energy, allowing you to more successfully accomplish your goals and avoid worrying about things that your rational mind knows are low probability, but which you haven’t been able to un-stick from your brain? Does that count as acceptable rent? If not, why not?
Or, what if a belief just steamrolls over the prediction-making process and hardwires useful actions in a given context? If you took a pill that made you become totally blissed out, wireheading you, but it made you extremely effective at accomplishing your goals prior to taking the pill, why wouldn’t you take it?
What’s so special about making predictions, over, say, overcoming fear, anxiety and akrasia?
Then what is the difference between belief and assumption in our mental maps?
What about imagination? Is that a belief, an assumption, or an incongruent map of reality?
Can imagination be part of mental processing without making us wrong about reality?
For instance, if I imagine that all buses in my city are blue, though they are red, can I then walk around with this model of reality in my head without holding a false belief? After all, it’s just imagination.
Or is this model going to corrupt my thinking as I walk about thinking it, knowing full well it’s not true?
Furthermore, what does the question really ask?
Does the tree fall, first of all? If it does, who is asking?
Who knows the tree? Who knows where it fell, and how far, and so on?
The question is nonsensical in that it assumes it can be asked without cognitive bias.
The question itself is cognitive bias.
If we tie down abstract thinking immediately to reality, there is no creative process to be had.
Imagination then leaves no room for us to abstract or use the mental processes that bog us down in everyday life, and thus we never form connections that allow us to think anything else.
It’s either true or not, but the result of sensory and thinking processes such as logic is predictable, if done perfectly.
Even language can be a cognitive bias.
So if we translate the question of falling trees into reality (that is, you know what a falling tree looks like), the question is pointless. You have experienced a tree falling.
The question then makes no sense.
It’s irrelevant.
You just know that there are no trees that fall and fail to make a sound.
There is no ‘if’.
There is no logic to be used.
It’s like walking around, seeing a tree falling, and asking people, ‘Did you hear that? It made a sound?’
If, however, we word the question as: do all trees make a sound, all the time, under all conditions, here on Earth? Do all trees that fall and hit the ground make a sound? Then the question is what to make of that.
For instance do all matches burn? How can we know if we don’t try them all out?
So in a strict abstract sense we can be sure that our model is true, as long as all trees make a sound as we see them falling, but there is a chance that a tree falls and we won’t hear it make a sound.
A couple of important limitations to the concept:
The concept assumes that beliefs should be tied to observable, testable phenomena. However, there are many important aspects of life and human experience (like emotions, subjective experiences, and certain philosophical or religious beliefs) that aren’t easily observable or testable. The concept can be less applicable or useful in these areas.
It also doesn’t address truth value: The concept encourages beliefs to be tied to specific anticipations, but it doesn’t necessarily address the truth value of those beliefs. A belief can generate specific anticipations and still be false, or not generate specific anticipations and still be true.
This concept doesn’t explain why certain beliefs persist even when they don’t lead to accurate anticipations. Factors such as cultural tradition, emotional comfort, cognitive biases, and lack of exposure to alternative viewpoints can all contribute to the persistence of beliefs, even when they don’t “pay rent” in terms of generating accurate predictions.
There’s a risk that people might selectively interpret their experiences to confirm their existing beliefs. This can lead to a situation where beliefs seem to generate accurate anticipations, even when they’re not actually based on valid reasoning or evidence.
The post isn’t meant to be an explanation for why beliefs exist, it’s meant to highlight that by default, people have a bundle of things-that-feel-like beliefs that all seem to be a similar shape. But, if your goal is to figure out what’s true and make good plans, it’s very important to separate out which of your ‘beliefs’ are about predicting reality, and which are there for other reasons.
Do we know the atoms are in fact there? All “rationality” has to start from irrational beliefs or axioms in order to get anywhere. For instance, I assume people here believe in external reality and other minds, as do I; if not, well, that’s a whole other can of worms. I doubt folks here are solipsists.
I would say you do experience the floor directly, as it takes more than just your eyes and brain to make the experience; like you said, you see the light reflected off something. It’s also not really inferring the floor from seeing it: if I see a floor, there is a floor, unless something causes me to doubt it. After all, illusions are things that end up being disproven through testing.
Though my original point still stands: rationality can’t tell you everything. Some stuff you just gotta believe, and some things can’t be determined rationally. External reality and atoms are just things you gotta believe, since you cannot truly verify an external world. In matters of morality or taste, rationality does nothing either. Choosing a flavor of ice cream doesn’t really have any rational basis, after all.
But indeed, I experience the floor directly; the experience of the floor is not limited to visual perception but also involves direct sensory inputs. The sensation caused by gravitational pull and the counter-pressure from the floor are experienced directly. Additionally, the sound produced when stepping on the floor and the anticipation of the floor’s existence contribute to the direct experience of the floor. Therefore, the floor is experienced directly through a combination of sensory inputs, including but not limited to, visual, tactile, auditory, and proprioceptive sensations.
I guess that “paying rent” was not only a metaphor xD. But it’s really good advice to cut back on everything that lacks practical use when you’re on a tight schedule/budget.