The Bottom Line
There are two sealed boxes up for auction, box A and box B. One and only one of these boxes contains a valuable diamond. There are all manner of signs and portents indicating whether a box contains a diamond; but I have no sign which I know to be perfectly reliable. There is a blue stamp on one box, for example, and I know that boxes which contain diamonds are more likely than empty boxes to show a blue stamp. Or one box has a shiny surface, and I have a suspicion—I am not sure—that no diamond-containing box is ever shiny.
Now suppose there is a clever arguer, holding a sheet of paper, and they say to the owners of box A and box B: “Bid for my services, and whoever wins my services, I shall argue that their box contains the diamond, so that the box will receive a higher price.” So the box-owners bid, and box B’s owner bids higher, winning the services of the clever arguer.
The clever arguer begins to organize their thoughts. First, they write, “And therefore, box B contains the diamond!” at the bottom of their sheet of paper. Then, at the top of the paper, the clever arguer writes, “Box B shows a blue stamp,” and beneath it, “Box A is shiny,” and then, “Box B is lighter than box A,” and so on through many signs and portents; yet the clever arguer neglects all those signs which might argue in favor of box A. And then the clever arguer comes to me and recites from their sheet of paper: “Box B shows a blue stamp, and box A is shiny,” and so on, until they reach: “and therefore, box B contains the diamond.”
But consider: At the moment when the clever arguer wrote down their conclusion, at the moment they put ink on their sheet of paper, the evidential entanglement of that physical ink with the physical boxes became fixed.
It may help to visualize a collection of worlds—Everett branches or Tegmark duplicates—within which there is some objective frequency at which box A or box B contains a diamond.1
The ink on paper is formed into odd shapes and curves, which look like this text: “And therefore, box B contains the diamond.” If you happened to be a literate English speaker, you might become confused, and think that this shaped ink somehow meant that box B contained the diamond. Subjects instructed to say the color of printed pictures and shown the word Green in red ink often say “green” instead of “red.” It helps to be illiterate, so that you are not confused by the shape of the ink.
To us, the true import of a thing is its entanglement with other things. Consider again the collection of worlds, Everett branches or Tegmark duplicates. At the moment when all clever arguers in all worlds put ink to the bottom line of their paper—let us suppose this is a single moment—it fixed the correlation of the ink with the boxes. The clever arguer writes in non-erasable pen; the ink will not change. The boxes will not change. Within the subset of worlds where the ink says “And therefore, box B contains the diamond,” there is already some fixed percentage of worlds where box A contains the diamond. This will not change regardless of what is written on the blank lines above.
So the evidential entanglement of the ink is fixed, and I leave it to you to decide what it might be. Perhaps box owners who believe a better case can be made for them are more liable to hire advertisers; perhaps box owners who fear their own deficiencies bid higher. If the box owners do not themselves understand the signs and portents, then the ink will be completely unentangled with the boxes’ contents, though it may tell you something about the owners’ finances and bidding habits.
Now suppose another person present is genuinely curious, and they first write down all the distinguishing signs of both boxes on a sheet of paper, and then apply their knowledge and the laws of probability and write down at the bottom: “Therefore, I estimate an 85% probability that box B contains the diamond.” Of what is this handwriting evidence? Examining the chain of cause and effect leading to this physical ink on physical paper, I find that the chain of causality wends its way through all the signs and portents of the boxes, and is dependent on these signs; for in worlds with different portents, a different probability is written at the bottom.
So the handwriting of the curious inquirer is entangled with the signs and portents and the contents of the boxes, whereas the handwriting of the clever arguer is evidence only of which owner paid the higher bid. There is a great difference in the indications of ink, though one who foolishly read aloud the ink-shapes might think the English words sounded similar.
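The difference in entanglement can be made concrete with a toy simulation (not from the original essay; the 80%/30% stamp frequencies and the coin-flip auction are illustrative assumptions). The clever arguer’s bottom line is fixed by which owner won the bidding, which here is independent of the boxes; the curious inquirer’s bottom line is computed from the blue-stamp sign. Conditioning on each kind of ink shows one carries evidence and the other does not:

```python
import random

random.seed(0)
N = 100_000
arguer_n = arguer_hits = 0
inquirer_n = inquirer_hits = 0

for _ in range(N):
    # Each "world": the diamond is in box B with probability 1/2.
    diamond_in_b = random.random() < 0.5
    # A noisy sign: the blue stamp appears on box B 80% of the time if B
    # holds the diamond, 30% of the time otherwise (assumed numbers).
    stamp_on_b = random.random() < (0.8 if diamond_in_b else 0.3)
    # Clever arguer: the bottom line is fixed by whoever bid higher,
    # modeled as a coin flip unrelated to the boxes' contents.
    arguer_says_b = random.random() < 0.5
    if arguer_says_b:
        arguer_n += 1
        arguer_hits += diamond_in_b
    # Curious inquirer: the bottom line follows from the sign itself.
    if stamp_on_b:
        inquirer_n += 1
        inquirer_hits += diamond_in_b

print(f"P(diamond in B | arguer concluded B) ~ {arguer_hits / arguer_n:.2f}")
print(f"P(diamond in B | stamp observed on B) ~ {inquirer_hits / inquirer_n:.2f}")
```

The first frequency stays near 1/2: within the subset of worlds where the arguer’s ink says “box B,” the diamond is no likelier to be there. The second moves to roughly 0.73, matching Bayes’ rule: 0.8 × 0.5 / (0.8 × 0.5 + 0.3 × 0.5) ≈ 0.727.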
Your effectiveness as a rationalist is determined by whichever algorithm actually writes the bottom line of your thoughts. If your car makes metallic squealing noises when you brake, and you aren’t willing to face up to the financial cost of getting your brakes replaced, you can decide to look for reasons why your car might not need fixing. But the actual percentage of you that survive in Everett branches or Tegmark worlds—which we will take to describe your effectiveness as a rationalist—is determined by the algorithm that decided which conclusion you would seek arguments for. In this case, the real algorithm is “Never repair anything expensive.” If this is a good algorithm, fine; if this is a bad algorithm, oh well. The arguments you write afterward, above the bottom line, will not change anything either way.
This is intended as a caution for your own thinking, not a Fully General Counterargument against conclusions you don’t like. For it is indeed a clever argument to say “My opponent is a clever arguer,” if you are paying yourself to retain whatever beliefs you had at the start. The world’s cleverest arguer may point out that the Sun is shining, and yet it is still probably daytime.
1. Max Tegmark, “Parallel Universes,” in Science and Ultimate Reality: Quantum Theory, Cosmology, and Complexity, ed. John D. Barrow, Paul C. W. Davies, and Charles L. Harper Jr. (New York: Cambridge University Press, 2004), 459–491, http://arxiv.org/abs/astro-ph/0302131.
- Apr 9, 2011, 3:08 PM; 0 points) 's comment on What is wrong with “Traditional Rationality”? by (
- Jul 30, 2009, 2:21 PM; 0 points) 's comment on The Obesity Myth by (
- Mar 24, 2012, 7:28 AM; 0 points) 's comment on You don’t need Kant by (
- Nov 6, 2009, 3:39 AM; 0 points) 's comment on Open Thread: November 2009 by (
- Apr 16, 2012, 5:49 PM; 0 points) 's comment on Our Phyg Is Not Exclusive Enough by (
- Apr 15, 2012, 9:26 PM; 0 points) 's comment on AI Risk and Opportunity: A Strategic Analysis by (
- Jun 15, 2015, 6:29 PM; 0 points) 's comment on Epistemic Trust: Clarification by (
- Dec 26, 2012, 7:39 AM; 0 points) 's comment on What Evidence Filtered Evidence? by (
- Apr 2, 2015, 12:48 AM; 0 points) 's comment on What Evidence Filtered Evidence? by (
- May 10, 2012, 8:59 AM; 0 points) 's comment on Why do people ____? by (
- Oct 4, 2011, 6:41 AM; 0 points) 's comment on Pascal’s wager re-examined by (
- Nov 15, 2007, 7:04 PM; -1 points) 's comment on Thou Art Godshatter by (
- Apr 27, 2009, 4:57 PM; -1 points) 's comment on Should we be biased? by (
- Jul 16, 2024, 1:21 PM; -1 points) 's comment on Most smart and skilled people are outside of the EA/rationalist community: an analysis by (
- May 11, 2011, 10:59 PM; -2 points) 's comment on How should I help us achieve immortality? by (
- Apr 15, 2021, 6:55 PM; -3 points) 's comment on The consequentialist case for social conservatism, or “Against Cultural Superstimuli” by (
- What Are Your Preferences Regarding The FLI Letter? by Apr 1, 2023, 4:52 AM; -4 points) (
- Apr 28, 2010, 7:28 PM; -6 points) 's comment on What is missing from rationality? by (
- Dec 31, 2012, 6:58 PM; -6 points) 's comment on By Which It May Be Judged by (
For the person who reads and evaluates the arguments, the question is: what would count as evidence about whether the author wrote the conclusion down first or at the end of their analysis? It is noteworthy that most media, such as newspapers or academic journals, appear to do little to communicate such evidence. So either such evidence is hard to obtain, or few readers are interested in it.
“What would count as evidence about whether the author wrote the conclusion down first or at the end of his analysis?”:
Past history of accuracy/trustworthiness;
Evidence of a lack of incentive for bias;
Spot check results for sampling bias.
The last may be unreliable if (a) you're the author, or (b) your spot-check evidence source is itself biased, e.g. by a generally accepted but biased paradigm.
In the real world this is complicated by the fact that the bottom line may have only been "pencilled in", biasing the argument, and then been adjusted as a result of the argument. E.g.:
“Pencilled in” bottom line is 65;
Unbiased bottom line would be 45;
Adjusted bottom line is 55: neither correct, nor as incorrect as the original "pencilled in" value.
This “weak bias” algorithm can be recursive, leading eventually (sometimes over many years) to virtual elimination of the original bias, as often happens in scientific and philosophical discourse.
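The recursive "weak bias" process described above can be sketched in a few lines. This is a minimal illustration of the idea, not anything from the original comment: the function name, the 50% pull strength, and the round count are all hypothetical assumptions chosen to reproduce the 65 → 55 example.

```python
# Hypothetical sketch of the recursive "weak bias" adjustment above:
# each round of discourse pulls the pencilled-in bottom line partway
# toward the unbiased value. The 0.5 pull factor is an illustrative
# assumption matching the 65 -> 55 example in the comment.

def adjust(pencilled_in: float, unbiased: float, pull: float = 0.5) -> float:
    """One round: arguments drag the bottom line `pull` of the way
    from the biased starting point toward the unbiased value."""
    return pencilled_in + pull * (unbiased - pencilled_in)

value, unbiased = 65.0, 45.0
history = [value]
for _ in range(10):  # ten rounds of discourse
    value = adjust(value, unbiased)
    history.append(value)

print(history[1])   # 55.0 -- the "adjusted bottom line" in the example
print(history[-1])  # within 0.02 of 45: the original bias washes out
```

After one round the value is 55, matching the example; after many rounds the residual bias shrinks geometrically, which is the "virtual elimination of the original bias" the comment describes.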
If you’re reading someone else’s article, then it’s important to know whether you’re dealing with a sampling bias when looking at the arguments (more on this later). But my main point was about the evidence we should derive from our own conclusions, not about a Fully General Counterargument you could use to devalue someone else’s arguments. If you are paid to cleverly argue, then it is indeed a clever argument to say, “My opponent is only arguing cleverly, so I will discount it.”
However, it is important to try to determine whether someone is a clever arguer or a curious inquirer when they are trying to convince you of something. I.e., if you were in the diamond-box scenario you should conclude (all other things being roughly equal) that the curious inquirer's conclusion is more likely to be true than the clever arguer's. It doesn't really matter whether the source is internal or external, as long as you're making the right determination. Basically, if you're going to think about whether someone is being a clever arguer or a curious inquirer, you have to be a curious inquirer about getting that information, not try to cleverly make a Fully General Counterargument.
A sign S “means” something T when S is a reliable indicator of T. In this case, the clever arguer has sabotaged that reliability.
ISTM the parable presupposes (and needs to) that what the clever arguer produces is ordinarily a reliable indicator that box B contains the diamond, i.e., ordinarily means that. It would be pointless otherwise.
Therein lies a question: Is he necessarily able to sabotage it? Posed the contrary way: are there formats which he can't effectively sabotage but which suffice to express the interesting arguments?
There are formats that he can't sabotage, such as rigorous machine-verifiable proof, but it is a great deal of work to use them even for their natural subject matter. So: yes, with difficulty, for math-like topics.
For science-like topics in general, I think the answer is probably that it’s theoretically possible. It needs more than verifiable logic, though. Onlookers need to be able to verify experiments, and interpretive frameworks need to be managed, which is very hard.
For squishier topics, I make no answer.
The trick is to counterfeit the blue stamps :)
Can anyone give me the link here between Designing Social Inquiry by KKV and this post, because I feel that there is one.
I don’t think it’s either. Consider the many blog postings and informal essays—often on academic topics—which begin or otherwise include a narrative along the lines of ‘so I was working on X and I ran into an interesting problem/a strange thought popped up, and I began looking into it...’ They’re interesting (at least to me), and common.
So I think the reason we don’t see it is that A) it looks biased if your Op-ed on, say, the latest bailout goes ‘So I was watching Fox News and I heard what those tax-and-spend liberals were planning this time...’, so that’s incentive to avoid many origin stories; and B) it’s seen as too personal and informal. Academic papers are supposed to be dry, timeless, and rigorous. It would be seen as in bad taste if Newton’s Principia had opened with an anecdote about a summer day out in the orchard.
Non Sequitur presents the bottom line literally.
...And your effectiveness as a person is determined by whichever algorithm actually causes your actions.
Define “effectiveness as a person”—in many cases the bias leading to the pre-written conclusion has some form of survival value (e.g. social survival). Due partly to childhood issues resulting in a period of complete? rejection of the value of emotions, I have an unusually high resistance to intellectual bias, yet on a number of measures of “effectiveness as a person” I do not seem to be measuring up well yet (on some others I seem to be doing okay).
Also, as I mentioned in my reply to the first comment, real world algorithms are often an amalgam of the two approaches, so it is not so much which algorithm as what weighting the approaches get. In most (if not all) people this weighting changes with the subject, not just with the person’s general level of rationality/intellectual honesty.
As it is almost impossible to detect and neutralize all of one’s biases and assumptions, and dangerous to attempt “counter-bias”, arriving at a result known to be truly unbiased is rare. NOTE: Playing “Devil’s Advocate” sensibly is not “counter-bias” and in a reasonable entity will help to reveal and neutralize bias.
I think bias is irrelevant here. My point was that, whatever your definition of “effectiveness as a person”, your actions are determined by the algorithm that caused them, not by the algorithm that you profess to follow.
I guess that this algorithm is called "emotions", and we are mostly an emotional dog wagging a rational tail.
You might be tempted to say "Well, this is kinda obvious," but in my experience, LW included, most people are not aware of and don't spend any time considering what emotions are really driving their bottom line, and instead get lost discussing superficial arguments ad nauseam.
The idea here has stuck with me as one of the best nuggets of wisdom from the sequences. My current condensation of it is as follows:
If you let reality have the final word, you might not like the bottom line. If instead you keep deliberating until the balance of arguments supports your preferred conclusion, you’re almost guaranteed to be satisfied eventually!
Inspired by the above, I offer the pseudocode version...
… the code above implements “the balance of arguments” as a function parameterized with weights. This allows for using an optimization process to reach one’s desired conclusion more quickly :)
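A minimal sketch of what such a weighted "balance of arguments" might look like, written to illustrate the satire in the comments above. Every name here (`balance`, `argue_until_satisfied`, the evidence values) is a hypothetical assumption of mine, not the commenter's actual code; the "optimization process" is just inflating whichever argument's weight helps most.

```python
# Hedged sketch: motivated reasoning as weight optimization.
# Assumes at least one argument favours the desired conclusion
# (a positive value), otherwise the loop would never terminate.

def balance(arguments: list[float], weights: list[float]) -> float:
    """Weighted sum of arguments: positive favours the bottom line."""
    return sum(w * a for w, a in zip(weights, arguments))

def argue_until_satisfied(arguments: list[float]) -> list[float]:
    """Nudge the weights until the 'balance of arguments' supports
    the pre-written conclusion -- the optimization being satirized."""
    weights = [1.0] * len(arguments)
    while balance(arguments, weights) <= 0:
        # inflate whichever argument currently helps the most
        i = max(range(len(arguments)), key=lambda j: arguments[j])
        weights[i] += 1.0
    return weights

evidence = [-2.0, -1.0, 0.5]      # evidence mostly against the conclusion
w = argue_until_satisfied(evidence)
print(balance(evidence, w) > 0)   # True: the "balance" favours it anyway
```

The point of the sketch is the comment's joke made literal: if the weights are free parameters, "keep deliberating" is just gradient ascent on your preferred bottom line.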
Good fable. If we swap out the diamond macguffin for logic itself, it's a whole new level of Gödelian pain: can weak-bias prior iterations catch this out? Some argue analogue intuitions live through these formal paradox gardens, but my own intuition doubts this... maybe my intuition is too formal, who knows?
Also, some "intuitions" are heavily resistant to forgetting about the diamond because they want it badly, and then the measures used to collect data often interfere with the sense of the world, and thus with reality. I suspect "general intelligence" and "race" are examples of these pursuits (separately and together) (I think they mean smarts and populations, but proponents hate that). Thus AGI is a possible goose chase, especially when we are the measure of all things looking for greener pastures. This is how cognitive dissonance is possible in otherwise non-narcissistic members of humanity.
Also, beware of any enterprise that requires new clothes, this applies even if you are not an emperor.
Shiny diamond negligees in particular.