Expecting Short Inferential Distances
Homo sapiens’s environment of evolutionary adaptedness (a.k.a. EEA or “ancestral environment”) consisted of hunter-gatherer bands of at most 200 people, with no writing. All inherited knowledge was passed down by speech and memory.
In a world like that, all background knowledge is universal knowledge. All information not strictly private is public, period.
In the ancestral environment, you were unlikely to end up more than one inferential step away from anyone else. When you discover a new oasis, you don’t have to explain to your fellow tribe members what an oasis is, or why it’s a good idea to drink water, or how to walk. Only you know where the oasis lies; this is private knowledge. But everyone has the background to understand your description of the oasis, the concepts needed to think about water; this is universal knowledge. When you explain things in an ancestral environment, you almost never have to explain your concepts. At most you have to explain one new concept, not two or more simultaneously.
In the ancestral environment there were no abstract disciplines with vast bodies of carefully gathered evidence generalized into elegant theories transmitted by written books whose conclusions are a hundred inferential steps removed from universally shared background premises.
In the ancestral environment, anyone who says something with no obvious support is a liar or an idiot. You’re not likely to think, “Hey, maybe this person has well-supported background knowledge that no one in my band has even heard of,” because it was a reliable invariant of the ancestral environment that this didn’t happen.
Conversely, if you say something blatantly obvious and the other person doesn’t see it, they’re the idiot, or they’re being deliberately obstinate to annoy you.
And to top it off, if someone says something with no obvious support and expects you to believe it—acting all indignant when you don’t—then they must be crazy.
Combined with the illusion of transparency and self-anchoring (the tendency to model other minds as though they were slightly modified versions of oneself), I think this explains a lot about the legendary difficulty most scientists have in communicating with a lay audience—or even communicating with scientists from other disciplines. When I observe failures of explanation, I usually see the explainer taking one step back, when they need to take two or more steps back. Or listeners assume that things should be visible in one step, when they take two or more steps to explain. Both sides act as if they expect very short inferential distances from universal knowledge to any new knowledge.
A biologist, speaking to a physicist, can justify evolution by saying it is the simplest explanation. But not everyone on Earth has been inculcated with that legendary history of science, from Newton to Einstein, which invests the phrase “simplest explanation” with its awesome import: a Word of Power, spoken at the birth of theories and carved on their tombstones. To someone else, “But it’s the simplest explanation!” may sound like an interesting but hardly knockdown argument; it doesn’t feel like all that powerful a tool for comprehending office politics or fixing a broken car. Obviously the biologist is infatuated with their own ideas, too arrogant to be open to alternative explanations which sound just as plausible. (If it sounds plausible to me, it should sound plausible to any sane member of my band.)
And from the biologist’s perspective, they can understand how evolution might sound a little odd at first—but when someone rejects evolution even after the biologist explains that it’s the simplest explanation, well, it’s clear that nonscientists are just idiots and there’s no point in talking to them.
A clear argument has to lay out an inferential pathway, starting from what the audience already knows or accepts. If you don’t recurse far enough, you’re just talking to yourself.
If at any point you make a statement without obvious justification in arguments you’ve previously supported, the audience just thinks you’re crazy.
This also happens when you allow yourself to be seen visibly attaching greater weight to an argument than is justified in the eyes of the audience at that time. For example, talking as if you think “simpler explanation” is a knockdown argument for evolution (which it is), rather than a sorta-interesting idea (which it sounds like to someone who hasn’t been raised to revere Occam’s Razor).
Oh, and you’d better not drop any hints that you think you’re working a dozen inferential steps away from what the audience knows, or that you think you have special background knowledge not available to them. The audience doesn’t know anything about an evolutionary-psychological argument for a cognitive bias to underestimate inferential distances leading to traffic jams in communication. They’ll just think you’re condescending.
And if you think you can explain the concept of “systematically underestimated inferential distances” briefly, in just a few words, I’ve got some sad news for you . . .
The explanation from the ancestral environment seems likely. However, there is also a rational argument for refusing to accept a claim unless all the steps have been laid out from your own knowledge to the claim. While there are genuine truth-seekers who have genuinely found truth and whom we therefore should, ideally, believe, nevertheless a blanket policy of simply taking these people at their word has the unfortunate side effect of also rendering us vulnerable to humbug, because we are not equipped to tell apart the humbug from the true statements many steps removed from our knowledge.
At the same time, people do not universally reject claims that are many steps removed from their own experience. After all, scientists have made headway with the public. And unfortunately, humbug also regularly makes headway. There have always been niches in society for people claiming esoteric knowledge.
I think it’s about the extent to which you have reason to believe you can trust authority without evidence. What if someone meets ‘omega’, who is as 100% trustable as the laws of gravity, in their empirical experience? Then it’s 100% rational to trust them, perhaps even over their own senses, which are sometimes illusory.
Induce fear to get people to stick with the status quo or make a non-choice, and frustrated anger to get them to take risks. When people are told something without explanation, they might react with fear out of awe, or anger out of frustration that you haven’t presented something rational to them. Vice versa is possible too. Therefore, I would predict that the inferential distance doesn’t have a 1:1 relationship with the uptake of that information.
Eliezer, this is a great insightful observation.
The young seem especially vulnerable to accepting whatever they are told. Santa Claus and all that, but also any nonsense fed to them by their schools. Schools for the young are particularly effective instruments for indoctrinating a population. In contrast, the old tend to be quite a bit more resistant to new claims—for better and for worse.
An evolutionary explanation for this is fairly easy to come up with, I think. Children have a survival need to learn as much as they can as quickly as they can, and adults have a vital role as their teachers. In their respective roles, it is best for adults to be unreceptive to new claims, so that their store of knowledge remains a reliable archive of lessons from the past, and it is best for the young to accept whatever they are told without wasting a lot of time questioning it.
It is too easy to come up with a just-so story like this. How would you rephrase it to make it testable?
Here is a counterstory:
Children have a survival need to learn only well-tested knowledge; they cannot afford to waste their precious developmental years believing wrong ideas. Adults, however, have already survived their juvenile years, and so they are presumably more fit. Furthermore, once an adult successfully reproduces, natural selection no longer cares about them; neither senescence nor gullibility affect an adult’s fitness. Therefore, we should expect children to be skeptical and adults to be gullible.
This counterstory doesn’t function.
A child’s development is not consciously controlled, and they are protected by adults, so believing incorrect things temporarily doesn’t harm their development at all.
If you wish to produce a counterstory, make it an actual plausible one. Even if it were the case that children tended to be more skeptical of claims, your story would REMAIN obviously false; whereas Constant’s story would remain an important factor, and would raise the question of why we don’t see what would be expected given the relevant facts.
I’ve just learned that there is interesting research on this topic. Sorry I don’t have better links.
Interesting. Although that strongly suggests that children are in fact more gullible about specifically religious stories. I’d have to wonder if they are actually more gullible about those, have been primed to think that religious stories are allowed to have more fantastic elements and still be true, or have found out that expressing skepticism of such stories is more likely to result in negative consequences. The last seems unlikely to me.
As long as we’re on the subject of evolutionary psychology/sociobiology/whatever: if someone tries to argue against it by saying it’s just a bunch of reactionaries trying to justify inequity, you can point to the data which says it ain’t so. Another soldier sent against the army of reductionism defeated, surely a signal from Providence that all will be assimilated.
For example, talking as if you think “simpler explanation” is a knockdown argument for evolution (which it is)
I don’t quite agree—by itself, X being “simpler” is a reason to increase my subjective belief that X is true (assuming that X is in a field where simplicity generally works) but it’s not enough to prove e.g. creationism false. Rather, it is the total lack of evidence for anything supernatural that is the knockdown argument—if I had reason to believe that even one instance of say, ghosts or the effects of prayer were true, then I’d have to think that creationism was possible as well.
This is certainly an insightful post. I’m not sure the example is that compelling though.
If you argue with a young-earth creationist, they could full well understand what you mean, but simply disagree and claim that “God did it” is a simpler explanation still. In fact, if we were to presuppose that an intelligent being of infinite power existed and created things, it seems it would actually be a simpler explanation.
Most people, though perhaps not all, who have no belief in an omnipotent designer will pretty quickly accept evolution. So that might not be the cognitive problem in that situation.
I’m sure there are examples out there (opposition to free trade, perhaps?), but the rejection of evolution in favor of creationism is rather more complex and deep rooted.
Not necessarily. The introduction of God into the story actually makes the theory quite a bit more complex, as far as amount of information stored goes. The length of time it takes to explain your theory does not necessarily correlate to how simple it is. “God did it” is monumentally more complex than “The random process of natural selection ensures that those organisms which have mutations that lend them a better chance of survival will, on average, be more likely to survive and pass those mutated genes on to the next generation than an organism without beneficial mutations, etc etc etc.”
Though actually, if you look closely at the two arguments above, they don’t necessarily contradict each other. :3 I personally feel that “God did it” is a simpler explanation than “Amino acids magically combined via processes we don’t understand and haven’t been able to duplicate, creating life essentially ex nihilo”… but that doesn’t at all mean that either of these explanations is objectively simple!
Have you read Occam’s Razor?
I just reread it; thank you for allowing to see one of Eliezer’s posts in a new light. Always a pleasure.
However, I have other data at hand that seems to lend credence to the “God exists” theory; I don’t have to rely on the results of one test. If I did, then by that same logic, we would always have to assume that a coin once flipped would be 100% biased toward the side upon which it landed.
Your program, in order to describe the universe, has to be the best model of every single point in the universe. I’m sure there were people who argued that Newton’s equations were simpler than General Relativity. But the data cannot be denied.
I think there are two distinct concepts here: One of them is Bayesian reasoning, and the other is Solomonoff induction (which is basically Occam’s Razor taken to its logical extreme).
Bayesian reasoning is applicable when you have some prior beliefs, usually formalized as probabilities for various theories being true (e.g. 50% chance God did it, 50% amino acids did it), and then you encounter some evidence (e.g. observe angels descend from the sky), and you now want to update your beliefs to be consistent with the evidence you encountered (e.g. 90% chance God did it, 10% amino acids did it). To emphasize, Bayesian reasoning is simply not applicable unless you have some prior belief to update.
Sounds like you’re referring to Bayesian reasoning here. You’re saying without that “other data”, you have some probabilities for your various theories, but then when you add in that data, you’re inclined to update your probabilities such that “God did it” becomes more probable.
In contrast, Occam’s Razor and Solomonoff induction do not work with “prior beliefs” (in fact, Solomonoff is often used, in theory, to bootstrap the Bayesian process, providing the “initial belief” from which you can start doing Bayesian updating). When using Solomonoff, you enumerate all conceivable theories, and then for each theory, you check whether it is compatible with the data you currently have. You don’t think in terms of “this theory is more probable given data set 1, but that theory is more probable given data set 2”. You simply mark each theory as “compatible” or “not compatible”. Once you’ve done that, you eliminate all theories which are “not compatible” (or equivalently, assign them a probability of 0). Now all that remains is to assign probabilities to the theories that remain (i.e. the ones which are compatible with the data you have). One naive way to do that is to just assign uniform probability to all remaining theories. Solomonoff induction actually states that you should assign probabilities based on the complexity of the theory.
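To make that procedure concrete, here is a minimal, purely illustrative Python sketch of the filter-then-weight step described above. The theory names, the `complexity_bits` values, and the compatibility predicates are all invented for the example; real Solomonoff induction works over programs, not a hand-picked list.

```python
# Toy illustration only: two hypothetical "theories" about a data set of numbers,
# each with a made-up description length in bits.

def theory_a(data):
    # Hypothetical theory A: every observation is positive.
    return all(x > 0 for x in data)

def theory_b(data):
    # Hypothetical theory B: consecutive observations alternate in sign.
    return all(data[i] * data[i + 1] < 0 for i in range(len(data) - 1))

theories = [
    {"name": "A", "complexity_bits": 10, "compatible": theory_a},
    {"name": "B", "complexity_bits": 25, "compatible": theory_b},
]

def complexity_weighted_posterior(theories, data):
    # Step 1: eliminate theories that are not compatible with the data.
    survivors = [t for t in theories if t["compatible"](data)]
    # Step 2: weight the survivors by 2^(-complexity) and normalize.
    weights = {t["name"]: 2.0 ** -t["complexity_bits"] for t in survivors}
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()} if total else {}

print(complexity_weighted_posterior(theories, [1]))        # both survive; A dominates
print(complexity_weighted_posterior(theories, [1, 2, 3]))  # only A survives
```

The naive uniform alternative mentioned above would simply replace the 2^(-complexity) weights with equal weights over the survivors.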
That’s actually not true. Coincidentally, I wrote a web app which illustrates a similar point: http://nebupookins.github.io/binary-bayesian-update/
Mentally relabel the button “Witnessed Failure” with “Saw a coin come up tails” and “Witnessed Success” with “Saw a coin come up heads”, then click the “Witnessed Success”/”Saw a coin come up heads” button.
Note that the result is not “You should assume that a coin is 100% biased towards head.”
Instead, the results are “There’s a 0% chance that the coin is 100% biased towards tail, a tiny chance that the coin is 99% biased towards tail, a slightly larger chance that the coin is 98% biased towards tail” and so on until you reach “about a 2% chance the coin is 100% biased towards head”, which is currently your most probable theory. But note that while “100% biased towards head” is your most probable theory, you’re extremely non-confident in that theory (only a 2% chance that the theory is true). You need to witness a lot more coin flips to increase your confidence levels (go ahead and click on the buttons a few more times).
Disclaimer: This web app actually uses the naive solution of initially assigning uniform probability to all possible theories, rather than the Solomonoff solution of assigning probability according to complexity.
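For the coin example specifically, the kind of update described here can be sketched in a few lines. This is an assumption-laden reconstruction, not the app’s actual code: a grid of 101 bias hypotheses, a naive uniform prior, and a single Bayesian update on one observed head.

```python
# Hypotheses: the coin's bias toward heads is one of 0.00, 0.01, ..., 1.00.
biases = [i / 100 for i in range(101)]
posterior = [1.0 / len(biases)] * len(biases)  # naive uniform prior

def update(posterior, heads):
    # Bayes' rule: multiply each hypothesis's prior probability by the
    # likelihood of the observed flip under that hypothesis, then renormalize.
    likelihoods = [p if heads else 1.0 - p for p in biases]
    unnormalized = [pr * lk for pr, lk in zip(posterior, likelihoods)]
    total = sum(unnormalized)
    return [u / total for u in unnormalized]

posterior = update(posterior, heads=True)  # "Saw a coin come up heads"
print(posterior[100])  # P(100% biased toward heads) ~ 0.0198, i.e. about 2%
print(posterior[0])    # P(100% biased toward tails) = 0
```

Repeated calls to `update` play the role of clicking the button more times: confidence in the extreme-heads hypothesis grows only gradually.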
Is “God did it” a simpler explanation than “amino acids combined via complex and unlikely processes we understand and can even replicate crudely, creating life from a perhaps murky but essentially non-magical source”?
What is your gut reaction?
When isolated in this manner, my gut reaction is “no”.
There is no scientist who claims amino acids magically appeared on earth. We have been able to simulate amino acid synthesis using conditions and simple inorganic molecules present on the young earth. Read the Wikipedia article for abiogenesis for a primer if you want to educate yourself.
Once you have posited a God to take care of the creation of the amino acids, “God did it” becomes much simpler an explanation of the rest—referring to an entity that has been established to exist is not a terribly long message.
When you say “A clear argument has to lay out an inferential pathway, starting from what the audience already knows or accepts. If you don’t recurse far enough, you’re just talking to yourself.”
this strongly reminds me of what it is like to try talking, as an atheist, with a Christian about any religious issue. I concluded years ago that I just shouldn’t try anymore, that reasonable verbal exchange is not possible...
I suppose that I should recurse… but how, and how far, I am not sure.
I’m sure that the Christian feels the same way. ;D The problem there isn’t inferential differences. It’s belief in belief. The best way to disabuse a Christian of any false notions—under the assumption that those notions are false—would be to lead them to Less Wrong. :P
Of course, you can lead a horse to water...
I don’t agree. I think the best way to disabuse them of such notions would be to lead them to extremely high status atheists including a community of highly attractive potential mates. You change group affiliation beliefs by changing desired group affiliation.
I think our disagreement stems from a fuzzy definition of the word “best”. I believe that it is better to believe something for good (read: valid) reasons than to believe it for bad reasons, regardless of the truth value of the thing being believed. So yes, your suggestion may lead more Christians to toss their Christianity, but mine makes them more rational thinkers, which (under the assumption that their Christian beliefs are wrong, which assumption I decline to assign a truth value in this post) leads them to atheism as a side benefit.
Essentially, this is the question posed: Which is the greater sin, if Christianity is wrong? Christianity, or irrationality?
The same influences that make people toss Christianity are also what will influence people to become more rational. Leading people to lesswrong on average makes them scoff then add things to their stereotype cache.
If Christianity is wrong then I’d say neither. ;)
This, if true, is horribly sad, and I concede the point, letting go of my faith in the inherent open-mindedness of humanity. Of course, I might have known better; my own efforts have reaped no fruit except my wife thinking of Eliezer Yudkowsky as a rabid crackpot. :/
Ha! Then let me elucidate, and define the term “sin” to mean that action which runs against a given moral code.
This is probably because of the site design and not necessary.
That no doubt makes a difference but my appeal was to universal human behavior. Exposure to new, unusual behaviours from a foreign tribe will most often invoke a rejection and tweaking of social/political positions rather than an object level epistemic update. Because that’s what humans care about.
(This doesn’t preclude directing interested parties to lesswrong or other sources of object level information. We must just allow that there will be an extremely low rate of updating.)
You often say things with a certain simple realism that jives with me. I’ve definitely learned to appreciate the style more since I joined LW, and 10 times moreso since really absorbing a few subskills of a few SingInst folk. How much social psychology-like stuff have you studied? I get a weak impression that it’s not much more than the average LW regular but that unlike the average LW regular you have the good habit of regularly explicitly talking about (and thus assuredly explicitly thinking about) certain simple but oft-ignored phenomena of standard social epistemology—or perhaps they’d generally be better described as signalling games/competitions with an epistemic flavor. The very-related skill of “being constantly up a meta level” is really the only prerequisite skill for building the master-skill of being able to automatically immediately generate decent models of any real or imagined social epistemic scenario or automatically with-some-effort generate thorough complex models. You strike me as one of the people on LW who could build up this skill and make it a very sharp weapon, which would be generally useful to any community or organization in the coming years that is trying to raise its sanity waterline. (Vladimir_M also obviously has some kind of related skillset.)
I could link you to a concrete example or two in LW comments if you don’t quite follow what skill it is I’m getting at or how it’s cool.
Quite a lot but it is not specialised (into PUA etc). I’ve also probably forgotten a lot, since my interest peaked a few years back.
I think this would depend considerably on which particular non-Christian set of beliefs turned out to be right. Asking “how should we behave in a non-Christian universe?” sounds to me like asking “what should we feed to a non-cat?”.
I’ll ask you to review the child of this post wherein I provide a clearer definition of the term “sin”. It is a generally held consensus that there is in fact an objective morality which is causally disconnected from (or at least causally unaffected by) any extant religion. In that sense, my question is, I believe, sensical.
The above is predicated upon my inference, from your comment, that you read into my use of the word “sin” a religious connotation. Another possible inference is that you legitimately believe that we live in a Christian universe, and therefore that supposing counterfactuals is useless. In that case, I wonder how you get by during the day without making any plans based upon hypothetical events.
… and I also, in that case, appreciate not being the only Christian on this site. ;D But that doesn’t forgive your error.
I did see the comment in which you defined sin.
I’m not sure where our assumptions disconnect, so I’ll just try to spell out as many of mine as I can think of.
I assume that Christianity contains or constitutes claims about what the correct moral code is, such that accepting Christianity is true necessarily implies accepting a certain standard of right and wrong. I further assume that there exist at least two mutually-incompatible non-Christian claims about what the correct moral code is.
That is, if we reject Christian moral values, we still have to decide between Buddhism and Hinduism.
Let me verify your meaning before I respond in earnest: You are operating under the proposition that morality necessarily derives from religion?
...not exactly. It would be more accurate to say that I’m assuming that most religions, and Christianity in particular, imply moralities, but there may also be nonreligious moralities.
I realize I’m hugely oversimplifying (for example, by treating “Christianity” as internally homogeneous), but I need to omit most of the variables in order to get anything done in finite time.
This started with the phrase “if Christianity is wrong”; are you saying that this was not meant to imply anything along the lines of “if Christian morality is wrong”, that it was meant entirely as an empirical proposition, holding moral values constant? [edit: …holding terminal moral values constant?]
Oh! I see. :3 Yes, that is what I’m saying. If I wasn’t Christian, I certainly wouldn’t start murdering people.
Interesting.
Do you believe, then, that God commands a thing because it is good, rather than that a thing is good because God commands it?
Yes and no. :3 This is one of those “large inferential distances” things, but I’ll take a stab at explaining.
First, there are laws that God is bound to; laws of morality, not just laws of physics, although I think He’s also, in all probability, bound by the laws of physics (not necessarily as we understand them). This is evidenced by the number of times that God has told us that He is “bound”; if He did not follow these rules, He would “cease to be God”.
On the other hand! God gave rules to the Jews (a la all of Deuteronomy) that do not apply to modern-day Christians, because Jesus’ sacrifice “fulfilled” that law. God gives different commands at different times to different people: for example, God has at various times in history endorsed polygamy for various peoples, but He has indicated that polygamy outside His explicit instructions is sinful (cf. Jacob 2, D&C 132).
So: Everything that God commands us to do is Good, but not everything that is Good is something that God has explicitly commanded us to do.
Is reviving dead threads frowned upon here? That was an incredibly insightful comment to me because it explains my deconversion (from Catholicism) and Leah Libresco’s conversion to it (she has a blog on patheos called unequally yoked)*. I wonder how general this is?
*Status is obviously defined by the person whose group affiliation is changing. The high status atheists that changed my desired group affiliation were some atheists on debate.org, who were a lot more like me than any catholics I had met. The high status Catholics that changed Leah’s desired group affiliation were her friends, the people in her debating club and her Catholic boyfriend, whom she went to mass with (willingly) for more than a year.
As wedrifid said, reviving “dead threads” is fully acceptable and even encouraged in many occasions, AFAICT.
The one thing to be careful of is to enter argument mode or ask questions or offer specific, targeted insight to a particular poster on a very old post. Many of us have wasted some time early on by answering the questions or debating the assertions of an old comment originally made on Overcoming Bias before the transfer and where the author is long gone or never came to LessWrong in the first place.
No, by all means go ahead and comment wherever you have something to say.
That is what happened to me.
This reminds me of teaching. I think good teachers understand short inferential distances at least intuitively if not explicitly. The ‘shortness’ of inference is why good teaching must be interactive.
I think Vygotsky’s expression “zone of proximal development” means “one inferential step away”, so in theory professional teachers should understand this. I prefer to imagine knowledge like a “tech tree” in a computer game.
When teaching one student, it is possible to detect their knowledge base and use their preferred vocabulary. I remember explaining some programming topics to a manager: source code is like a job specification; functions are employees; data are processed materials; exceptions are emergency plans.
Problem is, when teaching the whole class, everyone’s knowledge base is very different. In theory it shouldn’t be so, because they all supposedly learned the same things in recent years, but in reality there are huge differences—so the teacher basically has to choose a subset of the class as the target audience. Writing a textbook is even more difficult, when there is no interaction.
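One way to make the “tech tree” picture concrete is to treat knowledge as a prerequisite graph and inferential distance as the length of the shortest chain of one-step explanations from what the student already knows to the target. The graph below is entirely made up for illustration, and it simplifies by assuming any one known parent concept is enough to teach the next (real tech trees often require all prerequisites):

```python
from collections import deque

# Hypothetical prerequisite graph: concept -> concepts teachable in one step from it.
unlocks = {
    "counting": ["arithmetic"],
    "arithmetic": ["algebra", "fractions"],
    "algebra": ["calculus"],
    "fractions": ["probability"],
    "probability": ["bayes_rule"],
    "calculus": [],
    "bayes_rule": [],
}

def inferential_distance(known, target):
    # Breadth-first search: the fewest one-step explanations needed to reach
    # the target concept from the student's current knowledge.
    frontier = deque((concept, 0) for concept in known)
    seen = set(known)
    while frontier:
        concept, steps = frontier.popleft()
        if concept == target:
            return steps
        for nxt in unlocks.get(concept, []):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, steps + 1))
    return None  # unreachable from what the student currently knows

print(inferential_distance({"counting"}, "bayes_rule"))     # 4 steps away
print(inferential_distance({"probability"}, "bayes_rule"))  # 1 step: the zone of proximal development
```

The whole-class problem is then that each student has a different `known` set, so no single starting point is one step away for everyone.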
Psychohistorian: it depends on what you mean by “simple” and “explanation”. The sense in which “it’s the simplest explanation” is a powerful argument for something is not one in which “God did it” is the simplest explanation for anything.
Eliezer_Yudkowsky: I’ve seen the kinds of failures of explanation you refer to, and there’s also the possibility that the explainer just isn’t capable of explaining all of the inferential steps because he doesn’t know them. In that case, the explainer is basically “manipulating symbols without understanding them”. This is why I’ve formulated the following principle (sort of a corollary to what you’ve argued here):
“If you can’t explain your idea/job/research to a layman, given enough time, and starting from reference to things he already understands, you don’t understand it yourself.”
That seems so simple as to be tautological. After all, you were a layman yourself once. Ideas/jobs/research don’t spring whole-spun from the ether. You have to be led along that same path yourself—either by a teacher, or by your own mind bumping along down dark corridors.
But it’s not true. Consider by analogy: if you can’t explain something to a 4-year-old, you don’t understand it yourself. After all, you were a 4-year-old once yourself.
No, actually, sometimes you can’t explain something to someone because you don’t have a good enough understanding of their mental processes. It doesn’t matter if you once experienced those same mental processes; the relevant memories of that time are very likely lost to you now. Explaining math to novices is a different skill than understanding math. It requires the ability to figure out why the other person has got it wrong and what they need to hear. That isn’t a mathematical skill.
A distinguished math professor is probably worse at explaining arithmetic to 8-year-olds than an experienced mathematics educator is, but that doesn’t mean the latter has the better understanding of math. They just have a better understanding of 8-year-olds.
I have experienced this problem before—the teacher assumes you have prior knowledge that you just do not have, and all of what he says afterwards assumes you’ve made the logical leap. I wonder to what extent thoughtful people will reconstruct the gaps in their knowledge assuming the end conclusion is correct and working backwards to what they know in order to give themselves a useful (but possibly incorrect) bridge from B to A. For example, I recently heard a horrible biochem lecture about using various types of protein sequence and domain homology to predict function and cellular localization. Now, the idea that homology could be used to partially predict these things just seemed logical, and I think my brain just ran with the idea and thought about how I would go about using the technique, and placed everything he said piece-wise into that schema. When I actually started to question specifics at the end of the lecture, it became clear that I didn’t understand anything the man was saying at all outside of the words “homology” and “prediction”, and I had just filled in what seemed logical to me. How dangerous is it to try to “catch up” when people take huge inferential leaps?
Yes, this is good stuff, I wish I could identify the inferential gaps when I communicate!
Silas, aren’t there some things it is simply impossible for some people to understand?
Yes (maybe?), but that lends no argument against Silas’ corollary.
If you cannot explain, then you do not understand.
Therefore: If you do understand, then you can explain.
If no one can understand, then the antecedent in the above is false, meaning that we cannot give the consequent any truth value.
TGGP: Yes for people below some IQ threshold. No for someone of the same IQ as the explainer.
(I probably should have added the intelligence criterion the first time around, I guess, but I was simplifying a bit.)
This is an excellent post, Eliezer!
Taking this phenomenon into consideration not only gives me cause to go back over my own teaching technique (of a rather specialized trade) and make sure I am not leaving out any steps that seem obvious to me (the specialist), but, like Laura, it helps me to understand times when I was baffled by a speaker or writer whose tone implied I’d be an idiot not to follow along easily.
When I write for a very bright “puzzle-solving-type” audience, I do the mental equivalent of deleting every fourth sentence or at least the tail part of every fourth sentence to prevent the reader from getting bored. I believe that practice helps my writings to compete with the writings around them for the critical resource of attention. There are of course many ways of competing for attention, and this is one of the least prejudicial to rational thought. I recommend this practice only in forums in which the reader can easily ask followup questions. Nothing about this practice is incompatible with the practices Eliezer is advocating. This week I am experimenting with adding three dots to the end of a sentence to signal to the reader the need mentally to complete the sentence.
So, what sentence did I delete from the above? A sentence to the effect that I only do this for writing that resembles mathematical proof fairly closely: “Suppose A. Because B, C. Therefore D, from which follows E, which is a contradiction, so our original assumption A must be false.”
After writing a first draft, I go back and add a lot more words than I had saved with the “do not bore the reader” practice. E.g. I add sentences explicitly to contradict interpretations that would lead to my being dismissed as hopelessly socially inept, eccentric or evil. Of course because I advocate outlandish positions here, I still get dismissed a lot.
Richard, you may or may not care that having read the above my willingness to read anything you write in future has somewhat decreased.
I would add, Richard, that writing “dear reader” on a medium like this comes off as patronizing.
Some of your claims about the EEA are counterintuitive to me. Basically, it’s not obvious that all information not strictly private would have been public. I’m thinking, for example, of present-day isolated cultures in which shamans are trained for several years: surely not all of their knowledge can be produced in a publicly comprehensible form. There must be a certain amount of “Eat this herb—I could tell you why, but it would take too long to explain”. Or so I imagine.
So how much of your description of knowledge in the EEA is your guesstimation, and how much is the consensus view? And where can I find papers on the consensus view? My Google-fu fails me.
I present to you Exhibit A from the field of computer programming.
I find an easy way to get some of the complicated inferential jumps for free is to find a similar set of inferential jumps they have made in a similar subject. It is much easier to correct a “close” inferential jump than it is to create a new one out of thin air.
Example: When discussing the concept of programming you can use the concept of an assembly line to get their head into a procedural mode of thinking. Once they think about an object visiting a bunch of stations in a factory you can replace “object” with “program” and “station” with “line of code.” They still have no idea how programming works, but they can suddenly create a bunch of inferential jumps based on assembly lines.
In my experience, they now start asking questions about programming as related to assembly lines and you can fill in the gaps as you find them.
“So what happens at the end of the line?”
“Well, the program generally loops back around and starts over.”
”Oh. So it follows the same line forever?”
“Not necessarily. Sometimes the line takes a detour and heads off into a new area of the plant for awhile. But it generally will come back to the main assembly line.”
”But what’s the point? Like, how does that make my computer run?”
“Think of the computer like the company. The company owns a whole bunch of assembly lines all over the place and, periodically, it will ask a certain plant to start up and keep running. One of the stations in the assembly line is something like, ‘Give report to company’. The company looks at the report, realizes it is a visual report, and hands it to the assembly line that processes visual reports. That assembly line takes the report, breaks it down into RGB pixels, and puts it on the monitor for you to see. For that to happen, all of these programs have to keep spinning on their assembly lines and doing the work at each station and keep sending reports back to the company. You don’t have to worry about all of the lines because the computer is doing it for you.”
”Wow, that’s complicated.”
″Yeah, it can get pretty crazy. As a programmer, I design the assembly lines.”
Or whatever.
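A toy sketch of the loop being described, in Python rather than any particular real system: the program goes around its “assembly line” forever, occasionally detours, and hands a “report” (here just a printed frame label) back to the “company”. Every name in it is invented for the analogy.

```python
import time

def produce_report(tick):
    # One station on the line: do some work and package up the result.
    return f"frame {tick}"

def hand_report_to_company(report):
    # The "company" (operating system / display pipeline) takes the report
    # and puts it where the user can see it; here we just print it.
    print(report)

def main_assembly_line():
    tick = 0
    while True:                            # the line loops back around and starts over
        if tick % 5 == 0:
            print("detour: housekeeping")  # sometimes the line heads off elsewhere for a while
        report = produce_report(tick)      # a station produces a report
        hand_report_to_company(report)     # "give report to company"
        tick += 1
        time.sleep(0.1)                    # pace the line

if __name__ == "__main__":
    main_assembly_line()  # runs until interrupted (Ctrl+C)
```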
I like how it took me until the end to realise you’d re-invented the concept of analogies :-)
And I had to read past the end to realize that...
As someone who has done (some) teaching, I think this is absolutely correct. In fact, the most difficult thing I find about teaching is trying to find the student’s starting knowledge, and then working from there. If the teacher does not go back enough ‘inferential steps’, the student won’t learn anything—or worse, they might think they know when they don’t.
Excellent stuff.
Now that I think of it, this reminds me of something Richard Dawkins used to say at some talks: that we (the modern audience) could give Aristotle a tutorial. Being a fantasist myself, I’ve sometimes wondered how that could be possible. Leaving aside the complications of building a time machine (I leave that to other people), I wondered how it would be to actually meet Aristotle and explain to him some of the things we now know about life, the universe & everything.
First of all, I’d have to learn ancient Greek, of course, or no communication would be possible. That would be the easy (and the only easy) part. More complicated would be that, to teach anything modern to Aristotle, one would have to teach an incredible amount of previous stuff. That is, one would have to take quite a large number of inferential steps. If I wanted to explain, for example, the theory of evolution, that would require a lot of anatomy, geography, zoology, botany, and even mathematics and philosophy. One would have to be a true polymath to achieve the feat. It’s not that we don’t know more about the universe than Aristotle; it is that to cross the inferential ‘gap’ between Aristotle and us would require an inordinate amount of knowledge.
Maybe a good metaphor is based on Dennett’s crane idea: we develop ideas that help us reach higher levels of understanding, but as soon as we reach those upper levels we discard them to build new ones for higher levels. To help someone on the floor, one has to ‘rebuild’ these old cranes no longer in use.
Actually, evolution might be the easiest one. It’s inevitable if you have variation and selection. It’s a really pretty theory.
I don’t know how hard it would be to convey that observation and experimentation will take you farther than just theorizing.
If I brought back some tech far advanced over Aristotle’s period (and I wonder what would be most convincing), it might add weight to my arguments.
And personally, even if I had a time machine and the knowledge of ancient Greek, I don’t know how hard it would be to get him to listen to a woman.
One more thing beside a time machine, knowledge of ancient Greek, and a stash of cool stuff—the ability to argue well enough to convey your ideas to Aristotle and convince him you’re right.
This is probably at least as hard as it sounds.
I would sort of expect any woman who showed up with apparently magical powers to be put into the goddess category. Even someone like Aristotle, who probably didn’t believe that gods and goddesses literally existed, would be culturally conditioned to treat a woman who appeared to have super-powers with some respect.
How would you implement that? What do we have the tech to build today for a reasonable outlay of money (less than a million euros, for example) that could blow minds in that era?
A plastic bottle out of the trash. It’s transparent but flexible and almost weightless. See how well the lid has been made? It makes a water-tight seal.
It might be the most valuable object in Greece.
And then when you’ve got his attention, show him decimal notation.
And stirrups for his horse. And lances.
Once he’s hooked, show him why things float. And how a ball rolling down an inclined plane moves 1, 4, 9, 16 as it accelerates.
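(To spell out the arithmetic behind that demonstration: under constant acceleration the distance covered is d(t) = a·t²/2, so at t = 1, 2, 3, 4 the total distances stand in the ratio 1 : 4 : 9 : 16, and the distances covered in each successive interval run 1, 3, 5, 7.)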
Show him Cartesian geometry. And how to play go with lines scratched in the ground and coloured stones. Make a recorder and play him some songs.
He’ll teach you Greek.
Show him how to send messages using flashing mirrors. Show him Playfair’s cipher. Perspective drawing. How to make a magnifying glass. Newton’s cradle. Make a model boat out of bronze.
I suspect in a day in Ancient Greece, you’d see so many easily solved problems that my list would look naive. You don’t need modern technology. You need the things that were discovered just after the mediaevals recovered what the Greeks already knew.
This is one of the more interesting approaches to the Connecticut Yankee in King Arthur’s Court (as I dub this species of thought problem) - that you don’t need any special preparation because your basic background means that you’ll spend the rest of your life in the past rushing around yelling ‘don’t do that, you idiot, do it this way!’
Diplomacy might actually be the best preparation.
Oh god. That is actually just humongous in its possible effect on warfare.
I mean add simple ciphers to it and you literally add another whole dimension to warfare.
Communication lines set up this way are almost like adding radio. Impractical in some situations, but used in regional warfare with multiple engagements? This is empire-forming stuff: reflective stone plus semi-trivial education equals dominance.
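To make “simple ciphers” concrete, the Playfair cipher mentioned a few comments up is about the right level of semi-trivial education: a memorized keyword, a 5×5 letter square, and a rule for swapping letter pairs. A rough sketch of the encryption side in Python (the keyword and message are placeholders):

    def key_square(keyword):
        # Build the 5x5 Playfair square: keyword letters first (J folded into I),
        # then the rest of the alphabet, no repeats.
        seen = []
        for ch in keyword.upper() + "ABCDEFGHIKLMNOPQRSTUVWXYZ":
            ch = "I" if ch == "J" else ch
            if ch.isalpha() and ch not in seen:
                seen.append(ch)
        return [seen[i:i + 5] for i in range(0, 25, 5)]

    def find(square, ch):
        for r, row in enumerate(square):
            if ch in row:
                return r, row.index(ch)

    def digraphs(text):
        # Split the message into letter pairs, breaking doubled letters with X
        # and padding the end with X if needed.
        letters = [("I" if c == "J" else c) for c in text.upper() if c.isalpha()]
        pairs, i = [], 0
        while i < len(letters):
            a = letters[i]
            b = letters[i + 1] if i + 1 < len(letters) else "X"
            if a == b:
                pairs.append((a, "X")); i += 1
            else:
                pairs.append((a, b)); i += 2
        return pairs

    def encrypt(text, keyword):
        sq = key_square(keyword)
        out = []
        for a, b in digraphs(text):
            ra, ca = find(sq, a)
            rb, cb = find(sq, b)
            if ra == rb:                  # same row: take the letters to the right
                out.append(sq[ra][(ca + 1) % 5] + sq[rb][(cb + 1) % 5])
            elif ca == cb:                # same column: take the letters below
                out.append(sq[(ra + 1) % 5][ca] + sq[(rb + 1) % 5][cb])
            else:                         # otherwise swap the rectangle's corners
                out.append(sq[ra][cb] + sq[rb][ca])
        return " ".join(out)

    print(encrypt("attack at dawn", "ARISTOTLE"))

Nothing here needs machinery; the same square can be scratched in wax or sand. The code is only to show that the whole procedure is a few dozen mechanical rules.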
Heck, you don’t need a million Euros. I could easily blow minds with 100.
A simple Zippo lighter should do the trick. So could an adjustable-beam flashlight, for that matter. A music player with earphones or speakers is another obvious choice. Candy bars maybe? They’d be shocked you brought ambrosia...
Pretty much anything that emits light, sound, heat, cold, etc. is likely to have some serious impact. Remember superstimulus.
Ultimately, I suppose the key question is, “how long do you need to keep up the act?”
With a budget closer to 5,000EUR, access to firearms, and enough willingness to use Dark Arts, I could probably keep it up for a decade or more. Possibly even pass on knowledge to selected disciples who would likewise guard these technological secrets, even as they rule the ignorant peasants.
If, on the other hand, our purpose is as originally stated—prove to the scholars of the time that I have knowledge worthy of them becoming my disciples so I can impart as much knowledge to them as possible—I probably won’t need much more than the superstimuli I described and a couple of afternoons. Something decidedly useful could cement this, and still on budget: a map of the world, a geographically appropriate taxonomy book, and a wristwatch (doubles as a nautical navigational aid) would be enough. And I don’t think I’ve even reached 100EUR yet, all told :)
What we can achieve with today’s technology is so marvelous, it’s amazing how ordinary it seems to us. One day I turned on the faucet at my house and just marveled at the incredible and unlikely wonder of having fresh drinking water at practically limitless capacity being instantly transported to my residence, at my whim.
This isn’t just magic. It’s better than magic.
A gun could blow minds in any era.
I’m sorry, I couldn’t help myself.
I don’t know what it would take to pass as a goddess, but a stash of cool stuff could be impressive. An iPad (with an appropriate power source—what would it take to power it from a water wheel?). Stainless steel blades. A Jacquard loom. What else?
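On the water-wheel question, a rough back-of-envelope (assuming an iPad wants something like 10 W to charge): hydropower is roughly P ≈ η·ρ·g·Q·h, so a half-efficient wheel fed by 2 litres of water per second falling through 1 metre gives about 0.5 × 1000 × 9.8 × 0.002 × 1 ≈ 10 W. The wheel itself is the easy part; the generator, the magnets, the insulated copper wire, and something to regulate the output down to USB voltage would probably all have to come back with you.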
A hunting or sniper rifle, a pistol, a remote controlled helicopter with wireless video, broad spectrum antibiotics, powerful painkillers, explosives.
A better list than mine. Do you think you’d need to go back with a group just to not have your stuff stolen?
If you want to bring back something useful for the educational project rather than just being impressive, a batch of slide rules would be good.
Could ancient Greeks make printing presses if they had designs for them? I’m sure they could at least do wood block printing.
I think that would somewhat depend on how convinced people were by your ‘godlike’ powers. That’s where modern day weaponry would prove quite effective I’d imagine. A taser would probably be useful as a bullet wound would be recognizable physical damage whereas the effect of a taser would probably seem like the power of the gods. If I was on my own I’d probably want body armor, motion sensors and other defensive equipment as well to be on the safe side.
Mixing healing powers in would be just as valuable in self-preservation as demonstrating offensive capability. You would probably want to obfuscate the nature of your ‘healing magic’ so that people would not easily be able to replicate it if they managed to steal some of your stock of medical supplies. Special pills that had to be given in combination to be effective would be useful.
If you’re planning to teach a scientific worldview, it might be well to not be too godlike.
True, but at least initially personal survival and not getting all your stuff nicked would probably require some compromises on teaching science. Once you had an established power base and some loyal local followers you could start to focus on the teaching.
You’re right—evolution might be easier than, say, how an iPhone works (not that an iPhone would work very well in Ancient Greece, or for very long, anyway). Having some high tech to show to good old Aristotle might convince him you come from a very strange land, and maybe he would want to hear more of what you have to say instead of just dismissing you as a lunatic.
But imagine how much you would have to explain to make him even dimly aware of the way an iPhone works! Electronics, electricity, computation, satellites and astronomy (goodbye lunar sphere), calculus, chemistry, physics… I can barely think of all the relevant topics!
Of course, as you point out, misogyny would be a great obstacle too. One more of the ‘steps’ that separate ancient peoples from modern societies.
What you want to teach depends on what you’re trying to accomplish. I don’t think there’s much point in trying to give Aristotle an overview of modern scientific conclusions.
Assuming we want to accelerate technological progress, I’d rather teach him scientific method, decimal notation, evolution, and maybe what Feynman said (iirc) was the most important conclusion—that matter is made of tiny bits of elements. I don’t know what other specific subjects might be a good idea. Bayes? Calculus?
I don’t know what would be convincing experiments for atoms.
One more thing I’d want to teach him: that you can learn a lot by doing careful measurement and thinking about the results.
I don’t know what Aristotle would come up with, given all that—he was very smart.
From a practical point of view teaching the germ theory of disease would probably have the most immediate benefit.
Using water droplets as rudimentary microscopes.
How big a jump would it be to give them lens-making tech?
You could probably explain geometrical optics without too much trouble.
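For a rough sense of the numbers (standard geometrical optics, nothing exotic): a small water droplet acts as a ball lens with focal length close to its own diameter, f ≈ n·D / (4(n − 1)) ≈ D for water with n ≈ 1.33, and a simple magnifier gives an angular magnification of roughly M ≈ 25 cm / f. So a 2 mm droplet held in a wire loop gets you on the order of 100×, enough to show things no one in Aristotle’s world had ever seen; grinding a good glass lens is the harder jump.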
Assuming you convinced him of the epistemological primacy of experiment, I see two obvious paths (both sketched briefly below):
The kinetic theory of gases, particularly the ideal gas law;
Stoichiometry in chemistry—for example, electrolysis of water.
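A sketch of what each path buys you: for the gases, the directly checkable piece of the ideal gas law PV = nRT is Boyle’s law; trap some air in a closed tube and show that, at a fixed temperature, halving the volume doubles the pressure. For the chemistry, decomposing water gives hydrogen and oxygen in a fixed 2 : 1 ratio by volume, 2 H₂O → 2 H₂ + O₂, which is hard to explain unless matter comes in discrete units that combine in whole-number proportions.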
I would add Brownian motion to that list.
See also: www.justfuckinggoogleit.com, www.lmgtfy.com, Reddit anti-“repost” rage, and comments like this one that appear in practically every online community.
This is the reason it’s a Bad Thing that so many of the deeper concepts of Mormonism have become public knowledge. The first question I get asked, upon revealing that I’m a Mormon, is often, “So, you believe that if you’re good in this life, you’ll get your own planet after you die?” There are at least three huge problems with this question, and buried deep beneath them, a tiny seed of truth. But I can’t just say “The inferential distance is too great for me to immediately explain the answer to that question. Let me tell you about the Plan of Salvation, and we’ll move from there,” because that sounds like I’m Trying To Convert You, which is a Scary and a Bad Thing, because… out of explanations come brainwashing. Or something.
2 Nephi 28:30:
The “your own planet” thing isn’t a huge selling point that you’d want to lead with?
Ha! I’d never thought of it like that! :3 Unfortunately, I have a problem with the idea of “selling” a religion. Just because you like an idea doesn’t mean it’s true...
Besides, the type of person who bothers saying “You get your own planet?” instead of “You’re religious?” usually views getting your own planet as The Ultimate Sacrilege, so it’s not the best selling point, no. :/
This is one of those things that I realize is so obvious once I thought about it, but until it was pointed out to me, I would have never seen it.
http://harvardmagazine.com/2012/03/twilight-of-the-lecture
If each step of inference has some probability of being faulty, then longer chains of inference are exponentially less likely to be valid, with the trustworthy chain length proportional to the logarithm of the required overall fidelity divided by the logarithm of the per-step fidelity. Long hand-waved inferences can have unbelievably low probability of correctness, and thus be incredibly weak as evidence.
Furthermore, informal arguments very often rely on ‘I can’t imagine an alternative’ at multiple of their steps, and this move has itself proven unreliable. It is also too easy to introduce, deliberately or otherwise, a huge number of implicit assumptions, all of which must be true for the argument to be valid.
Given this logarithmic dependence on fidelity, even dramatically more reliable informal arguments do not permit dramatically longer inference chains. One has to use formal methods to produce long chains of inference.
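To make the scaling concrete (a back-of-envelope, assuming the steps fail independently): if each step is valid with probability p, an n-step chain is valid with probability p^n. Demanding overall validity of at least ε allows n ≤ ln(ε) / ln(p), which is roughly ln(1/ε) / (1 − p) when p is close to 1. With p = 0.95 per step, for instance, the chain is more likely wrong than right after about ln(0.5) / ln(0.95) ≈ 13 steps.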
The lack of this knowledge got me a nice big “most condescending statement of the day award” in lab a year ago.
I don’t think this is quite right, but taking up the challenge may be helpful when writing:
Daniel Dennett
“I know [evolution] sounds crazy—it didn’t make sense to me at first either. I can explain how it works if you’re curious, but it will take me a long time, because it’s a complicated idea with lots of moving parts that you probably haven’t seen before. Sometimes even simple questions like ‘where did the first humans come from?’ turn out to have complicated answers.”
Of course it’s not actually a simple question; it’s really a broad inquiry. In fact it doesn’t even need to have an answer, and even when it does, the answer usually alters the question slightly… the hard part is asking the right questions, not finding the answers.
(It just dawned on me that this was the whole point of The Question in The Hitchhiker’s Guide to the Galaxy, thanks for that.)
“This is going to take a while to explain.”
Did I do it? Did I win rationalism?!
I’d go with “echo chambers.” Or if I weren’t feeling pedantic, I’d say “There’s a reason this concept takes a whole semester to teach.”
“If you understood everything I said, you’d be me” ― Miles Davis
Expecting short inferential distances: wouldn’t that be a case of rational thought producing beliefs which are themselves evidence? :P Manifested in over-explaining to the point of cognitive dissonance? How about applying Occam’s Razor and going the shorter distance: improve the clarity of the source by means of symbolism, through a reflective correction (as if to compensate for the distortion in the other lens). To me it means steel-manning the opponent’s argument to the point where it becomes non-falsifiable. See, the fact that science works by falsification and pseudoscience by verification puts them in different paradigms that will only be reconciled by verification alone. Meaning also, science will have value because it can predict, so who cares about its inner workings of reason! This makes sense to me, because right now we seem to rank our intelligence as superior to that of a virus, which is a problem of underestimating your enemy :). We are Neurotribes; autistic kids, for example, think in pictures; a different type of intelligence may be emerging, maybe one without beliefs :)
“It is a profoundly erroneous truism, repeated by all copy-books and by eminent people when they are making speeches, that we should cultivate the habit of thinking of what we are doing. The precise opposite is the case. Civilization advances by extending the number of important operations which we can perform without thinking about them. Operations of thought are like cavalry charges in a battle — they are strictly limited in number, they require fresh horses, and must only be made at decisive moments.” Alfred North Whitehead
that last sentence ha
I think this concept may be fundamental in explaining the mechanics of cohesion and division in society. It could help us understand why politics tends to get more and more divided. Especially on the internet, but also IRL, people tend to confirm their ideas rather than confront them with different ones, as first observed by P. C. Wason (Peter C. Wason, “On the failure to eliminate hypotheses in a conceptual task,” Quarterly Journal of Experimental Psychology, 1960) and confirmed since. One could argue that reinforcing one’s ideas (whether those of a person or a community) by building a shield of protective arguments around them, whether those arguments are solid and rational or simply whatever deters enemy attacks, is comparable to building ever more steps between those ideas and the people trying to change or oppose them.
When it comes to people who have radicalized themselves to the point that they refuse to accept reality, in the sense of something we all agree to build from (global warming is real and man-made, there is no such thing as “races” in humanity, if it’s raining it is not not raining, etc.), one could say they have built not inferential steps but an inferential wall. Conspiracy theorists, for instance, or at least some of them, share some mechanisms with people suffering from schizophrenia, in the sense that they will actually take arguments against their position as reinforcing the very position being contradicted, thinking that counterarguments are just trying to prevent them from discovering the truth, or that everyone who disagrees with them is part of the conspiracy. It is as if they are building new steps while one tries to climb the existing ones.
Nevertheless, this grim presentation of mine shouldn’t undermine the one great thing about the concept of inferential distance: if the distance can be divided into steps, we can all reach each other, even people who are staircases away.
It’s been nearly a century since relativity and quantum physics took the stage, and we have gathered considerably more data, and filled in a great many areas since then, yet we are still waiting for the next big breakthrough.
The problem may be in the way we approach scientific discovery.
Currently, when a new hypothesis is advanced, it is carefully considered as to how well it stands on its own and whether, eventually, it is worthy of being adopted as a recognized theory and becoming a part of our overall model of how everything works.
This might be the problem. Suppose the next advance cannot be made in this fashion? By this I am proposing that the next breakthrough may involve not a single new concept that can be tested independently for worthiness, but several that cannot be tested individually.
For example, when you build a stone wall, you can test its strength and stability with each new stone placed. When you build a stone arch, attempting to test its strength and stability after each piece is placed will get you labeled as incompetent, since the uppermost pieces will always fall if you merely try to set them in place one at a time. Insisting that several pieces must be placed at once before the structure can be tested is necessary, yet in theoretical physics it will get you labeled as a crackpot.
For example, suppose one were to approach a transportation company with a radical new idea on how to improve airplanes, and even air travel in general. The company would want to test it and validate the concept before adoption. But suppose you told them that the idea could not even be evaluated unless it simultaneously includes the research and development of radical new ideas in such seemingly unrelated fields as personnel management, inventory control, and submarine transports. Chances are they would politely (or perhaps not politely) decline any further involvement.
Yet that is precisely the problem with the Standard Model.
We have conundrums in explaining consciousness, the double-slit experiment, Schrödinger’s cat, the number of spatial dimensions required, the expansion of the universe, the acceleration of that expansion, dark matter, dark energy, quantum uncertainty, the speed of light, singularities, the Big Bang, the heat death (or Big Chill) of the universe, the Big Rip, gravity, entropy, and the list continues. Anyone who attempts to address more than one or two of these things at a time is likely to be dismissed at once as a crackpot.
Yet it is fairly widely believed that Einstein’s classic papers, submitted in today’s climate, would go straight to the crackpot slush pile.
There is also the problem that, for a proposed idea in physics to be given a hearing of any sort, advanced degree work in physics is almost invariably required, with appropriately degreed instructors and sponsors. A paid position in the field is very nearly a prerequisite as well. Further, given the preoccupation with string theory that has consumed so many of them, and so restricted the opportunities of those who are not adherents . . . I’ve heard there may be fewer than 200 individuals in the world employed as theoretical physicists who are not dedicated to string theory (which doesn’t seem to yield useful results in terms of advancing or redefining the Standard Model).
Additionally, the fact that they all go through a similar process to become recognized theoretical physicists almost certainly channels and colors their thinking on the subject, which is the same thing as saying that it limits them.