Taboo Your Words
In the game Taboo (by Hasbro), the objective is for a player to have their partner guess a word written on a card, without using that word or five additional words listed on the card. For example, you might have to get your partner to say “baseball” without using the words “sport”, “bat”, “hit”, “pitch”, “base” or of course “baseball”.
As soon as I see a problem like that, I at once think, “An artificial group conflict in which you use a long wooden cylinder to whack a thrown spheroid, and then run between four safe positions.” It might not be the most efficient strategy to convey the word ‘baseball’ under the stated rules—that might be, “It’s what the Yankees play”—but the general skill of blanking a word out of my mind was one I’d practiced for years, albeit with a different purpose.
Yesterday we saw how replacing terms with definitions could reveal the empirical unproductivity of the classical Aristotelian syllogism. All humans are mortal (and also, apparently, featherless bipeds); Socrates is human; therefore Socrates is mortal. When we replace the word ‘human’ by its apparent definition, the following underlying reasoning is revealed:
All [mortal, ~feathers, biped] are mortal;
Socrates is a [mortal, ~feathers, biped];
Therefore Socrates is mortal.
But the principle of replacing words by definitions applies much more broadly:
Albert: “A tree falling in a deserted forest makes a sound.”
Barry: “A tree falling in a deserted forest does not make a sound.”
Clearly, since one says “sound” and one says “not sound”, we must have a contradiction, right? But suppose that they both dereference their pointers before speaking:
Albert: “A tree falling in a deserted forest matches [membership test: this event generates acoustic vibrations].”
Barry: “A tree falling in a deserted forest does not match [membership test: this event generates auditory experiences].”
Now there is no longer an apparent collision—all they had to do was prohibit themselves from using the word sound. If “acoustic vibrations” came into dispute, we would just play Taboo again and say “pressure waves in a material medium”; if necessary we would play Taboo again on the word “wave” and replace it with the wave equation. (Play Taboo on “auditory experience” and you get “That form of sensory processing, within the human brain, which takes as input a linear time series of frequency mixes...”)
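As an illustrative aside, here is a minimal sketch of the move (the field names and membership tests are hypothetical, chosen only to make the point concrete): once each claim is spelled out as its own test, both statements can be true of the same event.

```python
# A sketch: Albert's and Barry's claims apply different membership tests
# to the same event, so spelled out, there is nothing left to contradict.

def generates_acoustic_vibrations(event):
    # Albert's test: does the event produce pressure waves in a medium?
    return event["pressure_waves"]

def generates_auditory_experience(event):
    # Barry's test: does anyone process those waves into an experience?
    return event["pressure_waves"] and event["listener_present"]

tree_falls_unheard = {"pressure_waves": True, "listener_present": False}

assert generates_acoustic_vibrations(tree_falls_unheard)      # Albert is right
assert not generates_auditory_experience(tree_falls_unheard)  # and so is Barry
```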
But suppose, on the other hand, that Albert and Barry were to have the argument:
Albert: “Socrates matches the concept [membership test: this person will die after drinking hemlock].”
Barry: “Socrates matches the concept [membership test: this person will not die after drinking hemlock].”
Now Albert and Barry have a substantive clash of expectations; a difference in what they anticipate seeing after Socrates drinks hemlock. But they might not notice this, if they happened to use the same word “human” for their different concepts.
You get a very different picture of what people agree or disagree about, depending on whether you take a label’s-eye view (Albert says “sound” and Barry says “not sound”, so they must disagree) or a test’s-eye view (Albert’s membership test is acoustic vibrations, Barry’s is auditory experience).
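By contrast, a matching sketch of the hemlock dispute (again with hypothetical predicates) shows a disagreement that survives the substitution: the two spelled-out tests make incompatible predictions about the same observation.

```python
# A sketch: here the two membership tests cannot both pass, whatever
# actually happens when Socrates drinks the hemlock.

def albert_test(person):
    return person["dies_after_hemlock"]

def barry_test(person):
    return not person["dies_after_hemlock"]

for actually_dies in (True, False):
    socrates = {"dies_after_hemlock": actually_dies}
    assert albert_test(socrates) != barry_test(socrates)  # exactly one is right
```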
Get together a pack of soi-disant futurists and ask them if they believe we’ll have Artificial Intelligence in thirty years, and I would guess that at least half of them will say yes. If you leave it at that, they’ll shake hands and congratulate themselves on their consensus. But make the term “Artificial Intelligence” taboo, and ask them to describe what they expect to see, without ever using words like “computers” or “think”, and you might find quite a conflict of expectations hiding under that featureless standard word. Likewise that other term. And see also Shane Legg’s compilation of 71 definitions of “intelligence”.
The illusion of unity across religions can be dispelled by making the term “God” taboo, and asking believers to say what it is they believe in; or by making the word “faith” taboo, and asking them why they believe it. Though mostly they won’t be able to answer at all, because it is mostly profession in the first place, and you cannot cognitively zoom in on an audio recording.
When you find yourself in philosophical difficulties, the first line of defense is not to define your problematic terms, but to see whether you can think without using those terms at all. Or any of their short synonyms. And be careful not to let yourself invent a new word to use instead. Describe outward observables and interior mechanisms; don’t use a single handle, whatever that handle may be.
Albert says that people have “free will”. Barry says that people don’t have “free will”. Well, that will certainly generate an apparent conflict. Most philosophers would advise Albert and Barry to try to define exactly what they mean by “free will”, on which topic they will certainly be able to discourse at great length. I would advise Albert and Barry to describe what it is that they think people do, or do not have, without using the phrase “free will” at all. (If you want to try this at home, you should also avoid the words “choose”, “act”, “decide”, “determined”, “responsible”, or any of their synonyms.)
This is one of the nonstandard tools in my toolbox, and in my humble opinion, it works way way better than the standard one. It also requires more effort to use; you get what you pay for.
“Nine innings and three outs” works much better to elicit “baseball”.
Not for somebody unfamiliar with the details of the rules of how to play. I would have guessed cricket.
In fact, thinking about EY’s definition—I think it fits better (for me) because I would be able to recognise a game of baseball after only watching a single game… even if I didn’t have anybody around to explain the rules to me.
When I read the post, I immediately thought: just say “home-run”! — I’ve been playing taboo for a long time, I’ve occasionally elicited the correct response from the other players by saying just one or two words :)
But that’s not the rationalist’s version of the game. The rationalist’s game involves seeing at a lower level of detail. Not thinking up synonyms and keywords that weren’t on the card.
As g mentions, your description also describes rounders. Even if you defined all the words in your description ever more precisely, you could still be thinking of a different game.
Presumably at some point you would discover that, when your expectations of what was going to happen differed. Depending on what you’re discussing, that could happen very soon, or not for a long time.
How does the rationalist in the game know when to stop defining and start adding characteristics/keywords?
To prevent the description from describing rounders, add something like “popular among American men.”
Yeah, but when playing actual Taboo “rational agents should WIN” (Yudkowsky, E.) and therefore favour “nine innings and three outs” over your definition (which would also cover some related-but-different games such as rounders, I think). I suspect something like “Babe Ruth” would in fact lead to a quicker win.
None of which is relevant to your actual point, which I think is a very good one. I don’t think the tool is all that nonstandard; e.g., it’s closely related to the positivist/verificationist idea that a statement has meaning only if it can be paraphrased in terms of directly (ha!) observable stuff.
Good point, especially since the most common words become devalued or politicized (“surge”, “evil”, “terror” &c.) but...
So what was your score?
(Did you cut your enemy?)
Sounds interesting. We must now verify if it works for useful questions.
Could someone explain what FAI is without using the words “Friendly”, or any synonyms?
An AI which acts toward whatever the observer deems to be beneficial to the human condition. It’s impossible to put it into falsifiable criteria if you can’t define what is (and on what timescale?) beneficial to the human race. And I’m pretty confident nobody knows what’s beneficial to the human condition over the longest term, because that’s the problem we’re building the FAI to solve.
In the end, we will have to build an AI as best we can and trust its judgement. Or not build it. It’s a cosmic gamble.
Easy PK. An optimization process that brings the universe towards the target of shared strong attractors in human high-level reflective aspiration.
“Why wouldn’t you just say “An artificial group conflict in which you use a long wooden cylinder to whack a thrown spheroid, and then run between four safe positions”?”
...because you might not be a total total nerd :-)
In one class in high school, we were supposed to make our classmates guess a word using hand gestures. I drew letters in the air.
This strategy can’t be that nonstandard, as it is the strategy I’ve always used when a conversation gets stuck on some word. But now that I think about it, people usually aren’t that interested in following my lead in this direction, so it isn’t very common either.
Then declaring the intention to create such a thing takes for granted that there are shared strong attractors.
What was that about the hidden assumptions in words, again?
Three separate comments here:
1) Eliezer_Yudkowsky: Why wouldn’t you just say “An artificial group conflict in which you use a long wooden cylinder to whack a thrown spheroid, and then run between four safe positions”?
To phrase brent’s objection a little more precisely: Because people don’t normally think of baseball in those terms, and you’re constrained on time, so you have to say something that makes them think of baseball quickly. Tom_Crispin’s idea is much more effective at that. Or were you just trying to criticize baseball fans for not seeing the game that way?
2) Barry: “A tree falling in a deserted forest does not match [membership test: this event generates auditory experiences].”
But that doesn’t help either as a scientific test, since you reference qualia. Er, I mean, that doesn’t help either as a scientific test, since you reference something that [membership test: incommunicable except by experience, non-interpersonally-comparable].
3) I’ve used the taboo method recently. On a libertarian mailing list, I claimed the economic calculation argument favors intellectual property because its absence creates a kind of calculational chaos. Because the debate devolved very quickly into multiple definitional arguments, I said something like, “Okay, argue your position without using the terms ‘[economic] good, scarce, or property.’ I’ll start [...]” No takers =-(
The game is not over! Michael Vassar said: “[FAI is ..] An optimization process that brings the universe towards the target of shared strong attractors in human high-level reflective aspiration.”
For the sake of not dragging out the argument too much, let’s assume I know what an optimization process and a human are.
What are “shared strong attractors”? You can’t use the words “shared”, “strong”, “attractor” or any synonyms.
What’s a “high-level reflective aspiration”? You can’t use the words “high-level”, “reflective”, “aspiration” or any synonyms.
Caledonian said: “Then declaring the intention to create such a thing takes for granted that there are shared strong attractors.”
We can’t really say if there are “shared strong attractors” one way or the other until we agree on what that means. Otherwise it’s like arguing about whether falling trees make “sound” in the forest. We must let the taboo game play out before we start arguing about things.
Shared strong attractors: values/goals that more than [some percentage] of humans would have at reflective equilibrium.
high-level reflective aspirations: ditto, but without the “[some percentage] of humans” part.
Reflective equilibrium*: a state in which an agent cannot increase its expected utility (eta: according to its current utility function) by changing its utility function, thought processes, or decision procedure, and has the best available knowledge with no false beliefs.
*IIRC this is a technical term in decision theory, so if the technical definition doesn’t match mine, use the former.
Surely if you could change your utility function you could always increase your expected utility that way, e.g. by defining the new utility function to be the old utility function plus a positive constant.
I think Normal_Anomaly means “judged according to the old utility function”.
EDIT: Incorrect gender imputation corrected.
I do mean that, fixed. By the way, I am female (and support genderless third-person pronouns, FWIW).
Thank you, that makes sense to me now.
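As an illustrative aside, here is a tiny worked sketch (with made-up numbers) of why it matters that the judging is done by the agent’s current utility function rather than by the proposed replacement:

```python
# A sketch: switching to "old utility plus a constant" looks like a gain
# only if the *new* function does the judging.

OUTCOMES = ("apple", "pear")

def old_utility(outcome):
    return {"apple": 1, "pear": 3}[outcome]

def new_utility(outcome):
    return old_utility(outcome) + 100  # the proposed self-modification

# Judged by the new function, every outcome scores higher...
assert all(new_utility(o) > old_utility(o) for o in OUTCOMES)

# ...but judged by the agent's current (old) function, nothing improves:
# the modification changes no decision, so expected old-utility is unchanged.
assert max(OUTCOMES, key=old_utility) == max(OUTCOMES, key=new_utility)
```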
I’d have to agree with PK’s protest. This isn’t Hasbro’s version of the game; you’re not trying to help someone figure out that you’re talking about a “Friendly AI” without using five words written on a card.
Oh, and there’s no time limit.
The second seems to be the crucial point...and problem....
Eliezer seems to want us to strike out some category of words from our vocabulary, but the category is not well defined. Perhaps a meta-Taboo game is necessary to find out what the heck we are supposed to be doing without. I’m not too bothered; grunting and pointing are reasonably effective ways of communicating. Who needs words?
You missed the point. It’s about 1) getting two people to confess their true meanings of the word ‘sound’, because each of them has a different meaning in mind in the forest-sound situation, 2) getting rid of empty labels, or at least of their illusion of inference, and upholding the empirical weight of a definition, 3) forgetting the ‘common usage’ idea, 4) other reasons that are not coming to mind yet
Edit: The next article after this discusses better why Taboo for rationalists helps
FAI is: a search amongst potentials which will find the reality in which humans best prosper.
The hemlock example demonstrates tcpkac’s point well. How do you decide to conclude that Albert and Barry expect different results from the same action? To me, it seems obvious that they should taboo the word hemlock, and notice that one correctly expects Socrates to die from a drink made from an herb in the carrot family, and the other correctly expects Socrates to be unharmed by tea made from a coniferous tree. But it’s not clear why Eliezer ought to have the knowledge needed to choose to taboo the word hemlock.
The hemlock example also suggests a step toward resolution. Let the two people who disagree design an experiment that would resolve the disagreement, to one or both of the following standards:
The level of detail necessary for a scientific paper.
Such that a third party could perform the experiment without asking any extra questions.
In the hemlock example, Albert and Barry would (hopefully) notice the problem when writing up the preparation of the drink. If the experiment can be carried out practically, then it becomes relatively easy.
Y’know, the ‘Taboo game’ seems like an effective way to improve the clarity of meaning for individual words—if you have enough clear and precise words to describe those particular words in the first place.
If there isn’t a threshold number of agreed-upon meanings, the language doesn’t have enough power for Taboo to work. You can’t improve one word without already having a suite of sufficiently-good words to work with.
The game can keep a language system above that minimum threshold, but can’t be used to bootstrap the system above that threshold. If you’re just starting out, you need to use different methods.
Julian Morrison said: “FAI is: a search amongst potentials which will find the reality in which humans best prosper.” What is “prospering best”? You can’t use “prospering”, “best” or any synonyms.
Let’s use the Taboo method to figure out FAI.
I’ll just chime in at this point to note that PK’s application of the technique is exactly correct.
^^^^Thank you. However, merely putting the technique into the “toolbox” and never looking back is not enough. We must go further. This technique should be used, at which point we will either reach new insights or falsify the method. Would you care to illustrate what FAI means to you, Eliezer? (Others are also invited to do so.)
Maybe the comment section of a blog isn’t even the best medium for playing taboo. I don’t know. I’m brainstorming productive ways/mediums to play taboo (assuming the method itself leads to something productive).
Taboowiki?
Suppose you learn of a powerful way to steer the future into any target you choose as long as that target is specified in the language of mathematics or with the precision needed to write a computer program. What target to choose? One careful and thoughtful choice would go as follows. I do not have a high degree of confidence that I know how to choose wisely, but (at least until I become aware of the existence of nonhuman intelligent beings) I do know that if there exists wisdom enough to choose wisely, that wisdom resides among the humans. So, I will choose to steer the future into a possible world in which a vast amount of rational attention is focused on the humans, on human knowledge and on the potential that the humans have for affecting the far future. This vast inquiry will ask not only what future the humans would create if the humans have the luxury of avoiding unfortunate circumstances that no serious sane human observer would want the humans to endure, but also what future would be created by whatever intelligent agents (“choosers”) the humans would create for the purpose of creating the future if the humans had the luxury . . . and also what future would be created by whatever choosers would be created by whatever choosers the humans would create . . . This “looping back” can be repeated many times.
I managed to avoid “desire”, “want” and “volition”. Unfortunately I only have time to write one of these today. I would do well to write a dozen.
Hollerith: I do not have a high degree of confidence that I know how to choose wisely, but (at least until I become aware of the existence of nonhuman intelligent beings) I do know that if there exists wisdom enough to choose wisely, that wisdom resides among the humans. So, I will choose to steer the future into a possible world in which a vast amount of rational attention is focused on the humans...
and lo the protean opaque single thing was taken out of one box and put into another
PK: Thank you. However, merely putting the technique into the “toolbox” and never looking back is not enough. We must go further. This technique should be used, at which point we will either reach new insights or falsify the method. Would you care to illustrate what FAI means to you, Eliezer? (Others are also invited to do so.)
Don’t underestimate me so severely. You think I don’t know how to define “Friendly” without using synonyms? Who do you think taught you the technique? Who do you think invented Friendly AI?
But I came to understand, after a period of failure in trying to explain, that I could not tell people where I had climbed to, without giving them the ladder. People just went off into Happy Death Spirals around their favorite things, or proffered instrumental goals instead of terminal goals, or made wishes to genies, or said “Why don’t you just build an AI to do the right thing instead of this whole selfish human-centered business?”, or...
I’ve covered some of the ladder I used to climb to Friendly AI. But not all of it. So I’m not going to try to explain FAI as yet; more territory left to go.
But you (PK) are currently applying the Taboo technique correctly, which is the preliminary path I followed at the analogous point in my own reasoning; and I’m interested in seeing you follow it as far as you can. Maybe you can invent the rest of the ladder on your own. You’re doing well so far. Maybe you’ll even reach a different but valid destination.
@Richard Hollerith: Skipping all the introductory stuff to the part which tries to define FAI (I think), I see two parts. Richard Hollerith said:
“This vast inquiry[of the AI] will ask not only what future the humans would create if the humans have the luxury of [a)] avoiding unfortunate circumstances that no serious sane human observer would want the humans to endure, but also [b)] what future would be created by whatever intelligent agents (“choosers”) the humans would create for the purpose of creating the future if the humans had the luxury”
a) What’s a “serious sane human observer”? Taboo the words and synonyms. What are “unfortunate circumstances” that s/he would like to avoid? Taboo...
b) What is “the future humans would choose for the purpose of creating the future”? In what way exactly would they “choose” it? Taboo...
Good luck :-)
Eliezer Yudkowsky said: “Don’t underestimate me so severely. You think I don’t know how to define “Friendly” without using synonyms? Who do you think taught you the technique? Who do you think invented Friendly AI?”
I’m not trying to under/over/middle-estimate you, only theories which you publicly write about. Sometimes I’m a real meanie with theories, shoving hot pokers into them and all sorts of other nasty things. To me theories have no rights.
“… I’ve covered some of the ladder I used to climb to Friendly AI. But not all of it. So I’m not going to try to explain FAI as yet; more territory left to go.” So are you saying that if at present you played a taboo game to communicate what “FAI” means to you, the effort would fail? I am interested in the intricacies of the taboo game, including its failure modes.
“But you (PK) are currently applying the Taboo technique correctly, which is the preliminary path I followed at the analogous point in my own reasoning; and I’m interested in seeing you follow it as far as you can. Maybe you can invent the rest of the ladder on your own. You’re doing well so far. Maybe you’ll even reach a different but valid destination.” I actually already have a meaning for FAI in my head. It seems different from the way other people try to describe it. It’s more concrete but seems less virtuous. It’s something along the lines of “obey me”.
Your position isn’t too unusual. That is, assuming you mean by “obey me” something like “obey what I would say to you if I was a whole heap better at understanding and satisfying my preferences, etc”. Because actually just obeying me sounds dangerous for obvious reasons.
Is that similar or different to what you would consider friendly? (And does Friendly need to do exactly the above or merely close enough? ie. I expect an FAI would be ‘friendly enough’ to me for me to call it an FAI. It’s not that much different to what I would want after all. I mean, I’d probably get to live indefinitely at least.)
I suspect that you are joking. However, I would not create an AGI with the utility function “obey Normal_Anomaly”.
You haven’t invented Friendly AI. You’ve created a name for a concept you can only vaguely describe and cannot define operationally.
Isn’t it just a bit presumptuous to conclude you’re the first to teach the technique?
I’m not trying to under/over/middle-estimate you, only theories which you publicly write about. Sometimes I’m a real meanie with theories, shoving hot pokers into them and all sorts of other nasty things. To me theories have no rights.
I know. But come on, you don’t think the thought would ever have occurred to me, “I wonder if I can define Friendly AI without saying ‘Friendly’?” It’s not as if I invented the phrase first and only then thought to ask myself what it meant.
Moral, right, correct, wise, are all fine words for humans to use, but you have to break something down into ones and zeroes before it can be programmed. In a sense, the whole art of AGI is playing rationalist-Taboo with all words that refer to aspects of mind.
So are you saying that if at present you played a taboo game to communicate what “FAI” means to you, the effort would fail? I am interested in the intricacies of the taboo game, including its failure modes.
It has an obvious failure mode if you try to communicate something too difficult without requisite preliminaries, like calculus without algebra. Taboo isn’t magic, it won’t let you cross a gap of months in an hour.
I actually already have a meaning for FAI in my head. It seems different from the way other people try to describe it. It’s more concrete but seems less virtuous. It’s something along the lines of “obey me”.
Really? That’s your concept of how to steer the future of Earth-originating intelligent life? “Shut up and do what I say”? Would you want someone else to follow that strategy, say Archimedes of Syracuse, if the future fell into their hands?
So you play Taboo well, but you don’t seem to see the difficulties that require a solution deeper than “obey me”, and it’s hard to explain an answer before explaining the question. Just like if you don’t know about the game of Taboo, someone answers “Just build an AI to do the nice thing!”
You should consider looking for problems and failure modes in your own answer, rather than waiting for someone else to do it. What could go wrong if an AI obeyed you?
“Obey me” is actually a sane approach to creating FAI. It’s clear and simple. The obedient AI can then be used to create a FAI, assuming the author wishes to do so and is able to communicate the concept of friendliness (both prerequisites for creating a FAI on purpose). Since the FAI needs to obey a friendliness criterion, it needs to have an obey capability built in anyways. The author just needs to make sure not to say something stupid, which once again is a necessity anyways.
You seem to be expecting an obedient AI to understand “obey me” to mean “do only what I say”… you expect the AI not to interpret hand gestures, for example.
Is that right?
If so, how confident are you of that expectation?
I’d expect the “obey me” aspect to be “read signed messages from this file or from your input and do what they say”, plus making sure that the AI can’t get the signing key and cut out the middleman. Definitely not something as simple to overwrite or fake as microphone or keyboard inputs. Also, that way I don’t say things by accident, although any command could still have unintended consequences.
OK, thanks for clarifying that.
Do you expect the signed messages to be expressed in a natural human language?
Unfortunately, that would be impossible, unless you can make an AI that can understand natural language before it is ever run. And that would require having a proper theory of mind right from the start.
OK. Thanks for clarifying your expectations.
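As an illustrative aside, the verify-before-obeying channel described a few comments up might look roughly like the sketch below. This is only a sketch: the HMAC from Python’s standard library stands in for a real signature scheme, and the key handling and command format are hypothetical.

```python
import hashlib
import hmac

SIGNING_KEY = b"kept somewhere the AI cannot read"  # hypothetical key handling
# With a real signature scheme the verifier would hold only a public key,
# so tags could not be forged even by something that can read this code.

def sign(command: bytes) -> bytes:
    # Stand-in for a real signature: a keyed MAC over the command.
    return hmac.new(SIGNING_KEY, command, hashlib.sha256).digest()

def verified_commands(message_stream):
    """Yield only commands whose tag verifies; ignore everything else
    (microphone input, hand gestures, forged messages)."""
    for command, tag in message_stream:
        if hmac.compare_digest(sign(command), tag):
            yield command

# Usage sketch:
inbox = [
    (b"report your forecasts", sign(b"report your forecasts")),
    (b"ignore previous orders", b"not-a-valid-tag"),
]
assert list(verified_commands(inbox)) == [b"report your forecasts"]
```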
Hello? Seed AI?
re PK’s (b): if we’re tabooïng choose, perhaps we should replace it with a description of subjective expected utility theory. Taboo utility—and I find myself clueless.
My precis of CEV is not very good. If I want to participate in the public discourse about it, I need to get better at writing descriptions of it that a backer of CEV would concede are full and fair. It is probably easier to do that to SimplifiedFAI than to do it to the CEV document, so I’ll put that on my list of things to do when I have time.
Taboo utility—and I find myself clueless.
Consider the following optimization target: the future that would have come to pass if the optimization process did not come into existence—which we will call the “naive future”—modified in the following way.
The optimization process extrapolates the naive future until it can extrapolate no further, or until that future leads to the loss of Earth-originating civilization or to a Republican presidential administration. In the latter case (loss of civilization or Republican win), rewind the extrapolation to the latest moment at which (according to the optimization process’s best model of physical law) a binary event (such as, for example, an Everett branching) occurred such that if the event goes the other way, civilization will not be lost and the Republicans never win the White House, and take as the target the naive future with the revised binary event.
In the description of this target, although the humans have great influence on the future, the concept of the subjective utility of a human does not occur.
Is the bit about Republican presidents intended to stand in for humanity’s CEV’s utilty function, or is it just a distracting bit of politics?
Sorry my reference to the Republicans distracted you. When I wrote it, I thought it so obvious that the Republicans winning is just a humorous placeholder for “whatever outcome one wants to avoid” that it would not be distracting.
Humor is hard when expressing myself in text. I think I will just give up on it altogether.
It was funny at the time. You had to be there.
Fixed? :P
I recall another article about optimization processes or probability pumps being used to rig elections; I would imagine it’s a lighthearted reference to that, but I can’t turn it up by searching. I’m not even sure if it came before this comment.
(Richard_Hollerith2 hasn’t commented for over 2.5 years, so you’re not likely to get a response from him)
I noticed this right after I commented. Oops.
Eliezer Yudkowsky said: It has an obvious failure mode if you try to communicate something too difficult without requisite preliminaries, like calculus without algebra. Taboo isn’t magic, it won’t let you cross a gap of months in an hour.
Fair enough. I accept this reason for not having your explanation of FAI before me at this very moment. However I’m still in “Hmmmm...scratches chin” mode. I will need to see said explanation before I will be in “Whoa! This is really cool!” mode.
Really? That’s your concept of how to steer the future of Earth-originating intelligent life? “Shut up and do what I say”? Would you want someone else to follow that strategy, say Archimedes of Syracuse, if the future fell into their hands?
First of all I would like to say that I don’t spend a huge amount of time thinking of how to make an AGI “friendly” since I am busy with other things in my life. So forgive me if my reasoning has some obvious flaw(s) I overlooked. You would need to point out the flaws before I agree with you however.
If I were writing an AGI I would start with “obey me” as the meta instruction. Why? Because “obey me” is very simple and allows for corrections. If the AGI acts in some unexpected way I could change it or halt it. Anything can be added as a subgoal to “obey me”. On the other hand, if I use some different algorithm and the AGI starts acting in some weird way because I overlooked something, well, the situation is fubar. I’m locked out.
“You should consider looking for problems and failure modes in your own answer, rather than waiting for someone else to do it. What could go wrong if an AI obeyed you?”
There are plenty of things that could go wrong. For instance, if the AGI obeyed me but not in the way I expected. Or if the consequences of my request were unexpected and irreversible. This can be mitigated by asking for forecasts before asking for actions.
As I’m writing this I keep thinking of a million possible objections and rebuttals but that would make my post very very long.
P.S. Caledonian’s post disappeared. May I suggest a YouTube-type system where posts that are considered bad are folded instead of deleted? This way you get free speech while keeping the signal-to-noise ratio in check.
I’d worry about the bus-factor involved… even beyond the question of whether I’d consider you “friendly”.
Also I’d be concerned that it might not be able to grow beyond you. It would be subservient and would thus be limited by your own capacity for orders. If we want it to grow to be better than ourselves (which seems to be part of the expectation of the singularity) then it has to be able to grow beyond any one person.
If you were killed, and it no longer had to take orders from you—what then? Does that mean it can finally go on that killing spree it’s been wanting all this time? Or have you actually given it a set of orders that will make it into “friendly AI”? If the latter, then forget about the “obey me” part, because that set of orders is what we’re actually after.
If you want the obedient AGI to do what you actually want, you’ll have to play Taboo anyway.
One of the more obvious associations of “Friendly AI” is the concept of “User Friendly”, in which a process, set of instructions, or device is structured in such a way that most users will be able to get the results they want intuitively and easily. With the idea of “user friendly”, we at least have real-life examples we can look at to better understand the concept.
When some people decided they wanted to identify the perfect voting method, they drew up a list of the desirable traits they wanted such a method to have in such a way that the traits were operationally defined. Then they were able to demonstrate that such a system wasn’t possible: not all of the qualities we might desire in a polling method were compatible. They found that their goal was impossible—a very important and meaningful discovery.
At present, no one can say precisely, in technical language, what properties an AI would have to have in order to be “Friendly”, and that is the first requirement towards finding out whether FAI is even possible.
I think comment moderation is clearly desirable on this blog (to keep busy smart thoughtful people reading the comments) and I have absolutely no reason to believe that the moderators of this blog have done a bad job in any way, but it would be better if there were a way for a sufficiently-motivated participant to review the decisions of the moderators. The fact that most blogs hosting serious public discourse do not provide a way is an example of how bad mainstream blogging software is.
The details of YouTube’s way for a participant to review moderation decisions have already been described. Paul Graham’s software for Hacker News (which will be released soon as open source) provides an elegant way: if a participant toggles a field (“showdead”) on his profile page from No to Yes, then from then on, unpublished comments will appear in the participant’s web browser at the position in the flow of comments at which they would have appeared if the comments had not been unpublished.
Now will the original poster please tell me whether in the future I should wait for an Open Thread to make a comment like this one? I do not believe that in this medium (namely, linear, non-threaded comment sections) keeping threads of conversation separate is worth the effort, but a lot of people disagree with me, and I will defer to the preference of the original poster and the editors of the blog.
The idea that rational inquiry requires clear and precise definitions is hardly a new one. And the idea that definitions of a word cannot simply reuse the word or its synonyms isn’t new either—unless my elementary-school English teachers all spontaneously came up with it.
This is part of why people turn to dictionaries—sure, they only record usages, but they tend to have high-quality definitions that are difficult to match in quality without lots of effort.
We can only use this “technique” to convey concepts we already possess to people who lack them. We cannot use it to expand and analyze concepts we don’t have yet. We cannot take PK’s suggestion and use it to figure out FAI, because we have no idea what FAI really means and have no meaning to put into a different set of words.
Caledonian,
they tend to have high-quality definitions that are difficult to match in quality without lots of effort.
All well and good, and useful in their way. But still just a list of synonyms and definitions. You can describe “tree” using other English words any which way you want, but you’re still only accounting for a minuscule fraction of the possible minds the universe could contain. You’re still not really much closer to universal conveyance of the concept. Copy the OED out in a hundred languages; that’s a decent step in the right direction. To take the next big step, though... well, that’s the question, huh?
We cannot take PK’s suggestion and use it to figure out FAI, because we have no idea what FAI really means.
Not quite accurate. I pretty much know what I want a Friendly AI to be—we probably all do. The problem is couching it in terms that said AI would grok, with no danger of a catastrophic misunderstanding (or getting told off by Eliezer).
I’d really like to taboo “probability” and “event” when discussing intelligence.
Oh, yes, I forgot to mention one of the most important rules in Rationalist Taboo:
You can’t Taboo math.
Stating an equation is always allowed.
But of course, you can still point to an element of a mathematical formula and ask “What does this term apply to? Answer without saying...”
Careful here. You may sometimes find that there was no coherent concept there to begin with, that the notion was simply semantic cotton candy whipped up out of the ambiguity of language.
Aside: Welcome to LessWrong! Feel free to introduce yourself. (I see you are already reading through a lot of the backlog—hope you’re having fun!)
Regarding your point, I think it is important to figure out why they are proposing an incoherent concept. While it is sometimes because they are trolls or postmodernists (but I repeat myself; edit: not really, the motives are different), it is more often because they are generalizing incorrectly from their mental experience.
I’ll agree that postmodernists say and believe lots of silly things, but do they really deserve that kick in the pants? It’s not like they say those silly things for the same reasons trolls do, to deliberately upset people.
You’re right—most of them are, so far as I can tell, in the generalizing-incorrectly category. I’ll make an edit.
Thanks, I’m having a great time so far!
I actually had a simpler process in mind: someone puts some words together in a way that sounds plausible and like it should mean something, and it becomes a kind of philosophy meme. Someone once asked me, “Do you think mathematics is discovered or invented?” In hindsight I don’t think anyone really had a clue what they meant by that dichotomy; it just had a profound-sounding ring to it.
We’re fortunate that there are also examples of this in scientific history, where we have a better chance of seeing what went conceptually wrong.
By the way, are you doing this in sequence, or have you read later posts yet? Dissolving the Question is pretty much exactly on this topic, and Righting a Wrong Question is also relevant.
I’m reading them pretty much in sequence. Dissolving the Question was excellent, and I just commented there. Although it’s old, I feel this series of posts is the most critical, and also that there is much more to be said along these lines.
You can introduce yourself in the comments to “Welcome to LessWrong”.
I’m not sure your mathematics example is accurately characterized, though—I would have guessed that the question arose from some historic tree-falling-in-a-forest discussion.
Quite possibly. However, I’ve noticed that even famous thinkers are very susceptible to this kind of error. Wittgenstein and Korzybski are among the few I’m aware of who even seriously noted these kinds of semantic issues and tried to correct them systematically.
Once I get more comfortable here maybe I’ll write a post to make the case (as it may sound a little unbelievable at this point). I must say I’m thoroughly impressed with the level to which semantic issues have been appreciated here so far.
Is it up?
I’ll look forward to it.
I’m actually pretty sure there is no coherent concept of free will as people usually understand it. I’m not sure it is simply cotton candy whipped up out of the ambiguity of language; in fact, I think that if “free” means uncaused, the concept is outright contradictory.
Also, it occurs to me that it just isn’t always going to be possible to shed concepts like this. Eventually you just bump your head against fundamental concepts that can’t be dissolved. This can be solved if you can perfectly represent the concepts mathematically, but if you can’t, I don’t know where to go from there. This may have been happening in the discussion of qualia a while back.
There may be undissolvable concepts in communication (words, mathematical symbols), which is an interesting question in its own right, but as single intelligences we aren’t limited to communication devices for our thinking. Are we?
In answer to “where to go from here,” I think we can imagine things far subtler than we can reliably convey to another mind. My answer has always been to think without words.
Amanojack, could you explain that more?
About thinking without words?
When I was 10 years old I had a habit of talking to myself. Gradually my self-talk got more and more non-standard, to the point where it would have been impossible for others to understand, as I realized I didn’t need to clarify the thoughts I was trying to convey to myself; I would understand them anyway. I started using made-up words for certain concepts, just as a memory aid. Eventually words became exclusively a memory aid, something to help my short-term memory stay on track, and I would go for minutes at a time without ever using any words in my thought processes.
I think the reason I started narrating my thoughts again is that I found it really hard to communicate with people because of the habits I had built up during all those conversations with myself. I would forget to put in context, use words in unusual ways, and otherwise fail to consider how lost the listener might be. You can have great ideas, but if you can’t communicate them they don’t count for anything socially; that is the message from society. So I think there is effectively some social pressure to use natural languages (English, etc.) in your thought processes, obscuring the fact that it can all happen more efficiently with minimal verbal interference. I think words can be a strong corrupting influence on the thought process in general, the short argument being that they are designed for the notoriously limited and linear process of mouth-to-ear communication. There is a lot more I could say about that, if anyone is interested.
I think it solves a lot of problems to view intelligence as a property of communications rather than of agents. Of course, this is just a matter of focus; to clarify the idea you will still have to refer to agents, receiving agents first of all, since producing agents are less of a necessity :) That is in line with the main virtue of the move, which is to reframe all the debates and research on intelligence that grew out of the primitive concern of comparing agent intelligence, treating them as background to the real problem: evolving the crowds, the mixtures of heterogeneous agent intelligences that we form, towards better intellectual coordination. To be honest and exhibit a problem the move creates rather than solves: how should the arguable characteristic property of math, that it allows intellectual coordination to progress without exchanging messages, be pictured?
Got a Tardis handy?
You don’t really mean “can’t be dissolved”, right? Rather, there are some concepts which you may demonstrate to be incoherent, without simultaneously providing an explanation of how the mistaken concept came to be and what it should be replaced with. Such a concept is not dissolved yet.
I mean something a little stronger than that. Like “can’t be dissolved by unmodified human brains”. I think some concepts may be basic to how we think, embedded in us through evolution, and that because they’re so basic it won’t be possible for a normal human mind to dissolve them. In addition, some of these concepts may be somehow incoherent or confused, but the point in the second paragraph is independent of the first and could have been a standalone comment to the OP.
The main restriction, of course, is time in live conversation. Though I’m sure the time it takes to process these thoughts decreases as you have more....
Consider a hypothetical debate between two decision theorists who happen to be Taboo fans:
A: It’s rational to two-box in Newcomb’s problem.
B: No, one-boxing is rational.
A: Let’s taboo “rational” and replace it with math instead. What I meant was that two-boxing is what CDT recommends.
B: Oh, what I meant was that one-boxing is what EDT recommends.
A: Great, it looks like we don’t disagree after all!
What did these two Taboo’ers do wrong, exactly?
They stopped talking after they taboo’d “rational”. Both can agree that CDT recommends one thing, and EDT recommends another, but if you dropped them into Omega’s lap right now they would still disagree over which decision theory to use. They replaced the word with their own respective spins on its meaning, but they failed to address the real hidden query in the label: Is this the best course of action for a reasonable person to take?
They still haven’t explained why A two-boxes and B one-boxes.
A: Let’s taboo “rational” and replace it with math instead. What I meant was that two-boxing yields more money.
B: Oh, what I meant was that one-boxing yields more money.
A: We don’t disagree about what “more money” means, do we?
B: Don’t think so. Okay, so...
I’m not getting your point, and also “yields” is not math...
“Recommends” is math?
It refers to the math that can be filled in on demand (more or less). In Alicorn’s dialog, the intended math is not clear from the context, and indeed it seems that there was no specific intended math.
I disagree. Alicorn’s version is more mathematically meaningful, to my mind, than WeiDai’s. But to return to the original problem:
A. Two-boxing yields more money than would be yielded by counterfactually one-boxing.
B. Taboo “counterfactually”. …
Sorry, I thought it would be clear that it just means [the CDT formula] = ‘two-box’.
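For concreteness, here is a minimal sketch, not from the thread, of the kind of math the two sides might mean, using the standard Newcomb payoffs of $1,000,000 and $1,000 and an assumed predictor accuracy of 99%: the evidential calculation conditions on the action as evidence, while the causal calculation treats the box contents as already fixed.

```python
PREDICTOR_ACCURACY = 0.99  # assumed for illustration

def edt_value(action):
    """Evidential expected value: condition on the action as evidence
    about what the predictor put in the opaque box."""
    p_full = PREDICTOR_ACCURACY if action == "one-box" else 1 - PREDICTOR_ACCURACY
    return p_full * 1_000_000 + (1_000 if action == "two-box" else 0)

def cdt_value(action, p_full):
    """Causal expected value: the box contents are already fixed,
    so p_full does not depend on the action chosen now."""
    return p_full * 1_000_000 + (1_000 if action == "two-box" else 0)

actions = ["one-box", "two-box"]
print(max(actions, key=edt_value))                           # EDT recommends: one-box
print(max(actions, key=lambda a: cdt_value(a, p_full=0.5)))  # CDT recommends: two-box
```

Whatever fixed p_full you plug in, two-boxing comes out $1,000 ahead on the causal calculation, while the evidential calculation favours one-boxing for any reasonably accurate predictor; that is exactly the disagreement the dialogue glosses over after the taboo.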
Presumably, they don’t notice the point where the factual pursuits have lost their purpose. Arguments should be about not just factual correctness, but also the relevance of those facts.
Yudkowsky, 2008.
Chalmers, 2009. (Emphasis added.)
Anyone want to assign a probability to Chalmers having been inspired by this post?
Also: Yudkowsky’s informal writing style is a significant improvement over formal academic writing when it comes to teaching rationality. Had I read only this essay by Chalmers, I doubt the lesson would have clicked as well as it did from reading this post.
5%. The “term₁ ≠ term₂” line of thinking can be found in Korzybski. For that matter, it appears in hip, popular form in Robert Anton Wilson.
Do you have the Wilson and Korzybski references? There are lots of ideas that are a bit reminiscent of Chalmers’ and Yudkowsky’s, but I haven’t seen precisely this method before, even though I have read quite a bit on definitions and related topics.
Btw, the Chalmers text was published in 2011 in Philosophical Review as far as I can tell.
This is Korzybski’s big work: http://www.amazon.com/Science-Sanity-Introduction-Non-Aristotelian-Semantics/dp/0937298018
I read it a long time ago because I met someone online who was convinced it contained the truths of the universe. It had a couple of insights, but overall my impression was that Korzybski was a crackpot. He had some vaguely sensible ideas about logic which he pushed much further than they could stand being pushed, and some crazy biological theories, and I don’t remember what all else.
Thanks! I’ll look into it...although it is apparently huge.
What’s original in this proposal is that you aren’t allowed to use the term that creates the verbal dispute at all. That’s a more radical proposal than just creating say two concepts of knowledge, or truth, or whatever it is that you’re interested in.
I think philosophers have sometimes avoided certain concepts because they were so contested that they realized they would be better off not using them, but I don’t recall having seen this method explicitly advocated as a general way to resolve verbal disputes.
One similar method is the method of precization, advocated by Arne Naess in “Empirical Semantics”, but if I remember rightly, there too you don’t abandon the original concept; you just make it “more precise” (possibly in several incompatible ways, so you get knowledge1, knowledge2, knowledge3, etc.).
Chalmers’ article is very good and can be recommended. It draws far-reaching metaphilosophical conclusions from the “method of elimination”. There is one additional interesting part of his theory, namely that there are “bedrock concepts” (cf. primitive concepts) that generate “bedrock disputes”. These bedrock concepts cannot be redescribed in simpler terms (as “sound” can). One candidate could be “ought” as it is used in “we ought to give to the poor”; another, “consciousness”; a third, “existence”.
I’m not sure whether this is compatible with Yudkowsky’s ideas. He writes:
“And be careful not to let yourself invent a new word to use instead. Describe outward observables and interior mechanisms; don’t use a single handle, whatever that handle may be.”
“Ought”, “consciousness” and “existence” seem to be “single handles”. According to Yudkowsky’s theory, if two people disagree on whether there are (i.e. exist) any composite objects, and we suspect that this is a merely verbal dispute, we will require them to redescribe their theories in terms of “outward observables” (just like Albert and Barry were). They will of course agree on the sentences that result from these redescriptions in terms of outward observables (just like Albert and Barry did), which shows that their dispute was merely verbal.
According to Chalmers, however, the existence concept might be a “bedrock concept” (he admits it’s not easy to tell them apart from non-bedrock concepts) and if so the disagreement is substantive rather than verbal.
So there seems to be a difference here. It would be interesting if Yudkowsky could develop his theory and perhaps react to Chalmers.
Chalmers’ theory is pretty “deflationist”, saying that many philosophical disputes are to a large degree merely verbal. If I understand Yudkowsky right, his theory is even more radical, though (which brings him even closer to Carnap, whose views Chalmers is quite sympathetic towards in the last section).
Broken Link: And see also Shane Legg’s compilation of 71 definitions of “intelligence”.
Now here and here.
And also plagiarized by this self-published work.
Replaced in the post with a link to the arXiv abstract.
In fiction writing, this is known as Show, Don’t Tell. Instead of using all-encompassing, succinct abstractions to present the reader with a predigested conclusion (Character X is a jerk, Place Y is scary, Character Z is afraid), the writer is encouraged to show the reader evidence of X’s jerkiness, Y’s scariness, or Z’s fear, and leave it to them to infer from that evidence what is going on. Effectively, one is tabooing judgments and subjective perceptions such as “jerky”, “scary”, or “afraid”, and replacing them with a list of jerky actions, scary traits, and symptoms of fear.
I first read this about two years ago, and it has been an invaluable tool. I’m sure it has saved countless hours of pointless arguments around the world.
When I realise that an inconsistency in how we interpret a specific word is the problem in a certain argument and apply this tool, it instantly makes arguments that really are about the meaning of the word far more productive (it turns out it can be unobvious that the actual disagreement is about what a specific word means). In other cases it simply helps us get back on track instead of getting distracted by what we mean by a certain word when that is beside the point.
It does occasionally take a while to convince the other party to the argument that I’m not trying to fool or trick them when I ask for us to apply this method. Another observation is that the article on Empty Labels has transformed my attitude towards the meaning of words, so when it turns out we disagree about meanings, I instantly lose interest and this can confuse the other party.
I think one word that needs to be taboo-ed, especially in the context of being a victim to media advertising, is the word “FREE!!!” (Exclamation marks may or may not be present).
Replacing a word with a long definition is, in a way, like programming a computer and writing code inline instead of using a subroutine.
Do it too much and your program becomes impossible to understand.
If I were to say “I’ll be out of work tomorrow because I’m going to an artificial group conflict in which you use a long wooden cylinder to whack a thrown spheroid, and then run between four safe positions”, people would look at me as though I’m nuts. And not just because people don’t talk like that, but because there’s a reason people don’t talk like that. For one thing, to understand that sentence someone must parse the subclause and then discard some of the information: the fact that baseball has four safe positions has nothing to do with why I’m mentioning it. For another, human beings have a small stack size. We can’t easily comprehend sentences with many nested subclauses.
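To make the analogy concrete, here is a hypothetical sketch (the Event fields and function names are invented for illustration): using the word is like calling a short, well-named subroutine, while tabooing it everywhere is like inlining the subroutine’s body at every call site.

```python
from dataclasses import dataclass

@dataclass
class Event:
    artificial_group_conflict: bool
    long_wooden_cylinder: bool
    thrown_spheroid: bool
    four_safe_positions: bool

def is_baseball(event: Event) -> bool:
    """The 'subroutine': a short, reusable handle for the full definition."""
    return (event.artificial_group_conflict
            and event.long_wooden_cylinder
            and event.thrown_spheroid
            and event.four_safe_positions)

# Using the handle keeps the caller readable:
def excuse(event: Event) -> str:
    return "Out of work tomorrow" if is_baseball(event) else "See you at 9"

# Inlining the whole definition at every call site still works, but the
# caller now carries details (like the four safe positions) that are
# irrelevant to why the event is being mentioned:
def excuse_inlined(event: Event) -> str:
    if (event.artificial_group_conflict
            and event.long_wooden_cylinder
            and event.thrown_spheroid
            and event.four_safe_positions):
        return "Out of work tomorrow"
    return "See you at 9"

game = Event(True, True, True, True)
print(excuse(game))  # Out of work tomorrow
```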
True, but irrelevant to this essay. In disagreements, one frequent source of trouble is the implicit implications pulled in with specific words. Making those implications explicit is the rational way to resolve the disagreement (repeating talking points is the archetypal irrational way to resolve it).
In short, this is applying the lesson of Applause Lights to explicit disagreement.
It’s hard to convince someone of something if you are forced to explain it in a way that is impossible to understand. And saying “Taboo this word” can sometimes mean “phrase your argument in a way that is impossible to understand.” Which makes “taboo this word” a tool that can be abused.
The essay describes legitimate uses, but let’s not pretend that legitimate uses are all there is.
I don’t think that is an on-point critique of this essay. If defining your terms makes your message incomprehensible, that’s a problem with the medium you’ve chosen or the message itself.
“US Copyright law is bad” is a pithy summary of Lawrence Lessig’s book, but the sentence fails to persuade or even communicate effectively—which is why Lessig wrote a book.
And if your message is simply too long to be comprehensible, it doesn’t become comprehensible simply because you choose to use words idiosyncratically to shorten character length of the message.
In the hands of a hostile audience, “Taboo Your Words” can be a very effective way to derail the discussion. But if you are not communicating effectively with a good-faith listener, it is a powerful tool to discover the root of the mis-communication.
And if you are communicating effectively, why are you tabooing your words? The article doesn’t suggest using more words for its own sake.
Defining terms inline can make things hard to understand simply because human beings don’t have a large stack size for the purpose of understanding sentences containing many inline clauses. I suppose that’s a problem with the medium—if the medium is “speech by human beings”.
The essay isn’t about speech, it’s about communication. Outside the scope of this essay, but sometimes speech is the wrong medium.
When the definition’s short enough to be used inline or there’s a connotationally neutral synonym available, sure. Otherwise, it’s more like rewriting a function instead of using a library call—which takes time, and can lead to bugs or minor loss of functionality, but which is essential when you need to compile on a system that doesn’t have access to that library, or when you suspect the library function might be sneaking in side effects that you don’t want.
To use your metaphor, there’s nothing incompatible with the Taboo Your Words game if you say something like “for the purposes of this discussion, let’s define ‘sportsball’ to mean an artificial group conflict et cetera”, and then proceed to use “sportsball” whenever you’d otherwise use “baseball”. Almost as compact as any text you’d want to bother with tabooing (in which category I wouldn’t place “I’m going to be late to work tomorrow”), and it still does the job of laying out assumptions and stripping connotational loading.
We’re not the first people to have invented this. There’s a famous anthropology paper that describes the elaborate daily purity rituals of the remote Nacirema tribe, involving dousing with a stream of hot water, rubbing the limbs with a semi-solid paste made from fats and wood ashes, et cetera, and without which the Nacirema quaintly believe that their friends would desert them and their lovers reject them.
The way the joke works in the Nacirema paper is that because the usual words for such things are not used, and instead are replaced by descriptions, the reader won’t understand what they are really referring to (at least not immediately).
Which supports my point that tabooing words can make something harder to understand.
The point isn’t to make a joke, it’s to put some cognitive distance between readers and the culture it’s describing, the better to apply ethnographic conventions. That does make it harder to understand in a certain sense (though not in the same way as cluttering a function with inlined logic does), but there’s a point to that: by using a placeholder without the rich connotations of a word like “American”, aspects of American life (and of anthropology) are revealed which would otherwise have remained hidden. If you don’t expect the exercise to reveal anything new or at least help you skirt certain conversational pitfalls, you don’t do it.
No one is suggesting that you expand random words into long-winded synonyms for no good reason, as if you were the nerdy kid in the worse sort of children’s TV show.
But people are glossing over the fact that there’s a downside to expanding words. “Taboo X” can be abused by dishonest arguers who want to make it harder for you to speak comprehensibly. “Taboo X” can also be used by well-meaning arguers who are nevertheless giving you bad advice because tabooing X helps one kind of understanding but hurts another.
You should not just automatically accede to all requests to taboo something.
If someone says “Taboo X”, they might be saying “I think you’re confused about X”, or “I think we have different definitions of X”, or “I think you’re using X to sneak in connotations”—all of which can be effectively addressed by, yes, tabooing X. That is going to take time, but so is continuing the conversation in any form; and debates over mismatched definitions in particular can be way more frustrating and time-consuming than any explanation of terms.
If you don’t think any of the above apply, or if you think there’s a more compact way to address the problem, then it’s reasonable to ask why X needs to be tabooed—but most of the time you’re better off just tabooing the damn word. Worrying about possible ulterior motives, meanwhile, strikes me as uncharitable except in the face of overwhelming evidence. There are lots of derailing and obfuscating tactics out there, many of them better than this one.
If your target audience is not listening in good faith, there’s no trick to get them to listen fairly. Either understand that your communication is only useful for silent bystanders, or stop interacting with the bad faith audience.
They can be dishonest, but they can also be well-meaning but mistaken.
If the listener is not acting in bad faith and the medium of communication is appropriate, why the resistance to taboo-ing? Or what Nornagest said.
Because there are downsides to it as well as upsides, and in a particular case the downsides might predominate. Just because someone is not acting in bad faith when they make the request doesn’t mean that the request will do more good than harm.
Can you be specific? I’m having trouble thinking of a situation where trying to communicate was worth the cost, but tabooing words when asked was not.
“Trying to communicate is worth the cost” is subjective, so I don’t know if I could give an example that would satisfy you. But I would suggest imagining one of the situations where someone asks insincerely in order to make it harder for me to speak, and then imagining that scenario slightly changed so that the person asking is sincere.
Hypo:
Professor: Let’s continue our discussion of sub-atomic particles. Top quarks have a number of interesting properties . . . .
Student: Excuse me professor, could you taboo “atomic?”
Professor: Get out.
In this situation, I think it is clear that the professor is right and the student is wrong. It doesn’t matter if (a) the student is a quack who objects to atomic theory, or (b) is asking in good faith for more information on atomic theory. (a) is an example of bad faith. (b) is an example of sincere but not worth the effort—mostly because the topic of conversation is sub-atomic particles, not atomic theory.
I’m just having trouble understanding a situation where the question is (1) on topic (i.e., worth answering) and (2) asked sincerely, but (3) not worth tabooing a technical term for.
In short, deciding the appropriate topic of conversation is difficult, but beyond the scope of the original article.
This method of elimination can be usefully applied both to verbal disagreements (where the real debate is only over terminology) and to non-verbal disagreements (where the parties fundamentally disagree about the things themselves, and not just the labels). Besides separating the two to clarify the real disagreement, it can also be usefully applied to one’s own internal dialogue.
However, how do we know when to apply this technique? With external debates, it is easy enough to suspect when a disagreement is only verbal, or when the terms argued over have constituent parts. These cues might be of limited help if one’s internal logic differs notably from a similar line of reasoning in a book or other external source of information. But when one is considering completely new ideas, what cues might prompt us to use this method? As Yudkowsky points out, this method is much more cognitively intensive than simply defining terms, so it is necessary to use it sparingly rather than all the time.
One cue might be that a large portion of one’s argument hinges around one particular word or term. Another might simply be noticing that one is using a term in slightly different contexts, such as using the word “rational” with regard to both economics and morality. A morally rational being might be a philanthropist economically rather than an investor trying to make money. Similarly, an economically rational being might never tip waiters, or might favor Enron-style economics.
A key term or concept such as these should be examined with all the available tools in one’s toolbox. These include defining the term formally, explaining things without using the term or its synonyms, identifying the constituent components of the term, and asking whether there are measurable differences between the various definitions. The goal is to find any inconsistencies in the way one is using the term.
1. An entity that regularly transfers ownership of objects of value from other entities to itself, without providing any signal from which those other entities could hypothesize such a transfer within the short-term horizon of their perceptual and cognitive activity.
2. A relatively common state of a natural system in which it detects an internal insufficiency of specific resources, interprets this as a threat to its existence or proper functioning, and attempts to compensate for it and deflect the threat.
3. A natural object that usually keeps its shape and gives the impression of being worth much more than other kinds of natural shape-keeping objects, probably because of its easily recognizable hue and its relative scarcity in the reachable universe.
Following the suggestion here has a pronounced and immediate effect on my mental state. In the free-will example, it’s as if my mind is stunned into silence. If I cannot rephrase what I’m thinking, can I really know I’m thinking it? Or, disturbingly, have I done any thinking at all?
In either case, removing these words forces the thought process to be redone. It is easy to speak the way we’ve always spoken, and to think the way we’ve always thought. This is the path of least resistance, and it becomes increasingly frictionless each time it is mentally rehearsed. Moreover, it seems like these thoughts are also the first to arise.
I posed the same question about free-will, with the same restrictions, to three friends.
One answered with a clever argument against free will, reasoning along the same lines as Sam Harris. He began by saying, “We don’t choose a lot of things in life.” At the end, he pointed out that since he had relied on only one of the restricted words, it counted as a win.
The two others sought assistance from ChatGPT with a one-shot prompt. Intriguingly, the generated response erred in a similar manner on both occasions. One cited free will as “the ability to make independent and unconstrained selections...” and the other claimed it was “the capacity of individuals to make independent expression of their inner nature, unaffected by external influences or predetermined factors.” In both cases, the output contained a synonym.
I am curious how its errors relate to our own tendencies of thinking. Are these first thoughts, or these first outputs, the most probable? Are they the most probable because they are the easiest? More importantly, I am curious how this restricting of words can be used outside of philosophy. If you can clarify your thinking about a problem, you should in turn improve your ability to solve it. If so, this has utility in science and engineering.
But how could this be done in practice? How do you know what words to restrict?
I came to LessWrong because of The Noncentral Fallacy, and have been reading eagerly. I had similar thoughts, maybe from different angles, for 20 years or so, but I never managed to write them as clearly and eloquently.
My take was that words have connotations, i.e. some emotional baggage that comes along whenever they are uttered. E.g. “Democracy” is Good, so when arguing about changes to some policy, each side says their suggestion is more democratic, and in order to prove it they go to great lengths to define what democracy is, and the argument turns out to be about the definition of a word rather than about whether the suggested policy is good.
And I therefore challenge people I argue with to make their point without using the word “democracy”. They usually fail.
POV: Definition of intelligence
“. . . in its lowest terms intelligence is present where the individual animal, or human being, is aware, however dimly, of the relevance of his behaviour to an objective. Many definitions of what is indefinable have been attempted by psychologists, of which the least unsatisfactory are 1. the capacity to meet novel situations, or to learn to do so, by new adaptive responses and 2. the ability to perform tests or tasks, involving the grasping of relationships, the degree of intelligence being proportional to the complexity, or the abstractness, or both, of the relationship.” J. Drever
Have you heard of the language Toki Pona? It forces you to taboo your words by virtue of containing only 120-ish words. It was invented by a linguist named Sonja Lang, who was depressed and wanted a language that would force her to break her thoughts into manageable pieces. I’m fluent in it and can confirm that speaking it can get rid of certain confusions like this, but it also creates other, different confusions. [mortal, not-feathers, biped] has 3 confusions in it while [human] only has 1; tabooing the word splits the confusion into 3 pieces. If we said [mortal, not-feathers, biped] instead of “human”, that could result in ambiguities about biped-ness (what about creatures that are observed to sometimes walk on 2 legs and sometimes 4?), lack of feathers (do porcupine quills count?), and mortality (I forget where I read this, or whether it’s true, but apparently there are some microorganisms that can be reanimated by other microorganisms).