Leave a Line of Retreat
When you surround the enemy
Always allow them an escape route.
They must see that there is
An alternative to death.
—Sun Tzu, The Art of War
Don’t raise the pressure, lower the wall.
—Lois McMaster Bujold, Komarr
I recently happened into a conversation with a nonrationalist who had somehow wandered into a local rationalists’ gathering. She had just declared (a) her belief in souls and (b) that she didn’t believe in cryonics because she believed the soul wouldn’t stay with the frozen body. I asked, “But how do you know that?”
From the confusion that flashed on her face, it was pretty clear that this question had never occurred to her. I don’t say this in a bad way—she seemed like a nice person without any applied rationality training, just like most of the rest of the human species.
Most of the ensuing conversation was on items already covered on Overcoming Bias—if you’re really curious about something, you probably can figure out a good way to test it, try to attain accurate beliefs first and then let your emotions flow from that, that sort of thing. But the conversation reminded me of one notion I haven’t covered here yet:
“Make sure,” I suggested to her, “that you visualize what the world would be like if there are no souls, and what you would do about that. Don’t think about all the reasons that it can’t be that way; just accept it as a premise and then visualize the consequences. So that you’ll think, ‘Well, if there are no souls, I can just sign up for cryonics,’ or ‘If there is no God, I can just go on being moral anyway,’ rather than it being too horrifying to face. As a matter of self-respect, you should try to believe the truth no matter how uncomfortable it is, like I said before; but as a matter of human nature, it helps to make a belief less uncomfortable, before you try to evaluate the evidence for it.”
The principle behind the technique is simple: as Sun Tzu advises you to do with your enemies, you must do with yourself—leave yourself a line of retreat, so that you will have less trouble retreating. The prospect of losing your job, for example, may seem a lot more scary when you can’t even bear to think about it than after you have calculated exactly how long your savings will last, and checked the job market in your area, and otherwise planned out exactly what to do next. Only then will you be ready to fairly assess the probability of keeping your job in the planned layoffs next month. Be a true coward, and plan out your retreat in detail—visualize every step—preferably before you first come to the battlefield.
The hope is that it takes less courage to visualize an uncomfortable state of affairs as a thought experiment, than to consider how likely it is to be true. But then after you do the former, it becomes easier to do the latter.
Remember that Bayesianism is precise—even if a scary proposition really should seem unlikely, it’s still important to count up all the evidence, for and against, exactly fairly, to arrive at the rational quantitative probability. Visualizing a scary belief does not mean admitting that you think, deep down, it’s probably true. You can visualize a scary belief on general principles of good mental housekeeping. “The thought you cannot think controls you more than thoughts you speak aloud”—this happens even if the unthinkable thought is false!
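To make “counting up all the evidence exactly fairly” concrete, here is a small worked example with invented numbers (my illustration, not part of the original essay), using the job-loss scenario above: suppose your prior probability of being included in next month’s layoffs is 20%, and you then learn that your department’s budget was cut—an observation you judge three times as likely if you are on the layoff list as if you are not. Bayes’ theorem gives

$$
P(\text{layoff}\mid\text{cut}) \;=\; \frac{P(\text{cut}\mid\text{layoff})\,P(\text{layoff})}{P(\text{cut}\mid\text{layoff})\,P(\text{layoff}) + P(\text{cut}\mid\neg\text{layoff})\,P(\neg\text{layoff})} \;=\; \frac{0.6 \times 0.2}{0.6 \times 0.2 + 0.2 \times 0.8} \;\approx\; 0.43.
$$

The answer is uncomfortable, but it is the same number whether or not you have planned your retreat; planning the retreat only makes it easier to compute honestly.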
The leave-a-line-of-retreat technique does require a certain minimum of self-honesty to use correctly.
For a start: You must at least be able to admit to yourself which ideas scare you, and which ideas you are attached to. But this is a substantially less difficult test than fairly counting the evidence for an idea that scares you. Does it help if I say that I have occasion to use this technique myself? A rationalist does not reject all emotion, after all. There are ideas which scare me, yet I still believe to be false. There are ideas to which I know I am attached, yet I still believe to be true. But I still plan my retreats, not because I’m planning to retreat, but because planning my retreat in advance helps me think about the problem without attachment.
But the greater test of self-honesty is to really accept the uncomfortable proposition as a premise, and figure out how you would really deal with it. When we’re faced with an uncomfortable idea, our first impulse is naturally to think of all the reasons why it can’t possibly be so. And so you will encounter a certain amount of psychological resistance in yourself, if you try to visualize exactly how the world would be, and what you would do about it, if My-Most-Precious-Belief were false, or My-Most-Feared-Belief were true.
Think of all the people who say that without God, morality is impossible.1 If theists could visualize their real reaction to believing as a fact that God did not exist, they could realize that, no, they wouldn’t go around slaughtering babies. They could realize that atheists are reacting to the nonexistence of God in pretty much the way they themselves would, if they came to believe that. I say this, to show that it is a considerable challenge to visualize the way you really would react, to believing the opposite of a tightly held belief.
Plus it’s always counterintuitive to realize that, yes, people do get over things. Newly minted quadriplegics are not as sad, six months later, as they expect to be, etc. It can be equally counterintuitive to realize that if the scary belief turned out to be true, you would come to terms with it somehow. Quadriplegics deal, and so would you.
See also the Litany of Gendlin and the Litany of Tarski. What is true is already so; owning up to it doesn’t make it worse. You shouldn’t be afraid to just visualize a world you fear. If that world is already actual, visualizing it won’t make it worse; and if it is not actual, visualizing it will do no harm. And remember, as you visualize, that if the scary things you’re imagining really are true—which they may not be!—then you would, indeed, want to believe it, and you should visualize that too; not believing wouldn’t help you.
How many religious people would retain their belief in God if they could accurately visualize that hypothetical world in which there was no God and they themselves have become atheists?
Leaving a line of retreat is a powerful technique, but it’s not easy. Honest visualization doesn’t take as much effort as admitting outright that God doesn’t exist, but it does take an effort.
1And yes, this topic did come up in the conversation; I’m not offering a strawman.
How many rationalists would retain their belief in reason, if they could accurately visualize that hypothetical world in which there was no rationality and they themselves have become irrational?
I just attempted to visualize such a world, and my mind ran into a brick wall. I can easily imagine a world in which I am not perfectly rational (and in fact am barely rational at all), and that world looks a lot like this world. But I can’t imagine a world in which rationality doesn’t exist, except as a world in which no decision-making entities exist. Because in any world in which there exist better and worse options and an entity that can model those options and choose between them with better than random chance, there exists a certain amount of rationality.
Well, a world that lacked rationality might be one in which all the events were a sequence of non sequiturs. A car drives down the street. Then disappears. We are in a movie theater with a tyrannosaurus. Now we are a snail on the moon. Then there’s just this poster of rocks. Then I can’t remember what sight was like, but there’s jazz music. Now I fondly remember fighting in World War II, while evading the Empire with Han Solo. Oh! I think I might be boiling water, but with a sense of smell somehow… That’s a poor job of describing it—too much familiar stuff—but you get the idea. If there were no connection between one state of affairs and the next, talking about what strategy to take might be impossible, or a brief possibility that then disappears when you forget what you are doing and you’re back in the movie theater again with the tyrannosaurus—if ‘you’ is even a meaningful way to describe a brief moment of awareness bubbling into being in that universe. Then again, if at any moment ‘you’ happen to exist and ‘you’ happen to understand what rationality means—I guess, now that I think about it, if there is any situation where you can understand what the word “rationality” means, it’s probably one in which rationality exists (however briefly) and is potentially helpful to you. Even if there is little useful to do about whatever situation you are in, there might be some useful thing to do about the troubling thoughts in your mind.
While that is a world without rationality, it seems a fairly extreme case.
Another example of a world without rationality is a world in which, the more you work towards achieving a goal, the longer it takes to reach that goal; so an elderly man might wander distractedly up Mount Everest to look for his false teeth with no trouble, but a team of experienced mountaineers won’t be able to climb a small hill. Even if they try to follow the old man looking for his teeth, the universe notices their intent and conspires against them. And anyone who notices this tendency and tries to take advantage of it gets struck by lightning (even if they’re in a submarine at the time) and killed instantly.
That reminds me of Hofstadter’s Law: “It will always take longer than you think it is going to take. Even when you take into account Hofstadter’s Law.”
I like both Voltairina’s and your takes on the non-rational world. I was having a lot of trouble working something out.
That said, while Voltairina’s world is a bit more horrifyingly extreme than yours, it seems to me more probable: I can envision a structure of elementary physics that simply changes—functionally at random—far more easily than one in which causality does exist but operates in inverse. I have more trouble envisioning the elementary physics that would bring that into existence without an observational intellect directly upsetting motivated plans.
All that is to say, might not your case be the more extreme one?
...it’s possible. There are many differences between our proposed worlds, and it really depends on what you mean by “more extreme”. Voltairina’s world is “more extreme” in the sense that there are no rules, no patterns to take advantage of. My world is “more extreme” in that the rules actively punish rationality.
My world requires that elementary physics somehow takes account of intent, and then actively subverts it. This means that it reacts in some way to something as nebulous as intent. This implies some level of understanding of the concept of intent. This, in turn, implies (as you state) an observational intellect—and worse, a directly malevolent one. Voltairina’s can exist without a directly malevolent intelligence directing things.
So it really comes down to what you mean by “extreme”, I guess. Both proposed worlds are extreme cases, in their own way.
Fair point.
I suppose I’d just think back to before I met LessWrong. I wouldn’t choose that world.
That’s not the idea that really scares Less Wrong people.
Here’s a more disturbing one: try to picture a world where all the rational skills you’re learning on Less Wrong are actually somehow flawed, and actually make it less likely that you’ll discover the truth, or make you correct less often, for whatever reason. What would that look like? Would you be able to tell the difference?
I must say, I have trouble picturing that, but I can’t prove it’s not true (we are basically tinkering with the way our mind works without a software manual, after all).
related: http://lesswrong.com/lw/9p/extreme_rationality_its_not_that_great/
I’m not sure what “no rationality” would mean. Evolutionarily relevant kinds of rationality can still be expected, like a preference for sexually fertile mates, or fearing spiders/snakes/heights—and if we’re still talking about something at all similar to Homo sapiens, language and cultural learning and such, which require some amount of rationality to use.
I wonder if you might be imagining rationality in the form of essentialism, allowing you to universally turn the attribute off, but in reality there is no such off switch that is compatible with having decision-making agents.
No rationality, or no Bayesianism? Rationality is a general term for reasoning about reality. Bayesianism is the specific school of rationality advocated on LessWrong.
A “world in which there was no rationality” is not even meaningful, just as a “world in which there was no physics” is meaningless. Even if energy and matter behave in a way that’s completely alien to us, there are still laws that govern how they work, and you can call these laws “physics”. Similarly, even if we lived in some hypothetical world where the rules of reasoning are not derived from Bayes’ theorem, there are still rules that can be thought of as that reality’s rationalism.
A world without Bayesianism is easy to visualize, because we have all seen such worlds in fiction. Cartoons take this to the extreme—Wile E. Coyote paints a tunnel and expects Road Runner to crash into it—but Road Runner manages to go through. Then he expects that if Road Runner could go through, he could go through as well—but he crashes into it when he tries.
Coyote’s problem is that his rationalism could have worked in our world—but he is not living in our world. He is living in a cartoon world with cartoon logic, and needs a different kind of rationalism.
Like… the one Bugs Bunny uses.
Bugs Bunny plugs Elmer Fudd’s rifle with his finger. In our world, this could not stop the bullet. But Bugs Bunny is not living in our world—he lives in cartoon world. He correctly predicts that the rifle will explode without harming him, and his belief in that prediction is strong enough to bet his life on it.
Now, one may claim that it is not rationality that gets messed up here—merely physics. But in the examples I picked it is not just that the laws of nature don’t work the way real-world dwellers would expect—it is consistency itself that fails. Let us compare with superhero comics, where the limitations of physics are but a suggestion, but at least some effort is made to maintain consistency.
When Mirror Master jumps into a mirror, he uses his technology/powers to temporarily turn the mirror into a portal. If the Flash is fast enough, he can jump into the mirror after him, before the mirror turns back to normal. The rules are simple—when the portal is open you can pass, when it’s closed you can’t. Even if it doesn’t make sense scientifically, it makes sense logically. But there are no similar rules that can tell Coyote whether or not it’s safe to pass.
Superman can also plug his finger into criminals’ guns to stop them from shooting, just like Bugs Bunny. But Superman can stop the bullets with any part of his body, before or after they leave the barrel. So his successfully plugging the guns is consistent. Bugs Bunny, however, is not invulnerable to bullets. When Elmer Fudd chases after him, rifle blazing, Bugs Bunny runs for his life because he knows the bullets will pierce him. They are stronger than his body can handle. Except… when he sticks his finger into the barrel. Not consistent.
Still—there are laws that govern cartoon reality. Like the law of funny. Bugs Bunny is aware of them—his actions may seem chaotic when judged by our world’s rationality, but they make perfect sense in cartoon world. Wile E. Coyote’s actions make some sense in our world’s rationality, but are doomed to fail when executed under cartoon world logic.
Had I lived in cartoon world, I’d rather be like Bugs Bunny than like Wile E. Coyote: not insist on Bayesianism even though it wouldn’t work, but try to figure out how reasoning in that reality really works and rely on that.
Then again—wouldn’t Bayesianism itself deter me from relying on things that don’t work? Is Wile E. Coyote even Bayesian if he doesn’t update his beliefs every time his predictions fail?
I’m no longer sure I can imagine a world where there is no Bayesianism...
I don’t know. But I would. Irrationality is caused by ignorance, so there will always be tangent worlds (while regarding this current one as prime) in which I give up. There will always be a world where anything that is physically possible occurs (and probably many where even that requirement doesn’t hold).
To put it another way, there has been a moment in time when I was not rational. Is that reason to give up rationality forever? Time could be just another dimension, its manipulation as far out of our grasp as that of other possible worlds.
I enjoy the non-mathy posts. I believe Overcoming Bias is a worthy endeavor, and as a relatively new field of study, the math-oriented posts are important. They are often the most succinct and accurate way to convey concepts. With that said, I find the math posts to be dense with information, perhaps overly so. I find myself unconsciously starting to skim instead of read, and I find it difficult to force myself to pay attention.
The mathy posts appeal to people who are serious about moving this burgeoning field forward, and the non-mathy posts appeal to people who are more casually interested in the concepts, and allow you to have a wider audience. You will have a balance between the two no matter what you attempt, the only question is what your intended audience is, and the best way to reach those people.
Not sure why you got a downvote. Displaying, or worse still obstinately defending, poor reasoning is a valid reason for getting one (I got a big stack of them with a sloppy article and some rushed comments [working on making it better]), but admitting that you aren’t a mathematically focused person and providing feedback on Eliezer’s communication style is no cause for it. Got my upvote.
I enjoy all the posts here, but would love a post on what it means to be rational. Something introductory, something you can link to when you talk with people who think “if you can justify what someone did, no matter what the justification is, the action becomes rational”.
The ability to endure cognitive dissonance long enough to find the resolution to the dissonance, rather than just short-circuiting to something that makes no sense but offers relief from the strain, is a necessary precondition for rational thought.
I don’t think it can be cultivated, and I don’t think there’s a substitute. Either you pass through the gauntlet, or you don’t.
Couldn’t you start with easier cognitive dissonances, and work your way up?
I just want you to get to that “revelation” of yours already. I thought you were approaching it, if you’re talking about neural nets and arithmetic coding. Where does it rank in your schedule? Or is this blog for human reasoning only?
I was expecting to read yet another mathy post tonight, but I was disappointed. Less mathy stuff is okay, but it shouldn’t really come at the cost of anything interesting.
I agree with Kriti—introductory essay, post, etc would be useful.
I prefer the less mathy.
I too prefer less mathy—well, to be precise I’ll actually read the less mathy stuff in the first place.
More to the point, I’ve stopped listening to news reports about global warming—and this is harming my ability to think rationally about it. I’ll change the channel instead of hearing someone say, “You know how we all thought we’ve got 50 years to live? Turns out it’s only 30/25/20.”
[Without having read the comments]
WTF? You say: [...] I was actually advised to post something “fun”, but I’d rather not [...]
I think it was fun!
BTW could we increase the probability of people being honest by basing reward not on individual choices, but on the log-likelihood over a sample of similar choices? (For a given meaning of similar.)
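For what it’s worth, here is a minimal sketch of the kind of reward this comment seems to gesture at—an average log-likelihood score over a sample of reported probabilities. The function name and the example numbers are mine, not from the comment; the relevant property is that the logarithmic score is a proper scoring rule, so reporting your honest probabilities maximizes your expected reward.

```python
import math

def log_likelihood_score(reported_probs, outcomes):
    """Average log-likelihood of the actual outcomes under the reported probabilities.

    reported_probs: probabilities the person assigned to "event happens".
    outcomes: matching list of 0/1 actual outcomes.
    Because the log score is a proper scoring rule, expected reward is maximized
    by reporting one's true beliefs -- the honesty-inducing property asked about.
    """
    total = 0.0
    for p, outcome in zip(reported_probs, outcomes):
        total += math.log(p if outcome == 1 else 1.0 - p)
    return total / len(outcomes)

# Reward based on a sample of similar predictions rather than a single choice:
print(log_likelihood_score([0.9, 0.7, 0.2], [1, 1, 0]))
```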
As a mathematician I like your mathy posts, but this is also very welcome for a reason: it contains practical advice. Some posts are of little direct practical use but this one certainly is.
Keep up the good work!
“this is also very welcome”—I’m referring to this post.
[having read the comments]
Kriti et al.: I’d recommend this and this to anybody who hasn’t already read them. Otherwise I don’t have much idea for introductory texts right now.
I think you should go with the advice and post something fun. Especially so if you have “much important material” to cover in following months. No need for a big hurry to lose readers. ;)
I should however note that one of the last mathy posts (Mutual Information) struck a chord with me and caused an “Aha!” moment for which I am grateful.
Specifically, it was this:
I digress here to remark that the symmetry of the expression for the mutual information shows that Y must tell us as much about Z, on average, as Z tells us about Y. I leave it as an exercise to the reader to reconcile this with anything they were taught in logic class about how, if all ravens are black, being allowed to reason Raven(x)->Black(x) doesn’t mean you’re allowed to reason Black(x)->Raven(x). How different seem the symmetrical probability flows of the Bayesian, from the sharp lurches of logic—even though the latter is just a degenerate case of the former.
Insightful!
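For reference, the symmetry being praised is the standard identity (a textbook fact, added here by me rather than quoted from the post):

$$
I(Y;Z) \;=\; \sum_{y,z} P(y,z)\,\log\frac{P(y,z)}{P(y)\,P(z)} \;=\; I(Z;Y),
\qquad\text{equivalently}\qquad
H(Y)-H(Y\mid Z) \;=\; H(Z)-H(Z\mid Y).
$$

The expression is symmetric in Y and Z because P(y,z) and the product P(y)P(z) are, which is the point the quoted passage is making.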
I agree with SnappyCrunch.
I like non-mathy posts. I particularly enjoyed this one, as it seems to have a clear practical application.
I liked this post, but then again, I like all your posts Eliezer! (I’ve just been hiding behind my feedreader, and so not commenting about it before.)
My opinion about mathy/non-mathy is that you should do what you think is most natural. Most days, you’ll probably want to get on with the mathy exposition (and I am very much looking forward to the more advanced mathy posts), and then sprinkle in something lighter when the occasions to do so arise. For instance, I like that you based today’s post on a recent discussion you had.
I believe this approach would be most conducive to interesting reading.
‘Newly minted quadriplegics’? What’s more fun than that?
Don’t worry too much about who wants what when. Like you say, it’s all important stuff, and at a post a day no-one’s going to complain about the odd vignette. Just keep up the good work.
When I saw the title I thought you were responding to this: http://www.overcomingbias.com/2008/02/more-moral-wigg.html
Thank GOD for non-mathy posts ;-)
There’s a common literary technique used in most storytelling in which the author writes alternating “up” and “down” scenes—it provides pacing and context; it also allows us time to digest the “up” scenes.
It seems to me that the technique is appropriate here—it might be worth making a goal for yourself to write a mathy post, then to follow up with a post on the same topic but without any math in it at all, except maybe references to the previous post. That would be an interesting exercise for you, I think. It’s supposed to be accessible work—how accessible can you make it? Can you write about these mathy topics without numbers?
I don’t know, but if you never try to do impossible things...
There hasn’t been much evidence of atheists forming groups that have the positive aspects that a church/synagogue/mosque holds in the social life of some humans. So you might forgive a theist pretending to be a rationalist for not holding the probability of this happening very high, and for expecting that the world would lack said institutions and would be a worse place.
If rationalists truly want to get rid of religions, without getting rid of humans, we would have to ask ourselves, “What do humans get out of being part of a religion?” And then provide that through organisations.
And please no strawmen of the comfort of ignorance, I am talking about reassurance of being with people who are trying to hold the same goal system.
Eliezer,
You know that you can’t succeed without the math, and slowing down for posts like this is taking away 24 hours that might have been better used to save humanity. Not that this was a bad post, but I think you would be better off letting others write the fun posts unless you need to write a fun post to recover from teaching.
Eliezer, this was a welcome relief from the long series of mathy posts.
Eliezer, suppose it turned out to be the case that:
1) God exists. 2) At some time in the future, tomorrow, for example, God comes to Eliezer Yudkowsky in order to announce His existence. 3) Not only does He announce His existence, but He is willing to have His existence and power tested, and passes every test. 4) He also asserts that according to Eliezer’s CEV, although not according to his present knowledge, God’s way of acting in the world is perfectly moral, even according to Eliezer’s values.
How would you react to these events? Would you write a post about them on OB?
Thanks for feedback, all! The consensus appears to favor leavening mathy posts with less mathy ones. I’ll bear that in mind, though I make no promises—I do have my own agenda here.
Unknown, can’t say I’ve ever thought of that one. I’ve considered how to kill or rewrite a Judeo-Christian type God, but not that particular scenario you’ve just described.
I think I would simply reply to number 4, “I don’t believe that without an explanation.” After all, just because an entity displays great power doesn’t mean it will always tell you the truth.
You can’t necessarily force me to consider believing number 4 because it involves a moral question and those are not subject to forced visualization (by this rule) in the way that factual scenarios are.
You can invent all kinds of Gods and demand that I visualize the case of their existence, or of their telling me various things, but you can’t necessarily force me to visualize the case where I accept their statement that killing babies is a good idea—not unless you can argue it well enough to create a real moral doubt in my mind.
If I myself am in actual doubt on a moral question, then I can visualize it both ways without confusing myself; and then you can demand that I visualize it. But when I am not in doubt, trying to visualize the contrary has the same quality as trying to concretely visualize 2 + 2 = 3, only more so.
I can visualize a mind constructed so as to possess a different morality, of course; but that is not the same as identifying myself with that mind.
This reminds me of an item from a list of “horrible job interview questions” we once devised for SIAI:
Would you kill babies if it was intrinsically the right thing to do? Yes/No
If you circled “no”, explain under what circumstances you would not do the right thing to do:
If you circled “yes”, how right would it have to be, for how many babies?
Evangelism and creationism don’t tend to go down very well here, but you know what’s likely to go down even less well? Claiming to have conclusive evidence against things near-universally believed here (e.g., evolution) and not bothering to provide us with any of it.
I don’t want to mislead you; if you do tell us some of the things you regard as demonstrating that evolution is “a fairy tale”, those things are not likely to get the sort of reception you would prefer them to get. (I say: because you’re claiming to offer conclusive evidence of something that is in fact false, and of course this conclusive evidence is likely to be much worse than you think it is. You might have other explanations.) But just turning up and saying “I know that you guys are catastrophically wrong” but not saying why? Hopeless. That’s not what you do when you actually want to help. It’s what you do when you want to gloat.
(You may not be aware of how smug what you wrote comes off as being, to those who don’t already agree with you. That’s kinda fair, because I am very confident that a lot of things here that assert or presuppose atheism come across as equally smug to you. But, again, if you are actually hoping to help anyone escape from darkness and ignorance, you might want to avoid coming across as smug. But I’m not sure you are. After all, you believe in a god who might well “choose to hinder their understanding”. Why, believing in such a god, you find yourself willing to believe anything that god is purported to have revealed to you, I don’t really know. But if that’s the sort of god you believe in, it’s not surprising if your belief that you are in the light and we are in the dark leads you to gloat rather than to try to enlighten.)
Anyway, I just thought it might be helpful to offer a few words of explanation of the torrent of downvotes you will likely receive if anyone else actually reads what you wrote. I expect you will think of other explanations which are more flattering to you and to your religion, and you may prefer to believe those, but I wouldn’t want not to have tried.
In other words, might makes right?
The great sin against reason is not belief in a God, it’s belief in a good God. But people cling to scraps of unreason and hope in order to endure this horror show of a world.
Are you related to Tom McCabe, who posted on this page years ago? Is there some tragedy that brings you here?
While I appreciate the mathy posts as well as I can, as someone without much training in mathematics I really enjoy these types of posts (I’ve got a large backlog of your more mathy posts bookmarked for me to work through, whereas your non-mathy posts I read as soon as they show up in my feed reader).
Let us have both!
Alternatively, if you want something super scary, try 1), 2), and 3) without 4).
I’ve considered how to kill or rewrite a Judeo-Christian type God
Please make this your next “fun” post. (Speaking of which, I enjoy the digression.)
You can’t necessarily force me to consider believing number 4 because it involves a moral question and those are not subject to forced visualization (by this rule) in the way that factual scenarios are.
But “my CEV judges killing babies as good” (unlike “killing babies is good”) is a factual proposition. You know what your current moral judgments are, but you can’t be certain what the idealized Eliezer would think. You might justifiably judge a repugnant volition too unlikely to be worth imagining—but is it exempt?
This reminds me of an item from a list of “horrible job interview questions” we once devised for SIAI:
Would you kill babies if it was intrinsically the right thing to do? Yes/No
If you circled “no”, explain under what circumstances you would not do the right thing to do:
If you circled “yes”, how right would it have to be, for how many babies? ___
What a horrible, horrible question. My answer is … what do you mean when you say “intrinsically the right thing to do”? The “right thing” according to whom? If it was the right thing according to an authority figure but I disagreed, I probably would not do it. If the circumstances were so extreme that I truly believed it was the right thing (e.g., not killing a baby results in the baby’s death anyway, plus the deaths of a million more), then I would kill babies (assuming I could overcome my aversion to killing).
Actually I don’t really know how I would react. This is how I wish I would act. Calmly theorising in front of the computer never having experienced circumstances remotely as awful is not the same as being in those circumstances when the fear and dread overtakes you. There would probably be a significant shift from what I consider and feel is “me” right now to the “me” I would become in that hypothetical situation.
“This reminds me of an item from a list of “horrible job interview questions” we once devised for SIAI:”
Could you post these?
“I’ve considered how to kill or rewrite a Judeo-Christian type God”
Okay, now I’m curious what you’ve concluded with regards to that. :)
Probably not worth doing more than just talking ’bout it in comments, if that, unless you feel like doing a post on that just for fun.
But as far as this post goes, I also liked it. Useful to have actual suggestions for mental practices to help one debias oneself.
Why do the work of hypothesizing the world without God? It’s not like Nietzsche, Sartre, Camus, Marx, Shaw, Derrida, etc. haven’t done a much better job of it than me, because they were better philosophers than me. However, I also consider Aquinas to be a better philosopher than the aforementioned. Is that so unreasonable?
Thanks for reminding me of The Art of War from your quote. You might be interested in this great translation—http://www.sonshi.com/huynh.html
“The mathy posts appeal to people who are serious about moving this burgeoning field forward, and the non-mathy posts appeal to people who are more casually interested in the concepts” - (Snappycrunch)
Beware of mistaking mathematical thinking for rational thinking; math is a tool like any other, to be used rationally or irrationally. Nassim Taleb demonstrates this very well in his book “Fooled by Randomness”.
There’s nothing casual about being interested in the concepts of rational thinking; even the mathematically minded (who will naturally be more interested in the mathy posts) need the concepts to understand what framework to put the math into.
I’ve considered how to kill or rewrite a Judeo-Christian type God
If God did not exist, it would be necessary to invent him. And if there really is a God, it will be necessary to abolish him!
How does one go about visualizing a world without souls? Or, rather, a world in which nobody believes in souls—and how would this visualization have any bearing on “reality”? It seems like the thought experiment is really: what would I do if everything were the same except I didn’t have a soul?
Regardless of all previous posts.
I think you write better when you are expressing your beliefs and inner thoughts as opposed to the mathematical equation that leads you there.
“Do not dwell in the past, do not dream of the future, concentrate the mind on the present moment.”
Just a thought. Anna
slowing down for posts like this is taking away 24 hours that might have been better used to save humanity.
Sarcasm? Humour? Sincerity?
I’ve considered how to kill or rewrite a Judeo-Christian type God
Please make this your next “fun” post.
Seconded!
I’ve considered how to kill or rewrite a Judeo-Christian type God
Obligatory Pascal: Ah, but what if there’s a tiny chance that He’s reading along to figure out our tactics?
Steven: To kill or rewrite a Judeo-Christian God, obviously, the technique has to work even if the God can read your planning thoughts. It’s a lot easier than dealing with an UFAI, though, because the Judeo-Christian God has anthropomorphic cognitive vulnerabilities and a considerable response time delay. (“You ate the apple?”)
Naturally you prefer to rewrite the God if possible—shame to waste all that power.
Heh, so how do you know that it is not the case that this hypothetical JCG reads overcomingbias but not people’s private thoughts?
(Of course as long as we’re under these weird assumptions then not discussing tactics could be a fatal mistake too, etc etc)
I’m skeptical about the possibility of really carrying out this kind of visualization (or, more broadly, imaginary leap). Here’s why.
I might be able to say that I can imagine the existence of a god, and what the world would be like if, say, it were the Christian one. But I can’t imagine myself in that world—in that world, I’m a different person. For in that world, either I hold the counterfactually true belief that there is such a god, or I don’t. If I don’t hold that belief, then my response to that world is the same as my response to this world. If I do hold it, well, how can I model that?
This point is related to a point that Eliezer made in the comments, that I think just absolutely nails the problem, for a narrower class of the true set of states for which the problem exists:
You can invent all kinds of Gods and demand that I visualize the case of their existence, or of their telling me various things, but you can’t necessarily force me to visualize the case where I accept their statement that killing babies is a good idea—not unless you can argue it well enough to create a real moral doubt in my mind.
Exactly.
But I maintain that you can’t model the existence of a God with the right properties (including omnipotence, omniscience, and omnibenevolence) without being able to model that acceptance.
And likewise, the woman who believed in the soul couldn’t model her reaction to a world without a soul without being able to experience herself as a person who genuinely doesn’t believe in a soul. But she can only have that experience by becoming such a person.
I think this is just a limitation of human psychology. Cf. Thomas Nagel’s great article, What is it like to be a bat? The argument doesn’t directly apply, but the intuition does.
This reminds me of an item from a list of “horrible job interview questions” we once devised for SIAI:
Would you kill babies if it was intrinsically the right thing to do? Yes/No
If you circled “no”, explain under what circumstances you would not do the right thing to do:
I assume by “intrinsically right thing to do”, you do not intend something straightforward like “here are five babies carrying a virus which, if left unchecked, will wipe out half the population of the planet. There is no means by which they can be quarantined; the virus can cross even the cold reaches of space. The only way to save us is to kill them”. I assume rather that you, Eliezer Yudkowsky, hand me a booklet, possibly hundreds of pages long. On page 0 are listed my most cherished moral truths, and on page N is written: “thus, it is right and decent to kill as many babies as possible, whenever the opportunity arises. Any man who walks past a mother pushing a stroller, and does not immediately throttle the infant where it lies, is nothing more than a moral coward.” For all n between 1 and N inclusive, the statements on page n seem to me to follow naturally and self-evidently from my acceptance of the statements on page n-1. As I look up, astonishment etched on my face, I see you standing before me, grinning broadly. You hand me a long, curved blade, and tell me the staff of the SIAI are taking the afternoon off to raid the local nursery, and would I like to join?
Under these circumstances I would assign high probability to the idea that you are morally ill, and wish to murder infants for your own enjoyment. That somewhere in the proof you have given me is a logical error—the moral equivalent of dividing by zero. I would imagine, not that morality led me astray, but that my incomplete knowledge of morality led me not to spot this error. I would show the proof to as many moral philosophers as I could, ones whose intelligence and expertise in the field I respected, and held to be above my own, and who were initially as unenthusiastic as I am at the prospect of infanticide. I would ask them if they could point me to an error in the proof, and explain to me clearly and fully why this step, which had seemed so simple to me, is not a legal move in the dance at that point. If they could not explain this to me to my satisfaction, I would devote much of my time from then on to the study of morality so that I could better understand it, and until I could, would distrust any moral conclusions I came to on my own. If none of them could find an error, I would still assign high probability to the notion that somewhere in the proof is an error which we humans have not advanced sufficiently in the study of metamorality to discover. I would consider it one of the most important outstanding problems in the field, and would, again, distrust any major moral decisions which did not clearly add up to normality until it was solved.
Just as the mathematical “proof” that 2=1 would, if accepted, destroy the foundations of mathematics itself, and must therefore be doubted until we can discover its error, so your proof that killing babies is good, would, if accepted, destroy the foundations of my morality, and so I must doubt it until I can find an error.
I am well aware that a fundamentalist could take my previous paragraph, replace “killing babies” with “oral sex” and thus make his prudery unassailable by argument. So much the worse for him, I say. If he considers the prohibition of a mutually beneficial and joyful act to be at the foundation of his morality, then he is a miserable creature and all my rationality will not save him from himself.
I have tried indirectly to answer your question. To answer it directly I will have to resort to what seems a paradox. I would not do “the right thing to do” if I know, at bottom, that it simply is not the right thing to do.
If you circled “yes”, how right would it have to be, for how many babies? N/A
So, would I get the job?
I would show the proof to as many moral philosophers as I could
Boy, I sure wouldn’t. Ever read Cherniak’s “The Riddle of the Universe and Its Solution”?
I am well aware that a fundamentalist could take my previous paragraph, replace “killing babies” with “oral sex” and thus make his prudery unassailable by argument. So much the worse for him, I say.
I sympathize, but I don’t think that really solves the dilemma.
Post what you want to post most. The advice that you should go against your own instincts and pander is bad, in my opinion. The only things you should force yourself to do are: (1) try to post something every day, and (2) try to edit and delete comments as little as possible. I believe the result will be an excellent and authentic blog with the types of readers you want most (and that are most useful to you).
Eliezer,
I think there is pretty overwhelming evidence that moral philosophers are almost never moved to do anything nearly so onerous and dangerous as killing babies by their moral views. See Unger, Singer, Parfit, etc.
That title confused me. I expected an article on how, when debating, it was better to leave the opponent a line of retreat so that they would not feel dialectically cornered and start panicking. Of course, along that line of retreat, your arguments would be waiting for them. Socrates apparently was a true master of this little dance. This is especially useful if you have a lot of time and you are trying to actually change the way your opponent thinks, rather than changing that of an audience.
I am pretty sure that is what the term “leaving a line of retreat” in the context of an argument or disagreement should be used to refer to.
The meaning being proposed in this post is counter-intuitive. I classify it as being undesirable terminology.
Great post!
I think the greatest test of self-honesty (maybe it ties with honestly imagining the world you wish weren’t real) would be admitting to yourself that the world looks an awful lot like the hypothetical world you just vividly imagined. I think if anyone who believes in god or homeopathy or what-have-you honestly imagined what the world would look like if their belief were wrong, and they had enough courage, they’d admit to themselves that the world looks a lot like that already.
You really should write a book. Seriously. I could probably raise the possibility of teaching Rationality as a first-year course (as a follow-up to Logic) instead of useless “password” classes like the ones I’ve received at my college. Having a book I could wave around to convince people that maybe being rational is important when you’re a scientist would help a lot. At least I’d start printing and distributing it.
You could also just put the primary sequences of this website into a (e)book format, and release it. You might reach a wider audience that way, which would of course be Winning.
The trouble with the sequences is that each was written in the course of a day, and most were unrevised since then. They’re obviously rich and interesting, but far from publishable material. The sequences meet every standard you could want for being insightful, but they fall far short of most standards of factual accuracy, organization, contact with contemporary discussions, etc.
There’s a couple of ebook versions of the Sequences floating around. I believe an official release is still in the works, but links to several unofficial ones may be found here.
A serious book on Rationality has been in the works for some time.
And again you manage to condense a wise life lesson to two sentences. I should really write them down.
“How many religious people would retain their belief in God, if they could accurately visualize that hypothetical world in which there was no God and they themselves have become atheists?”
More than a few. For example, if you are a Muslim in some places, accurately visualizing the world where you become an atheist means visualizing a world in which you get killed for apostasy.
I don’t think that’s quite it. For many, the world where there is no God is like the world where you have no parents.