Something to Protect
In the gestalt of (ahem) Japanese fiction, one finds this oft-repeated motif: Power comes from having something to protect.
I’m not just talking about superheroes that power up when a friend is threatened, the way it works in Western fiction. In the Japanese version it runs deeper than that.
In the X saga it’s explicitly stated that each of the good guys draws their power from having someone—one person—who they want to protect. Who? That question is part of X’s plot—the “most precious person” isn’t always who we think. But if that person is killed, or hurt in the wrong way, the protector loses their power—not so much from magical backlash, as from simple despair. This isn’t something that happens once per week per good guy, the way it would work in a Western comic. It’s equivalent to being Killed Off For Real—taken off the game board.
The way it works in Western superhero comics is that the good guy gets bitten by a radioactive spider; and then he needs something to do with his powers, to keep him busy, so he decides to fight crime. And then Western superheroes are always whining about how much time their superhero duties take up, and how they’d rather be ordinary mortals so they could go fishing or something.
Similarly, in Western real life, unhappy people are told that they need a “purpose in life”, so they should pick out an altruistic cause that goes well with their personality, like picking out nice living-room drapes, and this will brighten up their days by adding some color, like nice living-room drapes. You should be careful not to pick something too expensive, though.
In Western comics, the magic comes first, then the purpose: Acquire amazing powers, decide to protect the innocent. In Japanese fiction, often, it works the other way around.
Of course I’m not saying all this to generalize from fictional evidence. But I want to convey a concept whose deceptively close Western analogue is not what I mean.
I have touched before on the idea that a rationalist must have something they value more than “rationality”: The Art must have a purpose other than itself, or it collapses into infinite recursion. But do not mistake me, and think I am advocating that rationalists should pick out a nice altruistic cause, by way of having something to do, because rationality isn’t all that important by itself. No. I am asking: Where do rationalists come from? How do we acquire our powers?
It is written in the Twelve Virtues of Rationality:
How can you improve your conception of rationality? Not by saying to yourself, “It is my duty to be rational.” By this you only enshrine your mistaken conception. Perhaps your conception of rationality is that it is rational to believe the words of the Great Teacher, and the Great Teacher says, “The sky is green,” and you look up at the sky and see blue. If you think: “It may look like the sky is blue, but rationality is to believe the words of the Great Teacher,” you lose a chance to discover your mistake.
Historically speaking, the way humanity finally left the trap of authority and began paying attention to, y’know, the actual sky, was that beliefs based on experiment turned out to be much more useful than beliefs based on authority. Curiosity has been around since the dawn of humanity, but the problem is that spinning campfire tales works just as well for satisfying curiosity.
Historically speaking, science won because it displayed greater raw strength in the form of technology, not because science sounded more reasonable. To this very day, magic and scripture still sound more reasonable to untrained ears than science. That is why there is continuous social tension between the belief systems. If science not only worked better than magic, but also sounded more intuitively reasonable, it would have won entirely by now.
Now there are those who say: “How dare you suggest that anything should be valued more than Truth? Must not a rationalist love Truth more than mere usefulness?”
Forget for a moment what would have happened historically to someone like that—that people in pretty much that frame of mind defended the Bible because they loved Truth more than mere accuracy. Propositional morality is a glorious thing, but it has too many degrees of freedom.
No, the real point is that a rationalist’s love affair with the Truth is, well, just more complicated as an emotional relationship.
One doesn’t become an adept rationalist without caring about the truth, both as a purely moral desideratum and as something that’s fun to have. I doubt there are many master composers who hate music.
But part of what I like about rationality is the discipline imposed by requiring beliefs to yield predictions, which ends up taking us much closer to the truth than if we sat in the living room obsessing about Truth all day. I like the complexity of simultaneously having to love True-seeming ideas, and also being ready to drop them out the window at a moment’s notice. I even like the glorious aesthetic purity of declaring that I value mere usefulness above aesthetics. That is almost a contradiction, but not quite; and that has an aesthetic quality as well, a delicious humor.
And of course, no matter how much you profess your love of mere usefulness, you should never actually end up deliberately believing a useful false statement.
So don’t oversimplify the relationship between loving truth and loving usefulness. It’s not one or the other. It’s complicated, which is not necessarily a defect in the moral aesthetics of single events.
But morality and aesthetics alone, believing that one ought to be “rational” or that certain ways of thinking are “beautiful”, will not lead you to the center of the Way. It wouldn’t have gotten humanity out of the authority-hole.
In Circular Altruism, I discussed this dilemma: Which of these options would you prefer:
Save 400 lives, with certainty
Save 500 lives, 90% probability; save no lives, 10% probability.
You may be tempted to grandstand, saying, “How dare you gamble with people’s lives?” Even if you, yourself, are one of the 500—but you don’t know which one—you may still be tempted to rely on the comforting feeling of certainty, because our own lives are often worth less to us than a good intuition.
But if your precious daughter is one of the 500, and you don’t know which one, then, perhaps, you may feel more impelled to shut up and multiply—to notice that you have an 80% chance of saving her in the first case, and a 90% chance of saving her in the second.
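(For concreteness, here is a minimal sketch of that arithmetic in Python; the 400, 500, and 90% figures come from the dilemma above, and the helper function is just illustrative bookkeeping, not anything from the original argument.)

```python
# Probability that one particular person in the crowd of 500 survives,
# plus the expected number of lives saved, under each option.

def p_specific_person_saved(num_saved, total=500, p_success=1.0):
    # If the plan works with probability p_success and saves num_saved of the
    # total people (chosen without regard to who your daughter is), this is
    # the chance that she is among those saved.
    return p_success * (num_saved / total)

p_option_1 = p_specific_person_saved(400)                 # 0.80
p_option_2 = p_specific_person_saved(500, p_success=0.9)  # 0.90

expected_lives_1 = 1.0 * 400   # 400 expected lives saved
expected_lives_2 = 0.9 * 500   # 450 expected lives saved

print(p_option_1, p_option_2)              # 0.8 0.9
print(expected_lives_1, expected_lives_2)  # 400.0 450.0
```

The same multiplication, viewed from the parent’s perspective or the altruist’s, favors the second option.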
And yes, everyone in that crowd is someone’s son or daughter. Which, in turn, suggests that we should pick the second option as altruists, as well as concerned parents.
My point is not to suggest that one person’s life is more valuable than 499 people. What I am trying to say is that more than your own life has to be at stake, before a person becomes desperate enough to resort to math.
What if you believe that it is “rational” to choose the certainty of option 1? Lots of people think that “rationality” is about choosing only methods that are certain to work, and rejecting all uncertainty. But, hopefully, you care more about your daughter’s life than about “rationality”.
Will pride in your own virtue as a rationalist save you? Not if you believe that it is virtuous to choose certainty. You will only be able to learn something about rationality if your daughter’s life matters more to you than your pride as a rationalist.
You may even learn something about rationality from the experience, if you are already far enough grown in your Art to say, “I must have had the wrong conception of rationality,” and not, “Look at how rationality gave me the wrong answer!”
(The essential difficulty in becoming a master rationalist is that you need quite a bit of rationality to bootstrap the learning process.)
Is your belief that you ought to be rational, more important than your life? Because, as I’ve previously observed, risking your life isn’t comparatively all that scary. Being the lone voice of dissent in the crowd and having everyone look at you funny is much scarier than a mere threat to your life, according to the revealed preferences of teenagers who drink at parties and then drive home. It will take something terribly important to make you willing to leave the pack. A threat to your life won’t be enough.
Is your will to rationality stronger than your pride? Can it be, if your will to rationality stems from your pride in your self-image as a rationalist? It’s helpful—very helpful—to have a self-image which says that you are the sort of person who confronts harsh truth. It’s helpful to have too much self-respect to knowingly lie to yourself or refuse to face evidence. But there may come a time when you have to admit that you’ve been doing rationality all wrong. Then your pride, your self-image as a rationalist, may make that too hard to face.
If you’ve prided yourself on believing what the Great Teacher says—even when it seems harsh, even when you’d rather not—that may make it all the more bitter a pill to swallow, to admit that the Great Teacher is a fraud, and all your noble self-sacrifice was for naught.
Where do you get the will to keep moving forward?
When I look back at my own personal journey toward rationality—not just humanity’s historical journey—well, I grew up believing very strongly that I ought to be rational. This made me an above-average Traditional Rationalist a la Feynman and Heinlein, and nothing more. It did not drive me to go beyond the teachings I had received. I only began to grow further as a rationalist once I had something terribly important that I needed to do. Something more important than my pride as a rationalist, never mind my life.
Only when you become more wedded to success than to any of your beloved techniques of rationality, do you begin to appreciate these words of Miyamoto Musashi:
“You can win with a long weapon, and yet you can also win with a short weapon. In short, the Way of the Ichi school is the spirit of winning, whatever the weapon and whatever its size.”
—Miyamoto Musashi, The Book of Five Rings
Don’t mistake this for a specific teaching of rationality. It describes how you learn the Way, beginning with a desperate need to succeed. No one masters the Way until more than their life is at stake. More than their comfort, more even than their pride.
You can’t just pick out a Cause like that because you feel you need a hobby. Go looking for a “good cause”, and your mind will just fill in a standard cliche. Learn how to multiply, and perhaps you will recognize a drastically important cause when you see one.
But if you have a cause like that, it is right and proper to wield your rationality in its service.
To strictly subordinate the aesthetics of rationality to a higher cause, is part of the aesthetic of rationality. You should pay attention to that aesthetic: You will never master rationality well enough to win with any weapon, if you do not appreciate the beauty for its own sake.
What was it? AI?
I get an uncomfortable feeling, Eliezer, that this work is to ultimately lead to a mechanism to attract:
people of libertarian bent
people interested in practically unbounded longevity of consistent, continual consciousness
and also lead to a mechanism to tar people disinclined to those two goals; tar them with the label “sentimentally irrational”.
Rationality to me is simply a tool. I would have absolutely no confidence in it without the ongoing experiences of applying it iteratively, successfully to specific goals.
I haven’t yet needed to “deliberately believe a useful false statement” (to my knowledge), but I wouldn’t be particularly disturbed if I tried to, and found it repeatedly successful. Another tool for my tool belt.
Right now I am having some success with modeling the world over the conditions I care about with:
scientific laws (including information theory)
mathematics
groups of causality graphs, for the same phenomena, in competition
specific causality graphs
naive Bayesian
straightforward use of Bayes’ theorem
frequentist probability and statistics
discrete probability
logic
(causality graphs considered can include relations defined by simulation, and all other tools listed. Whatever it is, shove it into a causality graph. I haven’t found it useful to restrict the use of anything in a causality graph, particularly if they are forced to compete over the ability to be consistent with past data and predict future results.)
(The list above is somewhat ordered from more applicable to specific situations to less applicable to specific situations. I attach the lowest confidence to any specific causality graph, more confidence to the graphs in aggregate in competition. I attach more confidence to frequentist analysis over good data than to Bayesian, but Bayesian is applicable in more circumstances.)
I have to deal with finite resource allocation in a manufacturing plant. Where else to use these tools? Possibly the opportunity from celebrating the differences in all the people working in the plant.
I am often confused by your writing, because I don’t see where you have “skin in the game”. Where are you exercising your tools of rationality?
Is it all just to make the world slightly more hospitable to libertarians interested in life extension? (No negative judgment if that is the case.)
(Sorry to beg your indulgence of a long post)
The success of science was and is because it is useful, and similarly for rationalism. But one of the critiques of rationalism and of the overcoming-bias program is that it is sometimes counterproductive. The unbiased tend to be unhappy and/or insane. If someone’s goals are to be happy and successful in life, he does best not to be fully rational. Irrationality is the most useful policy if these are your goals.
Your argument suggests that this is true only because this is setting the goalposts too low. For someone who merely seeks happiness, yes, irrationality is in order. But if someone’s goals are much higher—if lives are at stake, perhaps even the lives of all humanity, then irrationality no longer works best. In that case, he must follow a path of strict rationality as closely as possible, because the stakes are so high.
However it could be argued that this is not always the case, that high stakes may nevertheless require a degree of irrationality. Rationality is useful for getting at the truth; but irrationality may be useful in persuading and motivating others to help. Successful leaders are notoriously irrational, and if your project is big enough, leadership will be a necessary ingredient for success.
Perhaps a solution is to split one’s efforts into two pieces: a rational part, which ruthlessly seeks the truth regardless of inhibitions and discomfort; and an irrational part, which takes the core results from rational analysis, dresses them up in attractive lies, and sells them enthusiastically to the larger world. In fact I would suggest that many successful enterprises have been built on a partnership with this structure: the creative genius who works behind the scenes, and the leader who is the public face of the endeavor and who excels at presentation. You might consider a similar arrangement for your own project.
I rarely post, only read in hopes of learning. Today, I comment: I appreciate the beauty of this post.
Thank you, Eliezer.
I am often confused by your writing, because I don’t see where you have “skin in the game”. Where are you exercising your tools of rationality?
If I’d gone ahead and said that within the post, it would’ve transformed a piece on rationality into overt propaganda, destroying its internal aesthetics. Read my website.
What a terrible idea… then whenever rationality comes in conflict with that thing, rationality will be discarded.
We already see lots and lots of this behavior. It’s the human norm, in fact: use rationality as a tool as long as it doesn’t threaten X, then discard it when it becomes incompatible with X.
Perhaps I am one of the “sentimentally irrational,” but I would pick the 400 certain lives saved if it were a one-time choice, and the 500 @ 90% if it were an iterated choice I had to make over, and over again. In the long run, probabilities would take hold, and many more people would be saved. But for a single instance of an event never to be repeated? I’d save the 400 for certain.
Your 80% and 90% figures don’t really add up either. You don’t describe how many people in total will die, regardless of your decision. If the max death number possible from this catastrophe is 500, then your point is valid. But what if it were 100 million, or even better, all of humanity? Now, the difference in chance of saving your loved one via either strategy is vanishingly small, and you are left with a 90% chance or a 100% chance of saving humanity as a whole. It’s exactly the same as the situation you describe above, but it seems the moral math reverses itself. You need to more fully specify your hypothetical situations if you wish to make a convincing point.
Caledonian, I think you’re misreading him. He’s not saying: the cause is the one thing you never think rationally about. He’s saying: the cause is good (rationally good) and to protect/preserve it you have to pull yourself into conformance with the real world, because that’s where the action is. To achieve that you have to hold up what you (perhaps mistakenly) think of as “reason” against the real world, and be prepared to re-evaluate if it doesn’t work. What your re-evaluation seeks is better techniques of reason—not to throw reason away.
I think you’re misreading him, substituting a reasonable argument for the rather bizarre things he says.
Rationality by its nature cannot be only a means towards an end.
“Rationality by its nature cannot be only a means towards an end.”
Rationality is conformance to reality. You can conform to reality for a cause. (You’re saying, you can’t mold reality to your cause—I agree, but that’s not what he was meaning.) He was meaning that people have thought themselves rational when applying formal, skillful, pedigreed academic techniques that DON’T WORK, such as Jesuit style casuistry. So you have to hold the technique up against reality. You won’t do that if you put the technique first by saying “I serve reason”, because that morphs in your mind into “I serve Jesuit casuistry” or whatever. It blithely assumes your all-too-human technology of achieving reason works—and it might not.
Julian, Caledonian is a well-known troll on OB. We’ve decided against censorship for now, but you might not want to waste too much time. I generally don’t respond to Caledonian unless I see someone else agreeing with him.
I totally agree with “Anon”, and others who made similar points in the Circular Altruism post. Context matters! Is it a one-time choice, or an iterated choice? Is there an upper limit to the number of deaths, or no limit? Are the 500 the number of people on the sinking ship/last people on planet earth, or possible victims from a much larger pool? You can only do the math and make a rational decision when you have ALL the numbers from the relevant context.
The first steps of rationality lie not in separating problems from their context, but in determining what context is relevant.
I agree with this point, but that’s exactly what Eliezer stated he was in favor of: serving something else and merely using rationality as a means toward that end while it’s convenient to do so.
It doesn’t do any good to avoid making an implicit error by explicitly making that error instead. Certainly we need to compare our thinking to a fundamental basis, but the goal we’re seeking can’t be that basis. Rationality is about always checking our thinking against reality directly, and using that to evaluate not only our methods of reaching our goals but the nature of our goals themselves.
If you adopt rationality merely because you want to use it to attain your ends, what happens if you discover that your ends aren’t compatible with it? (And if that’s really what you’re doing, how did you know to adopt rationality in the first place? Just keep trying random stuff until you happen to stumble into the correct meta-strategy by chance? I think rationality has to be the starting point, not something picked up along the way.)
I don’t have anything desperately important to me, and you say I’m not allowed to just pick something. Given this, what am I supposed to do, to become more rational? Am I just doomed? I really desperately want to believe true things and not false things, but you say that’s not good enough.
You’re not doomed; you may just not be terribly motivated.
Decreasing existential risk isn’t incredibly important to you? Could you explain why?
Explore the world. Meet people, read books, find blogs like this one. Hopefully something will inspire you.
Good question, Nominull. Unfortunately I lack the ability to answer your question from personal experience. Mine just fell into my lap.
But is believing true things what you most desperately want, in all the world?
Caledonian, I gather Eliezer put “rationality” in quotes because people may believe they are committed to rationality when in fact they are not. If they have a goal which is contingent on rationality, that will help keep them from straying from the path.
What he said immediately after the part you mention was: “The Art must have a purpose other than itself, or it collapses into infinite recursion”
He wasn’t talking about pseudo-rationality. When he talks about “The Art”, he’s talking about rationality.
And he’s wrong: truth points to itself.
Anon: do you suggest that others follow your policy as well? Then when many people have individually made isolated choices like that, far fewer lives will have been saved. And in the whole history of the world, choices like that must have been made many times. Why does it matter whether it is you who are repeating the choice or other people?
The question about whether the 500 are the last people in the world is adding other utilities into the issue, such as preserving the human race, and so on. In that case you have a different comparison; naturally, you may have to consider other factors besides the utility of the lives. But as long as you consider only the lives, Eliezer is right.
Caledonian: “I think rationality has to be the starting point.”
Can you expand on this? A rationalistic moral relativist might say that actions require goals, ultimate goals are arbitrary, and so rationality cannot be the starting point there. In the real world, by the time one is able to entertain ideas like ‘choosing to be more rational’, you’re already going to have goals, preferences, ideas about how you should live your life. So it could be countered that ‘rationality’ never has to supply everything; its purpose will largely be to critique existing purposes, order them by significance, or evaluate new possibilities. Say something more about what you think the role of rationality should be in developing a morality, and about the particular powers it has to fulfil that role.
Anon, Wendy:
Certainly finding out all of the facts that you can is good. But rationality has to work no matter how many facts you have. If the only thing you know is that you have two options:
Save 400 lives, with certainty
Save 500 lives, 90% probability; save no lives, 10% probability.
then you should take option 2. Yes, more information might change your choice. Obviously. And not interesting. The point is that given this information, rationality picks choice 2.
Save 400 lives, with certainty
Save 500 lives, 90% probability; save no lives, 10% probability.
i.e.
Save 4 lives, with certainty
Save 5 billion lives, 0.00000009% probability; save no lives, 99.99999991% probability.
Any takers for #2? I seem to remember Ben Jones saying he would choose #1 in a case similar to the second case.
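(Worked out explicitly, as a small sketch; the figures are just the ones quoted above, with 0.00000009% written as 9e-10.)

```python
# Expected lives saved under each option, in the original case and the
# rescaled case above.

cases = {
    "original": ((400, 1.0), (500, 0.9)),
    "rescaled": ((4, 1.0), (5_000_000_000, 9e-10)),  # 0.00000009% = 9e-10
}

for name, ((n1, p1), (n2, p2)) in cases.items():
    # round() only guards against floating-point noise in the display
    print(name, round(n1 * p1, 3), round(n2 * p2, 3))
# original 400.0 450.0
# rescaled 4.0 4.5
```

The expected-lives criterion favors option 2 in both the original and the rescaled case.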
Formerly, I think I would have chosen #2 in the first case and #1 in the second. But Eliezer has converted me. Now I choose #2 in both cases. But would he do that himself? Consider:
“Perhaps I am one of the ‘sentimentally irrational,’ but I would pick the 400 certain lives saved if it were a one-time choice, and the 500 @ 90% if it were an iterated choice I had to make over, and over again. In the long run, probabilities would take hold, and many more people would be saved. But for a single instance of an event never to be repeated? I’d save the 400 for certain.” (Anon, above)
“If the probabilities of various scenarios considered did not exactly cancel out, the AI’s action in the case of Pascal’s Mugging would be overwhelmingly dominated by whatever tiny differentials existed in the various tiny probabilities under which 3^^^^3 units of expected utility were actually at stake.
You or I would probably wave off the whole matter with a laugh, planning according to the dominant mainline probability: Pascal’s Mugger is just a philosopher out for a fast buck.
But a silicon chip does not look over the code fed to it, assess it for reasonableness, and correct it if not. An AI is not given its code like a human servant given instructions. An AI is its code. What if a philosopher tries Pascal’s Mugging on the AI for a joke, and the tiny probabilities of 3^^^^3 lives being at stake, override everything else in the AI’s calculations? What is the mere Earth at stake, compared to a tiny probability of 3^^^^3 lives?
How do I know to be worried by this line of reasoning? How do I know to rationalize reasons a Bayesian shouldn’t work that way?” (Eliezer Yudkowsky, Pascal’s Mugging)
Who sees the similarity? Eliezer no doubt thinks that Anon is biased toward certainty, but so is he: he simply has less of the bias.
So I hereby retract my argument against voting, Pascal’s Mugging, and Pascal’s Wager. In the particular Mugging we discussed, there may have been anthropic reasons to make it proportionally improbable. But without such reasons, it should be accepted.
It’s not a matter of bias toward certainty; accepting Pascal’s Mugger’s terms can be conclusively demonstrated to be a losing strategy. Remember, the purpose is to win. That would imply that “rationality” that complies with the Mugger is not rational after all, which means rethinking the whole thing.
Having said that, I haven’t been able to formulate a response to Pascal’s Mugging myself, so I might be wrong-
...Except that in the process of writing this right now, I think I might have! I need to think this a little further.
“It takes visceral panic, channeled through cold calculation, to cut away all the distractions.”—this just made it to my quotes file.
If I understand Eliezer’s point correctly in terms of the map/territory analogy, what he says is that having somewhere to go and actually needing to put your map to use will motivate you to make that map as accurate as possible, if you care about your destination more than you ‘believe in’ the current iteration of your map and/or the techniques used to derive it.
Lots of things act without having any sort of goals. Does fire have a goal of reducing high-energy compounds into oxidized components and free energy? No, but it does it anyway.
You can limit ‘action’ to intentional events only, I suppose.
However, how does declaring that goals are arbitrary rule out assertions about necessary starting points?
If the goals already developed are incompatible with each other, rationality isn’t going to help much. If they’re incompatible with rationality, it really isn’t going to help. But no helping is possible.
Rationality is required to form a coherent model (however incomplete or imperfect) of the world. To take an action with the intention of bringing about a specific result requires a coherent model. Ergo...
An incoherent actor can’t be said to have any goals at all.
Formerly, I think I would have chosen #2 in the first case and #1 in the second. But Eliezer has converted me. Now I choose #2 in both cases. But would he do that himself?
Isn’t that implicitly what he does for a living? Eliezer could become a firefighter or emergency medical technician, or work for clean drinking water in rural Africa, with a near-certainty of preventing several deaths in the next year. Meanwhile, there is a very small chance of someone creating a non-Friendly AI in the next year. We can argue about the probabilities (of the problem arising, of successfully preventing it), but Eliezer has already chosen the existential threat.
“So I hereby retract my argument against voting, Pascal’s Mugging, and Pascal’s Wager. In the particular Mugging we discussed, there may have been anthropic reasons to make it proportionally improbable. But without such reasons, it should be accepted.”
I’m certainly glad you think so, Unknown, because I was just contacted by the Dark Lords of the Matrix. It turns out that we are living in a simulation. I have no idea what the physics of the world outside are like, but they’re claiming that unless you personally send $100 to SIAI right now, they’re going to put one dust speck in the eye of each of BusyBeaver(BusyBeaver(BusyBeaver(3^^^^^^^^^^^^^^^^^^^3))!!)! people.
Get out your checkbook, quickly, before it’s too late!
(same anon from above who asked about the context of the 400/500 problem being an issue)
In response to GreedyAlgorithm who said:
Certainly finding out all of the facts that you can is good. But rationality has to work no matter how many facts you have. If the only thing you know is that you have two options:
Save 400 lives, with certainty
Save 500 lives, 90% probability; save no lives, 10% probability.
then you should take option 2. Yes, more information might change your choice. Obviously. And not interesting. The point is that given this information, rationality picks choice 2.
While I agree with your constrained view of the problem and its analysis, you are trying to have your cake and eat it too. In such a freed-from-context view, this is (to use your own words) “not interesting”. It’s like asserting that “4.5 is greater than 4” and that since we wish to pick the greater number, the rationalist picks 4.5. True as far as it goes, but trivial and of no consequence.
Eliezer brought in the idea of something more valuable than your own life, say that of your child. By stepping outside the cold, hard calculus of mere arithmetic comparisons he made a good point (we are still discussing it), but he opened the door for me to do the same. I see your child, and raise you “all of humanity”.
Either we are discussing a tautological, uninteresting, degenerate case which reduces down to “4.5 is greater than 4, so to be rational you should always pick 4.5” (which, I agree with, but is rather pointless) or we are discussing the more interesting question of the intersection between morality and rationality. In that case, I assert bringing “extra” conditions into the problem matters very much.
If “rationality has to work no matter how many facts you have” [Greedy’s words] (which I agree with) then you must grant me that it should provide consistent results. To make the problem “interesting” Eliezer brought in the “extra” personal stake of a loved family member, and came to his rationalist conclusion, pointing out why you’d want to “take a chance” given that you don’t know if your daughter is in the certain group or might be saved as one of the “chance” group. I merely followed his example. His daughter may still be in the certain group or not (same situation) but I’ve just added everyone else’s daughter into the pot. I don’t see how these are fundamentally different cases, so rationality should produce the same answer, no?
Z. M. Davis, given the existence of that many people, and given that threat, the probability that I personally would be the one threatened must be multiplied by one over the number of people, since it could have been anyone else. So the expected disutility from your mugging is one dust speck multiplied by the probability that the Matrix scenario is actually true. This probability is very low, and even if it were unity, the disutility of one dust speck isn’t going to get me to pay $100.
So again, I said “without such reasons, it should be accepted.” But I have the reasons in this case.
Zubon, of course, Eliezer might well take my second gamble. I was only speaking in preparation for what came ahead: Anon doesn’t want to gamble with human lives if there’s a small chance of failure. Eliezer may be willing to do this, but once the chance of success becomes extremely small (much smaller than 1 in a billion), as in the Mugging case, he ignores the expected utility, thus falling into exactly the same sort of irrationality as Anon. In relation to this, it is significant that he admitted that he would reject the Mugging even if he had no reason to think that the expected utility of rejecting it was greater than the expected utility of accepting it.
Or in other words: expected utility must equal utility times probability, no matter how small the probability is.
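(To make that anthropic-discount arithmetic concrete, here is a rough sketch; N below is an arbitrary stand-in for the mugger’s enormous population, and the prior and disutility values are made up purely for illustration.)

```python
# Rough sketch of the anthropic discount: the 1/N chance of being the one
# person whose decision matters cancels the N people at stake.

N = 10**18            # stand-in for the absurdly large number of people claimed
p_scenario = 1e-9     # illustrative prior that the Matrix story is true at all
speck = 1.0           # disutility of a single dust speck, in arbitrary units

naive = p_scenario * N * speck                 # taken at face value: dominates everything
discounted = p_scenario * (1 / N) * N * speck  # with the anthropic factor: back to the prior

print(f"{naive:.3g} {discounted:.3g}")  # 1e+09 1e-09
```

Whatever values you plug in for N, the N in the stakes and the 1/N in the probability cancel, leaving roughly one dust speck times the prior.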
It’s probably just that I’m stupid, but I don’t understand the anthropic solution to Pascal’s Mugging. Why does it matter that other people could have been asked? What if it were stipulated that the mugger threatens everyone?
Maybe I should actually study Kolmogorov complexity before trying to grapple with such matters.
Re: the dilemma posed in Circular Altruism, what should we do? When forced to “Shut up and multiply”, we have forgone our intuitions and picked the choice based upon our mathematics. However, we are not just overcoming our own intuitions, but also the intuitions of everyone who does not simply “Shut up and multiply”. We are held accountable not by those who know the math, but by those who have intuitions like our own.
If we save everyone, we are heroes. If we do not, we are held accountable not for the math, but for the very intuition we had to overcome multiplied by as many people who are now trying to hold us accountable.
So at the very least, we should add ourselves as the 501st person who might die in the second case. The price we pay for the burden of rationality!
RS, if that really bothers you, you haven’t found your something to protect yet.
So, is your point that we need a cause against which to evaluate the success of our mathematics? That perhaps this sort of feedback that, presumably, you encounter on a daily basis, is something that does not come through rationality itself, but through the very real feedback of what you have chosen to protect?
I guess my previous post was a reflection that I am just a budding rationalist, and also that my skills have not been sharpened against the proper stone.
So, is your point that we need a cause against which to evaluate the success of our mathematics? That perhaps this sort of feedback that, presumably, you encounter on a daily basis
I’m not going to get feedback on my final success or failure for, oh, probably at least another 10 years.
My point, rather, was that your post illustrated very clearly why rationality comes from having something to protect—you thought of doing something rational, but worried about the other people whose intuitions differed from yours, and what they might think of you. So that worry is a force binding you to the old way of thinking.
But if the thing you were protecting was far more important than what anyone thought of you, that wouldn’t slow you down. This isn’t about iconoclasm—it’s about an inertial drag exerted by all the little fears and worries, an inertial drag of the way that you or others previously did things; the motivating force has to be more powerful than that, or you won’t move.
It’s been 13 years, what’s the feedback?
I was going to post an issue I had with this article, personally.
What is most important to me is my intent to live for a very, very long time. Assuming I do better on average, I will end at a very high place! But how can living forever be more important to me than my own life? It obviously can’t. But I think I see it now; it’s more important to me than anything else.
Who cares what anyone thinks of my desire? I’ll do whatever it takes, and I don’t mean I’ll give it a shot!
Second in importance to me is giving everyone else a chance to live a long time as well. I can’t say that this is more important to me than my own life, but it coincides with the first one anyway.
“The point is that given this information, rationality picks choice 2.”—Posted by: GreedyAlgorithm
Sorry, no. Given this information, rationality says that there is not enough information to make an appropriate decision, and demands to know the context. If contextual information isn’t available, rationality will say that either option 1 or 2 may be right, depending on circumstances.
Rationality never dismisses context as irrelevant just because it isn’t known. If unknown factors make the right answer uncertain, then you must accept that it is uncertain.
Context can change what you’re trying to achieve. Many people seem to assume that the point (re Circular Altruism problems) is to save as many lives as possible, but this might have to be balanced with other goals—e.g. setting a limit to acceptable risk (as in not risking destruction of the entire human population, whatever their number), or spreading risk instead of marking certain people for death (as in putting the last few people from a sinking ship in the lifeboat, not leaving them behind to make a crowded lifeboat safer).
Making assumptions is one of the dangerous pitfalls for rational thinkers. So is a reluctance to say “I don’t know the answer” when appropriate.
For some reason this post reminds me of the Buddhist parable “asceticism now, nymphs later”.
I don’t think it’s all that uncommon to begin cultivating an art for some specific purpose, proceed to cultivate it largely for its own sake, and eventually to abandon the original purpose.
Under Multiple Worlds, aren’t you condemned, whatever you do or don’t do, to there being a number tending to infinity of worlds where what you want to protect is protected, and a number tending to infinity where it is not?
Caledonian: Let’s distinguish between the aesthetics of rationality and the pragmatics of rationality. Is my model of the world consistent, do my goals make sense—that’s pragmatics. Aesthetics is by comparison nebulous and subtle, but perhaps it encompasses both admiration for the lawlike nature of reality and self-admiration for one’s own relationship to it. :-)
It seems to me that you are taking issue with the idea that the pragmatics of rationality should be trumped by a higher cause. This essay says nothing about that. It says, first, that it’s a psychological fact that people don’t adopt rationality as a conscious value until some other, already existing value is threatened by irrationality, and second, that you won’t keep developing as a rationalist without such pressure.
As for whether reason by itself can supply supreme values, I had to ask because so many people do think you can get an ought from an is. (I still don’t know what you meant by “truth points to itself”.)
You are not alone, Z. M. Davis: I disagree with Eliezer over whether Robin’s anthropic solution is a satisfactory solution to Pascal’s Mugging. (Eliezer repeated his endorsement of Robin’s anthropic solution here a few weeks ago.) Since I started reading Eliezer 6 years ago, this is the first time I can recall disagreeing with him on a question of fact. (As I have pointed out many times in the comments here, I disagree with him significantly on terminal values.) If anyone wants to reply to this, I humbly suggest doing so by clicking on my name below.
Extraordinary—I don’t believe I’ve ever heard anyone speak of the aesthetic aspects of rational thought before.
I’m not sure I agree with the concept, but it’s something to think about.
And when that value is threatened by the rationality? What then?
I suspect relatively few people have a deep desire for knowledge and understanding. They’re usually the only ones I see developing as rationalists at all. If you don’t have a need for the world to make sense, you tend to develop ad hoc methods for getting what you want. The need for systematic understanding is either present, or not.
For those saying they have nothing to protect or still need to find something to protect, remember that you are human and, unless you have no natural family or reproductive ties, you always have the people you love to protect. It may seem counterintuitive if you’ve bought into Hollywood rationality, but love is a powerful motivational force. If you think that, in theory, being more rational is good, but don’t see how you can effect greater rationality in your mind, consider the many benefits of your increased rationality (again, not Hollywood rationality, but rationality of the type Eliezer describes above).
In my case, I know I’m trying harder than ever to become a better person because of my wife. And when I do something that hurts her, my first thought is to figure out what is wrong with my thinking that led to this. My second is to find a better way to express my love, through increasing her happiness and enjoyment of life. And, realizing that the best thing I can do is shut up and multiply, I figure out how to change myself to be a better multiplier.
Excellent point by Worley. Since I have assumed the role on this blog of pointing out that happiness is not the meaning of life, let me hasten to add that happiness is a very useful barometer. Whether you are happier on average now than you were 10 years ago is for example probably a more reliable barometer of whether your life is on a better track than it was 10 years ago than change in financial net worth over those 10 years (though net worth is an important barometer too). And the one situation in which happiness is least likely to steer you wrong is when you use your wife’s happiness as a barometer for how good a job you are doing as a husband.
The object of the game of life is not just to become more rational but rather to become more rational, more ethical and more loving. “Being loving” is defined as helping those close to you to become more rational, ethical and loving. This is the way we maximize the ethics and the rationality of every intelligent agent in our reach, which, if it is not the purpose of life, is a sufficiently good approximation for most people. (Singularity scientists however will probably need a more sophisticated definition of the purpose of life.)
By “ethics” I mean simply the sincere desire to do good and to avoid doing evil. (I freely admit I do not have a formula or algorithm that allows a person to tell good from evil in any situation). I bring the concept of ethics into this little exposition because I want to suggest that it is unethical to increase the rationality of an unethical person. By doing so, you are increasing his capacity to do evil. That suggestion goes against the egalitarianism that is such a central part of our ethical culture: the conventional ethical wisdom is that every human is equally deserving of loving treatment. I want to suggest that that is wrong and that although the majority of us could stand to become much more loving, it is also true that we should direct our love as much as possible towards ethical people and away from unethical people.
I end with a warning. Rationality only becomes powerful when it is combined with knowledge. If you wish to be rational about physics or space travel, it is easy to find accurate knowledge to help you, but it is much more difficult to find accurate knowledge about how to become more loving: in that domain, the accurate knowledge is mixed with a much larger amount of false information. And as has been said here before, most psychologists are idiots.
Hollerith, if ‘most psychologists are idiots’, I wonder how they discovered all the cognitive biases?
He said ‘most’, not ‘all’. And just because someone is an idiot doesn’t mean everything they do is wrong. Even Freud managed to do some good descriptive work before descending into madness and delusion.
I mentioned psychologists in a particular context, namely, how to apply the skills of rationality to the project of nurturing and supporting your friends, lovers and family. Worley and I think rationality can be applied to that project. But I thought just leaving it at that would mislead some of the readers who have not had a lot of practical experience in life: unlike many of the other projects rationality is typically applied to, this project is different in that you cannot just travel to your nearest bookstore and by browsing the shelves expect to find accurate knowledge to help you in this project (again because the true information is mixed with a much larger amount of false and misleading information and it is impractical to decide which is which). This remains true even if your nearest bookstore is in an elite university and full of textbooks.
If Eliezer mentions a book or article in psychology in a positive light, that is strong evidence that that book or article is worth reading. In 2001 I took the advice on his web site and read Robert Wright’s Moral Animal and “The Psychological Foundations of Culture”, and I am extremely pleased with the outcome.
Caledonian: “I don’t believe I’ve ever heard anyone speak of the aesthetic aspects of rational thought before.”
It’s funny—the phrase “aesthetics of rationality” appears in the final paragraph of Eliezer’s post; apparently it’s what the whole thing was about. But I didn’t notice it either, until I was seriously casting about for some way to show that Caledonian person why their criticism was off the mark. I think Eliezer’s point may be something like this: the aesthetics of rationality are all that could truly make it an end in itself; this necessarily involves attachment to a particular notion of rationality; and this attachment will hinder genuine progress in rationality, which may require adoption of a different but superior notion of rationality.
Along the way, I think I belatedly noticed a subtext to your own first comment too—you think Eliezer, and other promethean transhumanists like him, are themselves examples of limited rationality, their goals or expectations being unrealistic and therefore irrational. I’ve seen you say as much here, but I hadn’t figured out that this was probably on your mind as you wrote your comment.
Anyway, I should get back to thinking about specks versus torture.
Aesthetics are rarely a topic when rationality is discussed. Mostly because they’re only relevant to ancient-Greek-style thought.
On the list of things likely to cause unreasonable attachment, it’s pretty far down. Love of familiarity, wanting to appear intelligent to others, wanting to appear intelligent to oneself, unwillingness to face conclusions that one finds unpalatable, general inflexibility… these are all plausible causes of failure. But aesthetics?
I think you’re pretty close to the core of this one. You identified that having something to protect gives you strength. And having a worthy cause to work for, for the same reason.
But what is that reason? What is it that gives you strength? What is the underlying cause of us gaining strength from certain causes?
I’m not certain I understand the topic well enough myself, but I think I have something that you might find insightful here.
Moral Idealism. That’s where your power comes from. Whether you’re fighting to protect a loved one, or you’re fighting to promote a worthy cause, you have the power to dedicate yourself with every fiber of your being because you believe your actions are righteous!
You see it all the time. When people are completely confident in the righteousness of their cause, they will put their all into it. You see it when someone protects a loved one, you see it when someone works for a worthy cause, and you see it particularly with religions; their unwavering faith in their belief instills them with a sense of righteousness.
I think your mention of people being more afraid of the crowd disagreeing with them than dying highlights a very dangerous philosophical flaw people hold today. They don’t believe that protecting their lives is a righteous cause!! Having grown up in an altruistic society, they’ve probably been hammered with the message that other people’s lives are more important than their own. So they lack the moral justification to protect themselves and they have a flawed moral premise that works to enslave them to the whim of the crowd.
You’re worried about people not having a good reason to be rational? Here’s the answer. Your own life must be your ultimate value. It must be an end in itself, and not the means to anyone else’s ends. You must judge value with your life as the standard of judgment. Don’t think in terms of good and evil, think in terms of good for you and bad for you. Not only are logic and reasoning tools to promote your life, you depend on them for survival. I can’t imagine any way to throw away reason and promote your life at the same time.
(If you can instill people with the power of moral idealism to promote their own lives, you might also have a higher turnout of people buying into cryogenics life insurance policies. :P)
Personally, I find aesthetic purity to be a very strong source of attachment for me. It’s certainly caused ‘unreasonable attachments’, like being stuck on being “right” and ascribing a purity to it (e.g. I am right about this and you are wrong, therefore I will absolutely refuse to do this small nitpicky thing and I don’t care if I jam up the whole process because it’s MORALLY WRONG not to do so. I am the lone voice of dissent!). Oh, school…
I came across the same hack, or coping trick. Just remap the definition of what you’re being pure about to “winning” or “rationality”.
Pretty sure I’m displaying that I missed the point somehow.
The proper choice between (1) certainly save 400 lives and (2) 90% probability of saving 500 lives with 10% probability of saving no lives, depends on your utility function, which depends on the circumstances. If your utility is proportional to the number of lives saved, then sure, go with (2).
On the other hand, suppose that some cataclysm has occurred, those 500 lives are all that remains of the human race, and extinction of the human race has such an extremely negative utility for you that all other considerations amount to rounding error in the utility function. Then, to a close approximation, you want the choice C that maximizes P(S | C), where S=”human race survives”.
We have
P(S | C=1) = P(S | N=400)
P(S | C=2) = 0.9 * P(S | N=500)
where N is the size of the current population. Therefore, you should choose (1) if
P(S | N=400) / P(S | N=500) > 0.9.
Holden made a similar point.
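For concreteness, a minimal Python sketch of that decision rule. The p_survival function is a made-up placeholder for P(S | N); only the comparison structure, and the 0.9 threshold it produces, comes from the comment above:

def p_survival(n):
    # Hypothetical placeholder: survival odds improve smoothly with the
    # size of the surviving population.
    return 1 - 0.5 ** (n / 100.0)

p_choice_1 = p_survival(400)        # save 400 with certainty
p_choice_2 = 0.9 * p_survival(500)  # 90% chance of saving 500, else none

# Equivalently: choose (1) iff p_survival(400) / p_survival(500) > 0.9.
print(p_choice_1, p_choice_2, "choose (1)" if p_choice_1 > p_choice_2 else "choose (2)")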
All right, I’d like to attempt a summary to make sure that I am understanding this post; if anyone sees some mistake in my interpretation, I’d appreciate it if they let me know.
Virtually everyone wants their beliefs to be true, which amounts to saying that practically everyone wants to be epistemically rational. Rationality is a rare trait, so obviously that desire is not enough to make you epistemically rational. But that desire, mixed with the rare desire to have all of your beliefs make useful predictions about whatever they talk about, is enough, provided that you never subordinate the mere predictive power of a belief to its truth. If you allow yourself to go on believing something because you thought it was true, even after you notice that some other belief makes reliably better predictions about the same target of inquiry, at that point you fail as a rationalist.
Is there anything I’m missing?
What if what I want isn’t true beliefs or to be rational, but to have the best method for finding truths in general, and I use the predictive power of a belief as the best guide to its truth value? In that case, if I find a method that works better than mine, i.e., leads to beliefs with higher predictive power more often than my old method, I’ll switch to that method. I don’t pride myself on having the best method, I pride myself on doing everything I can to find the best method. And, let’s say that finding the best method for finding truth in general is far more important to me than my own life. Is that enough ya think?
It seems a lot like trying to be rational for its own sake, and I know that EY says that that leads to an infinite recursion, but I don’t know why the person I described above must be using circular justification.
Please help if you can.
It’s logically possible but humans tend to want other things.
I authentically feel like that’s what I want. I can’t think of anything I enjoy more, or anything I wouldn’t give up for a few decades with the best algorithm for truth finding in general. Though my revealed preferences may end up saying something else.
Save 400 lives, with certainty.
Save 500 lives, 90% probability; save no lives, 10% probability.
I think it ought to be made explicit in the first scenario that 100 lives are being lost with certainty, because it’s not necessarily implied by the proposition. I know a lot of people inferred it, but the hypothetical situation never stated it was 400/500, so it could just as easily be 400/400, in which case choosing it would certainly be preferable to the second option. I think it’s important you make your hypothetical situations clear and unambiguous. Besides, a 100% probability of 100 deaths explicitly stated will influence the way people perceive the question. If you leave out the fact that 100 people are dying, you’re also subtly encouraging your readers to forget about those people, so it comes as little surprise that some would prefer option 1.
As MugaSofer said, it doesn’t need to be 400/500; it may be 400/1,000,000 vs (500/1,000,000 with 90% probability). The original question indicated: “Suppose that a disease, or a monster, or a war, or something, is killing people.”
Imagine that hundreds of thousand lives are getting lost.
How about the following rephrasing?
There’s a natural catastrophe (e.g. a tsunami) occurring that will claim >100,000 lives. You have two options:
Save 400 lives, with certainty.
Save 500 lives, 90% probability; save no lives, 10% probability.
I think that rephrasing improves it.
For all we know, billions of lives could be lost, with certainty; the question is how many we can save.
Or, for all we know, there are only 400 lives to be saved in the first instance. Saving 400 out of 400 is different than saving 400 out of 7 billion. The context of the proposition makes a difference, and it’s always best to be clear and unambiguous about the parameters which will necessarily guide one’s decision as to which choice is the best.
Huh.
Can you clarify exactly why it matters?
That is… I recognize that on a superficial level it feels like it matters, so if you’re making a point about how to manipulate human psychology, then I understand that.
OTOH, if you’re making an ethical point about the value of life, I don’t quite understand why the value of those 400 lives is dependent on how many people there are in… well, in what? The world? The galaxy? The observable universe? The unobservable universe? Other?
I’m making a point about human psychology. The value of a life obviously does not change.
Although, I suppose theoretically, if the concern is not over individual lives, but over the survival of the species as a whole, and there are only 500 people to be saved, then picking the 400 option would make sense.
Well, if there are only 400 people in the universe, option 1 means you’re saving them all and nobody needs die.
But that’s a rather silly interpretation. That the option 2 exists obviously means there exist at least 500 people in the universe.
I agree with all of this.
To clarify, that’s how many people in “The world? The galaxy? The observable universe? The unobservable universe? Other?” are going to die. You can save a maximum of 500 in this manner.
Um.
OK… I still seem to be missing the point.
So I have a choice between A. “Save 400 lives, allow (N-400) people to die, with certainty.” and
B. “Save 500 lives (allow N-500 people to die), 90% probability; save no lives (allow N people to die), 10% probability.”
Are you suggesting that my choice between A and B ought to depend on N?
If so, why?
It doesn’t depend on N if N is consistent between options A and B, but it would if they were different. It would make for an odd hypothetical scenario, but I was just saying that it’s not made completely explicit.
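To illustrate why (a minimal sketch, assuming plain expected-lives accounting, which the extinction-risk comment earlier shows is not the only possible utility function), the gap between the two options is a constant, whatever the shared N is:

def expected_deaths_A(n):
    return n - 400                    # 400 saved with certainty

def expected_deaths_B(n):
    return 0.9 * (n - 500) + 0.1 * n  # = n - 450 on average

for n in (500, 10_000, 7_000_000_000):
    # Option A always costs ~50 more expected deaths than B, regardless of n.
    print(n, expected_deaths_A(n) - expected_deaths_B(n))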
If there were only 400, where do the extra 100 come from in option 2?
That said, if this genuinely confuses you there may well be others who are having similar problems and this should be noted in the example.
I have a low prior for this statement, but I don’t have any data. I wonder why Eliezer thinks this is the case.
Here I have a question that is slightly unrelated: I’m looking for a good cognitive science science fair project, and I’m having trouble thinking of one that would not be completely impractical for a high-schooler to do, won’t take more than a few months, and would be interesting enough to hold people’s attention for at least a few minutes before they head off to the physics and medical research projects. No one ever does decent cognitive science projects, and I really want to show them that this branch of science can be just as rigorous and awesome as the other ones. Does anyone have any ideas?
I want to read the X saga but I can’t seem to find it. Can anyone point my way?
I’m fairly sure he was referring to X/1999 by Clamp.
I’ve been coming back to this post for 7 years or so, and the whole time it’s been obvious that I don’t have something to protect, and haven’t found one, and haven’t yet found a way to find something to protect. It seems pretty cool though—and accurate that people who really care about things are able to go to great lengths to improve the way they think about the thing and their ability to solve it.
I can say that once I realized I cared about wanting to care about something, that helped me quite a bit and I started improving my life.
Very interesting. I can’t help feeling that “trying to be a better rationalist” is somehow a paradoxical aim.
Roughly speaking I would say that we have preferences, and there is no rational way of picking preferences. If you prefer pizza to ice cream, or pleasure to pain, or living to dying, then that is that. Rationality is a mechanism for effectively pursuing your preferences: ordering pizza, not putting your hand in a fire, etc. You can’t pick rational preferences (goals); you can pick a rational route towards those goals.
If you adopt “I want to be more rational” as a preference/goal in-itself it feels like the snake is eating its own tail.
Maybe “meta goals” like this do arise elsewhere, eg. “I don’t currently have any interest in being strong/rich/powerful/skilled for its own sake, and nor are these things worth pursuing based on my current preferences (which are more efficiently achieved else-ways). However, these are things that might be generically useful for achieving preferences I may or may not have in the future, so I should acquire them as tools for later”.
But if we take rationality to mean “taking the best actions with the available information to meet your goals”, then, at least by this definition, pursuing the meta-goals appears to be definitionally irrational. This extends to the meta-goal of “being a better rationalist”.
I savor the succulent choleric chaos of declaring that I value mere phlegm above yellow bile. That is almost a contradiction, but not quite; and the resulting blend has a choleric quality as well: a delicious humor.
Lessons from experimenting prove to be more valuable than from Authority? I think that Adam and Eve would beg to differ. I know, mentioning them probably disqualifies me as fertile ground for rationalist seeding, huh? Oh well, can’t win them all.
But anyway, thanks for the well done Harry Potter fanfic. Truly, I am going to reread it several times, I’m sure.
[Mod note: I edited out your email from the comment, to save you from getting spam email and similar. If you really want it there, feel free to add it back! :) ]