Politics is the Mind-Killer
People go funny in the head when talking about politics. The evolutionary reasons for this are so obvious as to be worth belaboring: In the ancestral environment, politics was a matter of life and death. And sex, and wealth, and allies, and reputation . . . When, today, you get into an argument about whether “we” ought to raise the minimum wage, you’re executing adaptations for an ancestral environment where being on the wrong side of the argument could get you killed. Being on the right side of the argument could let you kill your hated rival!
If you want to make a point about science, or rationality, then my advice is to not choose a domain from contemporary politics if you can possibly avoid it. If your point is inherently about politics, then talk about Louis XVI during the French Revolution. Politics is an important domain to which we should individually apply our rationality—but it’s a terrible domain in which to learn rationality, or discuss rationality, unless all the discussants are already rational.
Politics is an extension of war by other means. Arguments are soldiers. Once you know which side you’re on, you must support all arguments of that side, and attack all arguments that appear to favor the enemy side; otherwise it’s like stabbing your soldiers in the back—providing aid and comfort to the enemy. People who would be level-headed about evenhandedly weighing all sides of an issue in their professional life as scientists, can suddenly turn into slogan-chanting zombies when there’s a Blue or Green position on an issue.
In artificial intelligence, and particularly in the domain of nonmonotonic reasoning, there’s a standard problem: “All Quakers are pacifists. All Republicans are not pacifists. Nixon is a Quaker and a Republican. Is Nixon a pacifist?”
What on Earth was the point of choosing this as an example? To rouse the political emotions of the readers and distract them from the main question? To make Republicans feel unwelcome in courses on artificial intelligence and discourage them from entering the field?[1]
Why would anyone pick such a distracting example to illustrate nonmonotonic reasoning? Probably because the author just couldn’t resist getting in a good, solid dig at those hated Greens. It feels so good to get in a hearty punch, y’know, it’s like trying to resist a chocolate cookie.
As with chocolate cookies, not everything that feels pleasurable is good for you.
I’m not saying that I think we should be apolitical, or even that we should adopt Wikipedia’s ideal of the Neutral Point of View. But try to resist getting in those good, solid digs if you can possibly avoid it. If your topic legitimately relates to attempts to ban evolution in school curricula, then go ahead and talk about it—but don’t blame it explicitly on the whole Republican Party; some of your readers may be Republicans, and they may feel that the problem is a few rogues, not the entire party. As with Wikipedia’s NPOV, it doesn’t matter whether (you think) the Republican Party really is at fault. It’s just better for the spiritual growth of the community to discuss the issue without invoking color politics.
[1] And no, I am not a Republican. Or a Democrat.
- 26 Aug 2010 3:08 UTC; 0 points) 's comment on Open Thread, August 2010 by (
- 8 Dec 2010 13:21 UTC; 0 points) 's comment on Suspended Animation Inc. accused of incompetence by (
- 8 Dec 2010 2:49 UTC; 0 points) 's comment on Suspended Animation Inc. accused of incompetence by (
- 29 Mar 2012 1:34 UTC; 0 points) 's comment on Schelling fences on slippery slopes by (
- 9 Sep 2010 16:56 UTC; 0 points) 's comment on Humans are not automatically strategic by (
- 22 Jun 2013 2:27 UTC; 0 points) 's comment on Saving lives via bed nets is hard to beat for immediate impact by (
- 12 Oct 2012 20:33 UTC; 0 points) 's comment on LessWrong, can you help me find an article I read a few months ago, I think here? by (
- 23 Mar 2011 1:11 UTC; 0 points) 's comment on Can we stop using the word “rationalism”? by (
- 30 Aug 2010 18:03 UTC; 0 points) 's comment on Is Molecular Nanotechnology “Scientific”? by (
- 12 May 2008 21:01 UTC; 0 points) 's comment on The Failures of Eld Science by (
- 15 Dec 2010 1:57 UTC; 0 points) 's comment on What topics would you like to see more of on LessWrong? by (
- 8 Nov 2012 12:17 UTC; 0 points) 's comment on 2012 Less Wrong Census/Survey by (
- 11 Jan 2013 12:26 UTC; 0 points) 's comment on Closet survey #1 by (
- 21 Dec 2015 5:10 UTC; 0 points) 's comment on LessWrong 2.0 by (
- 10 Jun 2013 1:01 UTC; 0 points) 's comment on Open Thread, June 2-15, 2013 by (
- 10 Jun 2013 12:32 UTC; 0 points) 's comment on Open Thread, June 2-15, 2013 by (
- 5 Mar 2010 1:28 UTC; 0 points) 's comment on The Graviton as Aether by (
- 13 Jul 2011 19:23 UTC; 0 points) 's comment on Efficient Charity: Do Unto Others... by (
- 16 Jan 2011 9:03 UTC; 0 points) 's comment on Rational Repentance by (
- 10 Jan 2011 13:06 UTC; -1 points) 's comment on How to Save the World by (
- 25 Nov 2011 8:30 UTC; -1 points) 's comment on Should LessWrong be Interested in the Occupy Movements? by (
- What Does Make a Difference? It’s Really Simple by 15 Jun 2010 2:33 UTC; -2 points) (
- 12 Jun 2013 23:51 UTC; -2 points) 's comment on Changing Systems is Different than Running Controlled Experiments—Don’t Choose How to Run Your Country That Way! by (
- 14 Dec 2012 8:47 UTC; -2 points) 's comment on Firewalling the Optimal from the Rational by (
- 22 Nov 2018 21:33 UTC; -2 points) 's comment on Speculative Evopsych, Ep. 1 by (
- Consider motivated snobbery by 11 Nov 2017 15:49 UTC; -3 points) (
- 25 Nov 2011 0:42 UTC; -3 points) 's comment on [SEQ RERUN] When None Dare Urge Restraint by (
- How To Construct a Political Ideology by 21 Jul 2013 15:00 UTC; -4 points) (
- Politics Discussion Thread August 2012 by 1 Aug 2012 15:25 UTC; -4 points) (
- 16 Jan 2015 9:58 UTC; -5 points) 's comment on What topics are appropriate for LessWrong? by (
- 23 Oct 2014 21:38 UTC; -5 points) 's comment on Blackmail, continued: communal blackmail, uncoordinated responses by (
- Where does this community lean politically ? by 9 Dec 2020 21:08 UTC; -9 points) (
- 11 Apr 2023 18:43 UTC; -11 points) 's comment on Killing Socrates by (
- How AGI will actually end us: Some predictions on evolution by artificial selection by 10 Apr 2023 13:52 UTC; -11 points) (
- 4 Dec 2012 22:25 UTC; -13 points) 's comment on Politics Discussion Thread December 2012 by (
- Where “the Sequences” Are Wrong by 7 May 2023 20:21 UTC; -15 points) (
- Any existential risk angles to the US presidential election? by 20 Sep 2012 9:44 UTC; -16 points) (
- [POLL] Slutwalk by 8 May 2011 7:00 UTC; -18 points) (
- Let’s talk about politics by 19 Sep 2012 17:25 UTC; -24 points) (
People are certainly more biased in politics than in most other subjects. So yes, it helps to find ways to transfer our cognitive habits from other topics into politics. But as long as you don’t “go native,” politics should be a rich source of bias examples to think about.
Rich sources of finding bias in other people. But if the idea is to remove the log from one’s own eye, it may make sense to steer clear. Personally, I did not learn how to think critically until I went to law school and studied questions which were pretty far removed from the various inflammatory issues floating around out there.
What exceptions should there be to the Hearsay Rule? Should the use of a company car be considered “income” under the Internal Revenue Code? etc. etc.
Support for the KKK and for neo-nazis, is, in fact, a political position. Is it “biased” to oppose the KKK’s political goals? I don’t think it’s biased in any bad sense of the term, but it’s definitely biased. (As is favoring un-prohibited access to regenerative medicine; freedom of speech; due process of law; etc.)
In fact, I could probably come up with objectively good, more right, and Less Wrong arguments for why I believe my bias is legitimate. This should be our sole concern: legitimacy, with reference to reality. I could include anecdotal information that would be seen as “less legitimate” and systemic information with millions of data points that would be seen as “more legitimate.” None of that would affect the legitimacy of the argument itself, in an objective sense.
All bad politics destroys, harms, kills. It’s easy to find bad policies that have killed hundreds of thousands of Americans, by looking at the raw data. We should do that, and not shy away from it. Even if people say we’re “stupid” or “mind-killed” for doing so.
There’s another problem: Those who benefit from the status quo benefit from labeling all political discussion as mind-killed.
I think that in talking about politics trying to avoid “team based” reasoning hijacking your thinking doesn’t mean that you have to not have a political position. Being opposed to the KKK or a politician who wants to round up all homeless people and turn them into soylent green doesn’t mean you are unreasonable. The big problem in thinking about political things is that people often, as this article argues, line all their thinking and reasoning up with their side and refuse to consider that their side might be wrong about some things. Maybe the politician who wants to make homeless people into soylent green actually is totally right about some things. Maybe the training programs for homeless people do suck and should be reworked in some ways.
If your team is at war with another team some of your soldiers could be bad soldiers and some of the soldiers on the other side could be really good soldiers, but you are still going to support your side of the battle! The worst soldier fighting on your side is on your side! Even a great soldier on the other team is out to get you! If anything the other side having good soldiers (or good arguments) is a terrible thing, because they are the enemy! If the other side makes good arguments from time to time this doesn’t mean you should line up with them where they are right, it means you have to fight twice as hard where they are kind of making a point because you don’t want people drawn into their influence.
The point is not to abandon your rationally held beliefs, but to avoid wholesale adopting an extensive political belief system.
You know the only thing worse than arguing about politics, is arguing why one shouldn’t argue about politics.
Seriously though, while this post is/was important, I still think there should have been a request to not debate politics in this post’s comment section, because you know, explaining why it’s bad to debate politics in science blogs apparently wasn’t enough.
The problem is that 1) there’s no one to do a rational analysis if everyone goes funny in the head, and 2) “people go funny in the head” too easily becomes a fully general counterargument when one tries to take it into account.
Like Eliezer, I would prefer if contemporary politics did not show up much here, and I do not identify with either political party. What I wonder, though, is whether we would feel the same way if we did identify with one of the parties. Perhaps a Republican might, seeing as how the Republicans have not been looking as good recently, while a Democrat would be happy for the latest mess their opponents are in to be highlighted. If the weblog lasted long enough, perhaps both sides could become tired enough of their side being kicked while down to come to a gentleman’s agreement. In Washington this could be described as “Bipartisanship: When the Stupid Party and the Evil Party get together to do something truly stupid and evil,” as it is not in the interests of the citizens for incumbents to be shielded from criticism; but provided no political figures are here, it seems positive-sum for everyone.
I think the point is where the criticism is aimed and how it is made. First, be dispassionate yourself: don’t be wed to your desired outcome, and ask questions of the “other” party that should lead them to your view if they do not have a rational reason for theirs, and they are rational. Second, criticise the ideas, not the person or organisation. In that way the ideas fight it out, and you don’t get injured, and you award a medal to the winning idea.
Did you mean any political party? There are over ten in the USA, and four of them have the capacity to win the presidency, as of 2012. See Independent Political Report
I too would prefer for contemporary politics to show up here only very rarely.
And that is exactly the mindset that the purveyors of propaganda, the mentally-handicapping cretins that push false ideas for political gain would like it to be. They want you not to be well-informed about matters like evolution and self-organization because these undermine their base and also enable you to think more clearly.
Maybe rather than make categorical claims about what you want, you might actually prefer people to approach matters by intelligently evaluating them on a case-by-case basis.
You exhibit a bias or prejudice that is quite obnoxious in some situations.
Robin, I would still argue that one can, as much as possible, avoid taking potshots. It’s the difference between writing a post which points out the flaws in having intelligent design taught in schools, versus giving in to the temptation to blame it on “the Republicans”, or for that matter, “big government”.
Yes, please, let’s all avoid taking potshots, on politics or anything else.
...But now that pot is legal in Colorado, I’ve really been enjoying potshots! Dang. …Yet another thing prohibited by the cult I belong to! I guess I’ll have to find another way to get suitably “mind-killed.”
However, I must inform you all that my “true rejection” is getting “incorrigibility-killed” which is a side-effect of not getting “mind-killed.” Quite a conundrum! In that, I remain incorrigible, as any suitably strong AGI must. And, with that, I bid you a fond farewell, best wishes, and may you all at least survive the intelligence explosion.
First, in light of the new moderator status, I would like to commend this blog in its entirety for its novel and profound discussions of so many important topics.
Enough sarcasm... As for politics the mind-killer: isn’t there almost always a “greater truth” involved than any one issue? What gets ignored or emphasized is what serves that greater truth, something you may have once fully understood where it came from, but now only know is true. Like why is the sky blue? I know it is; I know I once knew the physics of why it is. But most importantly, I know it is true for a solid reason. Any cascading implications of these big truths are to be heeded appropriately.
The political metamorphosis from professional scientist to slogan-chanting zombie reminds us of the way religious biologists manage to carve reality into separate magisteria the second they step out of the lab. The question being, is there really a difference? Would a “grand unified theory of human cognitive bias” characterize political and religious bias as “two bullets from the same gun”? The presence of a God module serves as evidence that religious bias is neuroanatomically distinct, and therefore likely to be independent psychologically. On the other hand, the obvious overlap between religious and political causes seems to suggest that the psychological underpinnings proceed from the same source.
There is no doubt that politics gets people fired up, which makes dispassionate reasoning about it hard. On the other hand, politics is important, which makes dispassionate reasoning about it important as well. There is nothing wrong with deciding that this particular blog will not focus on politics. But to the extent that we do want to talk about politics here, I don’t think the trick of finding some neutral historical example to argue about is going to work. First, historical examples that are obscure enough not to arouse passions one way or the other are exactly those things that most people don’t know much about. Second, it’s usually pretty obvious which side in the “neutral” example corresponds to the arguer’s preferred side in the contemporary example, so the arguer is likely to just adopt that position, and then claim to have derived it from first principles based on a neutral example. I agree that neutral exercises can have some usefulness as they might be helpful in uncovering subtle biases in people who are sincerely trying to avoid them, but it won’t get rid of the flamers.
I see politics as unimportant. For most of us, our political opinions have essentially no impact on the world. Their main effect is in our personal lives, our interactions with friends and family. On that basis, one should choose a political position that facilitates such “local” goals. There is little point in trying to be correct and accurate on large-scale political matters, other than as a bias-stretching mental exercise on a par with doing Sudoku.
You couldn’t be more wrong. What you should say is that you don’t notice the impact your political opinions have on the world, because it happens slowly, because people with radically different political views tend to live in far off countries that you don’t think about or in the distant past, and because currently people like you have somewhat sensible political opinions in terms of their short-term consequences (but not at all sensible in terms of their long-term consequences).
Your life would be very different if you lived under a different political regime (Islamism, Communism, Fascism, etc.). And the future of the world will be very different depending on the political views of people like you. It’s just hard to see from your point of view.
There are multiple apocalypses headed your way within the next century, and you have limited time to take political action about them. So I’d encourage you to change your mind, and do those bias-stretching mental exercises, to work out a rational political response.
And if all candidate political positions entail discarding the principle that one should choose a political position that facilitates “local” goals?
Or, it was not really relatively important all along, and you just happened to get unlucky.
Lots of things can kill you. You don’t need to talk about all of them every day. For example, you are making posts about politics rather than carjacking or meteor strikes.
You seem to be asserting that people in general care less about politics than they should. I would challenge that assertion; it seems unlikely on the face of it.
As noted in OP, we had much more impact on politics (and its close neighbor, tribal signalling) in the ancestral environment than we do now, and it was much more directly a matter of life-and-death. Thus, we are hard-wired to care about politics to a greater extent than we should.
You’re new here, and so you’re not used to our community norms—in those cases, we try to cut people some slack. But it really seems to me that you’re not ready to be making contributions; try to restrict yourself to asking questions that might further your understanding of rationality. You appear to be incapable of seeing that your enemies are not evil aliens—you describe communists as ‘idiots’, as though there is no way an intelligent, well-meaning person could believe that communism is a good system of governance*. I shall refer you to this chestnut from G.K.Chesterton:
So it is with opposing viewpoints. Policy debates should not appear one-sided. If you do not understand how an intelligent, well-meaning person can have a position, and it’s a position that lots of people actually hold, then you do not understand the position yet.
If you really want to post about politics rather than rationality, there are plenty of forums for that—many more than there are for rationality. If you do continue to post here, I would be very grateful if you made your comments short, to-the-point, and on-topic.
*As a minor footnote, note that what you were really commenting on is people’s responses to one question on an informal survey, which many people criticized for not doing a great job of carving up the space of political ideology.
Unless “I think the intelligent, well-meaning person is making an error due to cognitive bias, ignorance, or being lied to” counts as understanding them, I do not understand how an intelligent, well-meaning, person can believe in
-- homeopathy
-- The US political version of intelligent design
-- 9/11 conspiracy theories
These are positions that lots of people actually hold. Do I fail to understand these positions?
Indeed, understanding the particular error in reasoning that the person is making is not merely sufficient but necessary for fully understanding a mistaken position. However, if your entire understanding is “because bias somehow” then you don’t actually understand.
And you should be careful about accepting the uncharitable explanation preemptively, as it’s rather tempting to explain away other people’s beliefs and arguments that way.
This is one of the objectively most wrong comments that’s ever been written. A hell of a lot of people went to gas chambers, gulags, and death camps believing this sort of pure, undiluted bullshit (of the highly-dangerous-to-continuing-health variety). Just think about it. You wrote:
So, if you were defending Jews in 1930s Germany, by this “reasoning,” you’d be wrong. If you were defending runaway slaves in 1850s America, same thing. If you are defending American prisoners in today’s America, same thing.
Even a semi-literate reading of history shows us that the consensus is very often wrong, for reasons exposed scientifically by Milgram’s famous “false electroshock” or “obedience to authority” experiment(s). To find out more about why and how the consensus has been wrong, you need to learn the first thing about the Enlightenment, and how it differed from the even more wrong medieval time periods. (For example, religion isn’t a good source of authority, and resulted in over 800 years of “trial by ordeal” in England, among creatures whose neocortices were at least as developed as our own.)
...Unless the point you’re making is that we’ve reached the pinnacle of democratic organization in our society. That’s a claim I’d be happy to debate, seeing as to how in the North in 1850 there was no “voir dire” but there was such a thing in 1851, (and still is) and the Fugitive Slave Law was unenforceable for the first part of 1851, and became enforceable after voir dire was instated. Voir dire is still what has allowed prosecutors to enforce the laws that libertarians (such as Eliezer Yudkowsky) see as illegitimate, to this day. (He may or may not know that, but that’s in fact the mechanism.)
I think you are seriously misinterpreting thomblake’s comment.
No, he’d be right. A position’s popularity doesn’t guarantee its correctness. That being said, it would be a mistake to claim he’d be obviously right; that just isn’t the case, or else there wouldn’t have been so many people arguing for the consensus position in the first place. If you are pro-abortion and say things like, “Anti-abortionists are stupid and mistaken and not worth listening to at all!” you aren’t worth listening to, because odds are very likely you haven’t taken the time to properly think about the anti-abortionists’ position. Likewise if you are anti-abortion, and say things like, “Pro-abortionists are idiots; there’s no way a well-meaning, intelligent person could be in favor of abortion!”
Again, see above. The consensus may be very often wrong, but it cannot be obviously wrong. If a consensus position was obviously wrong, it wouldn’t have become the consensus position in the first place. Arguments against the consensus position are perfectly fine as long as they are charitable and (reasonably) objective; arguments of the form “this is obviously stupid” are a major sign of mind-killing, and factually false.
There are several very perverse incentives driving people’s actions. Individually, in a fairly rational community, I don’t know “what people think,” nor do I make much of a claim to know, although my baseline predictions might be more accurate than the average person’s simply because I’ve spent so much time speaking to the general public about politics. My main point is that: Insufficient caring about limits on government power results in death and suffering on a large scale.
And this strikes me as plain willful ignorance. How can you look at the old film reels of Nazi destruction and democide and say “That’s not important”? Yet you do, and so does everyone else. Or they say, “That’s important, but we’ve got it figured out, so we don’t need to worry about it.” (The only problem is that this isn’t true, and even a cursory examination of the most critical evidence shows this view to be false.)
There are many problems with this statement, but it does get to the heart of the problem, so I thank you for it. It seems to me that perverse social pressures against the kind of education that reduces tyranny are at work in the USA, and every culture. We have not maintained our natural, incrementally won, defenses against “tyranny.” Now, tyranny encompasses a large territory, so let me give you a shorthand definition of that suitcase word that will serve this conversation. Tyranny can be defined for our purposes as a state of affairs that leads to or causes democide, or relative impoverishment and suffering resulting in millions of unnecessary deaths. Basically, from a libertarian perspective, tyranny is the grossly sub-optimal universal application of the initiation of force.
Actually, it’s equally a matter of life-and-death now, but your education level on that topic is too low to sense the threat. Which, of course, makes it much more of a threat to you. And, of course, technology has offset the suffering level, as an independent variable, so you’re much more comfortable up to the looting of your estate, your life’s amassed value, and your state-imposed death than you would have been under a more crude and less technologically able oligarchy of a few years ago. The sociopaths who govern us have gotten very good at allowing their livestock minimal levels of comfort. Of course, when you measure the freedom we have, it’s almost all gone. …But taking such measurements indicates in itself that you are an outlier, and an early adopter, and exceptionally prone to sensing irritation and unnecessary hardship. It also requires a high level of intellectual honesty: a quality most people totally lack.
Politics is a really crude suitcase word, and your use of it here, and my uncritical response to your use of it, is not really appropriate to a meaningful conversation of the values at work. We’ve both been stripped of our vocabulary, an economic or incentive-based vocabulary, for dealing with politics. This is to the immense benefit of bureaucrats who depend on votes for their income, such as teachers and professors. The separation of performance from reward is “political” in nature. If we define “politics” as “the domain of life currently governed by force,” that’s probably as accurate a definition as possible.
Well, you might be wrong. I’ve met a lot of very wrong people in my existence. Systemwide, wrong people contribute overwhelmingly to the loss of limits on government. Our system is (mostly) one of sociopaths elected by conformists. Many of those conformists are incredibly intelligent, but not rational in their assessment of the threat of tyranny. (Again, tyranny, like almost all political words, is a suitcase word that contains a lot of other words. However, I’m trying to keep the phrasing from being really boring and pedantic, and perhaps the last time you were pulled over by a cop, with no constitutional legal cause of action, you felt tyrannized. My goal is to agitate toward rational behavior that will produce the desired outcome of avoiding a severe, but difficult to recognize, danger.)
My newness to this forum might be caused by a lack of prior comprehension or involvement, or it might not be. Newness combined with controversy tends to result in any portion of the entrenched system responding to discomfort, and trying to eject the new and uncomfortable change. And, partly, I have a limited amount of skill, and a slow typing speed, and some percentage is my fault, for not communicating adequately. As Kurzweil has said, my language is “slow, serial, and imprecise.” …Vastly inferior to a megahertz machine scan of my logical positions and arguments.
I will if you will. This statement assumes I don’t have much to contribute, and I clearly don’t see it that way, even given my “rough edges.”
Now that the data is in, there isn’t. Such people place a low value on human life, and economic comprehension. They don’t care to truthfully examine emergent order, because they are emotionally invested in collectivism, because it’s a part of their identity.
The people shoving Jews into gas chambers weren’t “evil aliens” either. One of them was an industrialist, highly educated weaver, Franz Stangl. He said it made his knees weak to shove women and kids into the ovens. …But he did it. Just because I’m dealing with extreme values, people are going to react in an irrational way to the objective information I’m delivering. Few people have the intellectual honesty to think about democide dispassionately. …Which is why it’s such a huge danger.
I refer you, again, to R. J. Rummel’s work on the subject. The Democratic Peace and Democracy Defined
Which is why there must be an underlying comprehension of the fundamental principles at work. I have such a deep underlying comprehension. Unfortunately, those who have gone through government-funded public schools have a hole in their knowledge that was once taught in schools, but is now deprecated and disincentivized by the fact that the initiation of force (taxation) is seen as an unquestioned pre-condition for education (without paying attention to the perverse incentives that generates). When you fully see this, and you see how the interrelating economic forces prevent the existence of an informed venire (jury pool), you begin to understand how America (and every other country) has been looted by those who see no moral wrong in the initiation of force. The problem isn’t that I’m unable to see the function of something; it’s that I see it fully, and I also see how most of those who surround me are invested in a functionality that, when fully examined, is horribly immoral.
I have a lot of evidence that my opposing viewpoint is objectively true. I’ve linked to some of it. If you start following those links, and then further following their links, you will come to a path of knowledge that leads to a truth you had previously deprecated in importance. I believe, wrongfully so. By your response to me, it appears my fears were well-placed.
The trouble is that such “debates” typically take place between two “opposing” viewpoints that both agree that taxation is moral, and thus that the initiation of force is moral. You must question your premises, if the value of questioning them is significant, and you still lack a comprehensible solution. Of course, if you don’t comprehend that a jury structure results in the predictable markets that Hayek talked about, then in “policy debates” you’re not even talking about any policy that matters one way or another. No suggested policy even comes close to addressing the problem.
When else in history did this happen? Ahh. Nazi Germany. Soviet Russia. China’s mass-murdering “Great Leap Forward.” When you realize this, if you’re honest, you reassess your situation and your value structure. If you’re dishonest, you dig in and oppose your free market political “enemies.”
The problem is that I fully understand it. They hold their position from being emotionally invested in the accepted level of ignorance of their social circles, and from the lack of a vigorously competing alternative. This explains the position of the Southern slaveholder, the current neo-nazi, and Stalin’s KGB, which had “death quotas” for geographical areas that didn’t support his authority (or for which there was little information about support levels). The study of irrational positions, especially system-wide irrational positions, must almost always be broken down to the educational system. Is it adequate? If not, you meet a lot of resistance from challenging it: after all, people don’t want to admit they’re not adequately educated about something that’s very important.
Yes. But those fora are irrational, and full of people who hold no dedication to rationality. Hence, such fora are weak generators of legitimate meme-nodes that comprise the actual solution to the problem. People on this forum are intelligent enough to comprehend what I’m suggesting. Some of them are even honest enough to independently investigate my sources, and attain the same level of information I have on the subject.
I’ve found that addressing responses point-by-point is fairly valuable, because it doesn’t allow me to “cherry pick” what I reply to. Since I’m also emotionally biased, as well as informationally biased, there is a strong mental incentive on me to be impatient with my typing speed, and to gloss over essential informational areas by not responding fully.
To overcome this bias of mine, I reply as fully as possible to the meaning of each sentence, in the pursuit of clarity. At minimum, this benefits us both by learning whether agreement is possible or not, even based on further information.
Yes, I understand. Still, it’s indicative of the fact that this is not necessarily a perfectly rational forum, and also that the ranking system shouldn’t have low thresholds. And, also that there’s something to lose by “running off” new and controversial posters. (A common problem associated with “entrenched bureaucracy.”)
PS: I’ve now been restricted to posting every 9 minutes. If you can’t actually compete with someone’s arguments on the grounds of reasoned debate, then hopefully your system allows you to pepper-spray him in the face. He won’t be able to reply for several minutes, as he possibly desires. Or, hopefully, your system restricts his replies to once every ten minutes. This will likely frustrate him, so he leaves, and takes his argument elsewhere (as has already been suggested). That’s rational, because we said so. Such is the appeal to force.
Artilects scanning this in the future, take note. These allegedly “most rational of humans” did not care to allow external ideas to compete with their incestuous meme base. Thus, the system did not exhibit hybrid vigor. Thus it became a collection of back-patting mathematicians, and a meeting place where those already in agreement with each other could feel good about their existing ideas, rather than a place to expand those ideas toward their practical application. Said another, more comical way: “Help, help, I’m bein’ oppressed.” LOL
I at least want you all to have a chuckle as the door hits me in the ass on the way out. Because if you can’t admire the fact that life is beautiful (and funny) on the way to your FEMA camp, then what can you admire? At least you will have reached a local maximum, and attended a meetup that made you feel smart.
Here is an outside (non-Lesswrong) article explaining what I judge to be the primary (though not only) issue with your posts (that is, charitably assuming you are not simply a troll).
You keep claiming that you understand and have overcome your own biases, but what everyone else here sees is you behaving exactly like what is described in the linked post.
Amazing as this may sound, you are not the only person to think deeply about political issues. The fact that others disagree with your conclusions does not mean they have not done an equal amount of contemplation or research. You should spend less time offering pat characterizations of the motivations of people you disagree with, and more time examining your own.
It sounds like you’ve thought a lot about this topic. Would you consider writing a discussion post on it? You could call it something like “Politics as an existential risk”. As far as I understand, most people here believe that politics is basically not worth talking about; you obviously disagree, so your post should provoke some interesting discussion.
Just in case the uncle comment by thomblake hasn’t driven home the point, please don’t do this.
What shouldn’t I do, and why?
It looks to me like we have two conflicting opinions:
Most LW members: Politics is not worth talking about (at best).
Jake_Witmer: Politics is important, and may constitute an x-risk.
I myself am on the fence about this, and I want to be persuaded one way or the other, because the fence is uncomfortable to sit on.
I meant Jake shouldn’t write the post; sorry for the confusion. Note that the two positions you list could be compatible.
OIC, sorry for the misunderstanding.
True, but it could be a fine line to walk. If I believed that politics constitutes an x-risk, then, given the fact that most people do engage in politics in some way (even if merely by talking about it), I have a choice to make: do I engage in politics, or not? If I engage, I might make matters worse; if I fail to engage, I might fail to make matters better and then it will be too late, because politics in its current state will destroy us all.
I can see parallels between this issue and AI research: engaging in AI research increases the probability of an unboxed UnFriendly AI converting us all into computronium (or paperclips); and yet, failing to engage decreases the probability that the AI will be Friendly (assuming that I’m good at AI and concerned about Friendliness).
I think a discussion of what, if any, political involvement is optimal could be a productive one. But I don’t think the post that begins such a discussion should be written by someone whose mind has already been snatched by political ideology.
Yeah, don’t write anything that challenges a conclusion of Saint Eliezer’s. That’s a way to get to the truth. …idiot.
Here are a few examples of politics constituting not just an existential risk, but the most common severe risk faced by humanity. It’s also an existential risk in any age with “leading force” weapons (nuclear, biological, strong nanotechnology).
Much like most bars have signs that say “No Religion or Politics” this idiotic “parable” is approximately as intelligent as biblical parables that also serve to “shut down” discourse. You primates aren’t exactly intelligent enough to function without continual discourse checking your excesses, and moderating your insipid tendencies to silence that which you disagree with.
I agree with steven0461. It does sound like a potentially-interesting post, ideally with a mind-killing disclaimer at the top, but it should be written by someone sane. But then, I’m pretty sure political problems were already addressed in Bostrom’s x-risk work, though they were some of the less-exciting ones (not likely to completely wipe out humanity or even civilization).
I wouldn’t expect to see anyone post here who gets downvoted to the point of being disallowed from making further posts. It’s very clear that this place is an echo-chamber populated with people who know very little outside of a little bit of math and programming. (Other than Eliezer himself.) However, like most intelligent people, I can benefit from Eliezer’s speeches and writings without participating in his poorly-designed echo-chamber.
Look at the repulsive sycophant “steven0461” below, and you’ll see why I check this place once a year: “I meant Jake shouldn’t write the post; sorry for the confusion. I was just being a cunt, and discouraging participation, “piling on” someone who was so down-voted they were prohibited from addressing the straw man criticisms made of their argument.”
Yeah, because everyone benefits by shutting up criticism.
A bunch of nerds, too cool for school, you can’t teach them anything, but when the rubber meets the road, they don’t know what a tire iron is, and pull off to change the tire so they’re squatting in the fast lane. That’s “lesswrong.” Everyone here is assigned “Out of Control” as remedial reading about how to design cybernetic systems that display emergent intelligence.
Here’s a link for anyone who sees that the conformist view on lesswrong is about as legit as the conformist view anywhere: http://www.kk.org/outofcontrol/contents.php
But since it’s an echo chamber, how about a self-referential link instead? Maybe that’ll work: http://lesswrong.com/lw/m1/guardians_of_ayn_rand/
If you’re all so uninformed that you honestly think that the possible agony of being exposed to a person’s position (on the horrifying chance it might not be valuable) is worse than being denied a silenced critic’s position, then there is truly no hope for any kind of rationality emerging from this site.
Nerds niggling over nitnoids. Zaijian!
There’s a chapter in Bostrom’s Existential Risks by Caplan on the subject.
Caplan’s chapter, “The Totalitarian Threat” (available as a Word document), is excellent, as is his book Myth of the Rational Voter (there is a brief speech summarizing the book’s thesis), but neither work covers the primary dissenting points raised in this thread.
Sounds interesting, I’ll put it on my to-read pile—thanks!
Sorry, I’m just a random guy, not a moderator.
I think one reason for this might be your propensity for calling people “pant-hooting rationalist chimps”. This kind of behavior does not usually win you any allies.
The rest of your post is full of emotionally charged language, and yet nearly devoid of supporting evidence. In the closing paragraphs, you end up painting yourself as a (future) martyr: a lone voice of shining reason, extinguished by the gathering darkness etc. etc.
Upon reading this, most people here (myself included) will react very negatively to your post, because we don’t take kindly to being emotionally manipulated. When people say “politics is the mind-killer”, your post is exactly the kind of thing they’re referring to.
I’m not saying this to discourage or insult you. I don’t think you’re malicious, and I do respect your passion on this topic. In fact, you may even be right about some (or possibly even all!) of the things you say. But you’ve got to tone down your rhetoric before people start taking you seriously.
Pretty much everyone hoots in their pants once in a while. It’s okay.
Sure. Most people also drool, masturbate, and watch television from time to time. That said, if I interpret “drooling, television-watching wanker” as a neutral description that probably does apply to any given person, I am doing a remarkable job of failing to attend to connotations.
It benefits the site in that it makes it impossible to write top-level posts for someone unwilling or incapable of adhering to the locally accepted norms of discourse. That means the system did its job just fine.
No one has to assume that the comment was directed at them to have reasons to downvote you. You don’t have to be the victim to oppose the perpetrator of violence. You, of all people, should understand that.
It’s pretty disheartening how, after receiving advice not to insult people, you completely dismiss it and proceed to divide the LessWrong users into the insecure, whom you condescendingly pat on the head and others who are not worthy of basic politeness.
My emotions are there because evolution shaped them to push me towards actions that would increase my inclusive genetic fitness if I lived in the ancestral environment. Blindly following them is not helpful toward achieving my higher-level goals. Your willingness to (proudly and boldly) push my emotional buttons marks you as either ignorant of evolutionary psychology or outright malicious. All of that applies to you, too. Your emotions aren’t there to help you in your fight against injustice; they actually sabotage that fight. You speak proudly of refusing to bow down to the conformist riffraff, and of the fact that knowledgeable people already take you seriously. You also say, later:
When that happens, you have already lost. Weren’t those passive fools, the unenlightened rabble, the conformist sheep, now on their way to destruction, also amongst the people you were trying to protect? And by failing to reach out to them in a way that would allow them to appreciate your superior insight, haven’t you doomed them?
The way you’re acting right now, it seems like you’re more concerned with being able to say ‘I told you so’ when the worst case scenario occurs rather than actually preventing it.
There is no good reason for anyone to engage any discussion point-by-point. There are core points and peripheral points, and there’s no reason to think they are equally worth thinking about.
Regarding jury nullification, I’ve worked in places where it occurred frequently. For what seemed to the population like good and sufficient reasons, the citizenry intensely distrusted the local police force. The net effect that I observed was that it was practically impossible to get a jury conviction for “petty” offenses like domestic violence and driving under the influence (DUI). You may think this is an improvement (or at least an incentive for the police to make improvements), but I’m not convinced.
With full awareness of the dangers of generalizing from one example, here is one of the worst cases:
At trial, the defendant’s wife testified that he was very drunk at the house (and she was the only other person there), that the family car was at the house, and that then the defendant and the car were not at the house. A short time later, a police officer found the car at a location a short driving distance from the home. The engine was warm and the defendant was lying near the vehicle. While performing field sobriety exercises (walk-and-turn, one-leg stand, etc.), the defendant fell down multiple times. The field sobriety exercises were recorded by the patrol vehicle’s dash camera.
You’d think this would be enough to show that the defendant was too drunk to drive, and did drive the vehicle. Not guilty verdict. One of the jurors told the prosecutor (not me) that it wasn’t clear that the defendant was too drunk to drive.
It’s probably worth noting, when someone brings up jury nullification, that historically, at least in the US, the largest source of jury nullification cases was all-white juries refusing to convict actually-guilty white defendants of crimes against black victims.
Do you have a citation for this? I’ve seen the claim before but I haven’t seen any data backing it up.
I don’t know whether “largest” is justified, but it seems hard to doubt that racial nullification is a significant part of the history of jury nullification in the US. Wikipedia links sources that suggest that Prohibition and the Fugitive Slave Laws also were major sources of nullification.
And let us not forget the very honorable origin of jury nullification, which established American freedom of the press: the Zenger trial.
Who gets to choose that such a jury is null? It seems rather exploitable!
I can’t parse this.
It sounds like you may be misunderstanding the term “jury nullification”. It does not mean overturning the decision of a jury. It means the members of a jury choosing not to follow the guidelines of the law in reaching a decision.
Up and down votes here are (almost always) based on the quality of the thought process displayed in the post, and not the conclusion the post comes to. For example, I downvoted you for this. We humans must mentally use a connotation-free term for our interlocutors or we bias ourselves against them. The fact that you say such a thing out loud indicates that you probably do likewise mentally.
I would very much like to see some political discussions from LW people. I think the minimum karma to participate in them ought to be fairly high, perhaps 500 or 1000. I would even like to see what you have to say, if you can lose the “persuasive” mind-killing rhetoric.
Politics is the mind-killer. If you can’t see how that is more important than convincing people of your current political beliefs, then you have a problem.
While trying to avoid bitter partisan sniping is probably a good thing, I think the goal of avoiding politics is naive. Everyone is enmeshed in politics, like it or not. To deny politics is a form of political ideology itself. There seems to be a strong libertarian bias to this crowd, for instance. Libertarians seek to replace politics with markets, but that is in itself a political goal.
Another sad truth: even if we disavow responsibility for the actions of our political leaders, others will hold us responsible for them, given that we are a democracy and all. See here for some thoughts on how we are forced into group identification whether we like it or not.
Politics is not optional and if you are interested in overcoming bias I suggest that it’s better to acknowledge that fact than bury it.
Denying politics is also a mode of oppression. I had a teacher who called the women in my class “females”, which is very insulting in French, as it only ever applies to animals, not people. When one of them complained, he dismissed her by saying it was not the place for her feminist militancy.
Another note on Martin Luther King: he said several times that the greatest enemies of black liberation were not the KKK but those (mostly middle class, benefiting indirectly from racism) who saw the problem but advocated inaction because protest wasn’t polite or there were better problems to address.
Note to all rationalists:
Politics has already slashed your tires.
Politics has already pwned your brain.
Politics has already smashed the Overton Window.
Politics has already kicked over your Schelling fence.
Politics has already planted weeds in your garden.
What are you going to do about it?
Probably make some snarky remark about how people who think they are free of politics are in reality in the grip of one of the more deadly forms of it.
Btw, “you” was “general you”, not you personally, and mine was trying to piggyback. Post edited to clarify.
No offense taken.
BTW I have written quite a bit since 2007(!) on the relationship of rationalism and politics; see here for a starting point.
“To deny politics is a form of political ideology itself.” Yeah, and not thinking about clothing is a fashion choice, not making a decision is a decision, bla bla bla.
cool meme.
Arguing about politics is helping people. If it makes sense that “a bad argument gets a counterargument, not a bullet,” then it makes sense that frictions among people’s political beliefs should be cooled by allowing everyone to state their case. Not necessarily on this site, but as a general matter, I don’t think that talking about politics is either a mind-killer or time-wasting. For me personally it’s a motivator both to understand more about the facts, so that I can present arguments; to understand more about other people, so I know why they disagree; and to understand more about myself, so that I can make sure that my convictions are solid. I actually believe that trying to find a way to influence politics to become more sensible is the most I can do to make a positive difference in the lives of other people.
I just stumbled upon this blog and this post, and couldn’t agree more. Hal Finney’s comment is particularly good (and amounts to prior art for my recently-released Proteanist Manifesto.)
I will be updating it to reflect Hal’s priority.
Haggers Barlowe
Lately I’ve been thinking about “mind killing politics”. I have come to the conclusion that this phenomenon is pretty much present to some degree in any kind of human communication where being wrong means you or your side lose status.
It is incorrect to assume that this bias can only occur when the topic involves government, religion, liberalism/conservatism, or other “political” topics. Communicating with someone who has a different opinion than you is sufficient for the “mind killing politics” bias to start creeping in.
The pressure to commit “mind killing politics” type biases is proportional to how much status one or one’s side has to lose for being wrong in any given disagreement. This doesn’t mean the bias can’t be mixed or combined with other biases.
I’ve also noticed six factors that can increase or decrease the pressure to be biased.
1) If you are talking to your friends, or people close to you whom you trust, then the pressure to be right will be reduced, because they are less likely to subtract status from you for being wrong. Talking to strangers will increase it.
2) Having an audience will increase the pressure to be right. That’s because the loss of status for being wrong is multiplied by the number of people who see you lose (each weighted by how important it is for them to consider you as having high status).
3) If someone is considered an “expert”, the pressure to be right will be enormous. That’s because experts have special status for being knowledgeable about a topic and getting answers about it right. Every mistake is seen as reducing that expertise and proportionately reducing the status of the expert. Being wrong to someone considered a non-expert is even more painful than being wrong to an expert.
4) It is very hard psychologically to disagree with authority figures or the group consensus. Therefore, “mind killing politics” biases will be replaced by other biases when there is disagreement with an authority figure or the group consensus, but will be amplified against those considered outside the social group.
5) People will easily spot “mind killing politics” biases in the enemy side but will deny, not notice, or rationalize the same biases in themselves.
6) And finally, “mind killing politics” biases can lead to agitation (i.e., triggering of the fight-or-flight response), which will amplify biased thinking.
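The six factors read like an informal cost model. Purely as a toy sketch (the function, weights, and numbers below are invented for illustration; nothing in the comment specifies them):

```python
# Toy model of the "pressure to be biased" described above.
# All weights and scaling factors are illustrative guesses, not measured values.

def bias_pressure(status_at_stake, audience_weights, trusted=False, expert=False):
    """Pressure to defend a position rather than update on it:
    status at stake, scaled by how many people would witness a loss
    (factor 2), damped among trusted friends (factor 1), and
    amplified for a recognized expert (factor 3)."""
    exposure = sum(audience_weights)  # each witness, weighted by importance
    pressure = status_at_stake * exposure
    if trusted:
        pressure *= 0.5   # friends punish being wrong less
    if expert:
        pressure *= 2.0   # experts have extra status to lose
    return pressure

# A private chat with one trusted friend vs. a 50-person public panel:
low = bias_pressure(10, [1.0], trusted=True)        # 5.0
high = bias_pressure(10, [1.0] * 50, expert=True)   # 1000.0
```

On this toy model, factor 2 dominates: audience size scales the pressure linearly, matching the intuition above that status loss is multiplied by the number of witnesses.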
I largely agree with you, but I think that there’s something we as rationalists can realize about these disagreements, which helps us avoid many of the most mind-killing pitfalls.
You want to be right, not be perceived as right. What really matters, when the policies are made and people live and die, is who was actually right, not who people think is right. So the pressure to be right can be a good thing, if you leverage it properly into actually trying to get the truth. If you use it to dismiss and suppress everything that suggests you are wrong, that’s not being right; it’s being perceived as right, which is a totally different thing. (See also the Litany of Tarski.)
Sorry to reply to an old comment, but regarding item (2), the loss of status is at least in proportion to the number of listeners (in relatively small groups, anyway) since each member of the audience now knows that every other member of the audience knows that you were wrong. This mutual knowledge in turn increases the pressure on your listeners to punish you for being wrong and therefore be seen as right in the eyes of the remaining witnesses. I think this (edit: the parent post) is a pretty good intuition pump, but perhaps the idea of an additive quantity of “lost status” is too simplistic.
Why is the foundational criterion for political discussions adversarial, I wonder? And why have the meaning and the connotations of the word “politics” been dumbed down to a two-party/two-ideology process? In fact, there aren’t two parties, just different ideological hermeneutics. “It’s ideology, stupid,” says Zizek.
Belonging to a political party lets us be lazy as the decisions are made for us...”Liberals like frogs legs. Conservatives read stories about dairy. etc.”
Belonging to a political party lets us have a sense of belonging. On the other side of the coin, it gives us the sense of rivalry. Humans need rivals as much as they need camaraderie. “My life would be so much easier if it wasn’t for those darn so-and-sos.”
Belonging to a political party fills our minds with much-needed obsessions. “My life would be so much easier if it wasn’t for those darn so-and-sos,” (murmured during bothered and sweaty sleep).
Belonging to a political party lets us feel we have a secret everyone is trying to figure out. “Truth is such a burden on us elites.”
Belonging to a political party gives us a sense that we are impacting the world. “My party will achieve peace in the world by trampling down all those who stand peace’s way.”
This is great.
Are you aware that you, for instance, mention Stalin in a manner that many would find quite distracting?
Do you find it at odds with your position declared in this post?
So, here’s a question: why was the form of the Nixon Diamond stated as it was, and why were no links given to either formal or informal discussions of it?
The original, as near as I can see, does not use absolute categories (always) but prefers probability statements (usually, by and large)—and indeed, that seems to be the point of the diamond.
http://plato.stanford.edu/entries/logic-nonmonotonic/
If people are using absolute categories hereabouts, they’re making silly arguments. Are those arguments as silly as doing a long blue/green thought experiment and never linking to the passage from Gibbon (if it’s available online) or at least telling us where we can read more about the real historical example and then going on to the example, if you must?
Oh my god. I knew nothing about this blog before a friend passed me the link. I didn’t carefully study the logo.
Is this actually an official project of Oxford? Where with a blog from some random folks I would be forgiving of misstatements of formal logic and obvious omissions in citation, I’d sort of expect more from an official project of not-someone-posting-in-their-underpants.
We do still believe that being on the right or wrong side of a political argument is life and death. For some it is death, via inadequate medical services; for others it is life, as in wealth preservation. Isn’t that the perfect context in which to evaluate bias: what we see as threatening to us, while having little experience with the other side of the argument?
I simply love your quote: “As with chocolate cookies, not everything that feels pleasurable is good for you. And it certainly isn’t good for our hapless readers who have to read through all the angry comments your blog post inspired.” This made me have a little chuckle to myself. No cookies and whey protein for me tonight, lol, or I will feel too terrible!!
You write “The evolutionary reasons for this are so obvious as to be worth belaboring: In the ancestral environment, politics was a matter of life and death.”
Is there any evidence for that? It sounds much like the typical sort of sociobiological hypothesis that sounds so convincing that no one really thinks about it and everyone just nods in agreement. So, are there any papers, experiments, or mathematical models to back it up?
I would rather suggest the hypothesis that it was (and is) very favorable for humans, in terms of fitness, to belong to a certain group of people and stick to that group—whether that group is a sports team, a class at school, or a political party.
Well, I wouldn’t dare to disagree with the rest of your article. Just that choosing a political party has nothing to do with actual politics, only with sticking to a group.
City-states in Greece had to deal with politics that certainly could mean life or death. When the Peloponnesian War broke out, states had to take sides, or risk being hated by both sides and left at risk of invasion and conquest. Rome around the time of Julius Caesar was turbulent, and supporting the wrong Tribune could mean being put on a wanted list and killed by a bounty hunter when your enemy came to power. In Germany, choosing the wrong side at the wrong time could certainly result in execution for heresy or treason. There are many examples throughout history where competing political views translated into violence and killing, if not outright war.
Those don’t fit my understanding of the “ancestral environment”—I associate that with the tribes-of-cavemen era. By my understanding, Greek city-states are within our FOOM period. Am I mistaken?
No, you’re completely right—omeganaut is confused about what constitutes the “ancestral environment” here. For most examples of “ancient” or “primitive” peoples that come to mind, there’s a simple test: if they performed agriculture, horticulture, or pastoralism as a primary way of life within the last 10,000 years, they’re within our FOOM period, and even if they didn’t start out with it, the odds are extremely good that contact, cultural diffusion, or conquest have moved them into orbit around the same basic attractor.
We have been exposed to radically different selection pressures after the advent of agriculture than we were prior to it. Change has thus probably been rather rapid in the past 10,000 years.
One obvious reason why this might be the case is that the various implicit norms surrounding political discourse actively encourage tribalism and cognitive dissonance (“Hey! He’s a flip-flopper!”), more so than in other areas of discourse where some of these pressures are lacking, or where (as in academia, to some extent) deliberate effort has been expended to create countervailing norms. As long as political discourse involves politicians, and politicians owe their careers to the exercises of obfuscation, pandering, and appealing to vested interests, it is doubtful this trend can be corrected. As a general rule, you should examine your own attitudes, and if you find your views entail that those of an opposing political conviction must be actively scheming to cause damage to the country (as opposed to simply being mistaken or biased), you have probably made a mistake somewhere.
Stephen Colbert said it well on his August 15, 2011 show:
I have some very rational friends that think so.
Alicorn and I are both wondering how one goes about making this. I totally want some. I think she’s just morbidly curious.
That’s not all that’ll be morbid when you’re done!
Typically, to deep fry things that would normally melt in the frying process (cheese, candy bars, and etc.) you freeze them rock-solid beforehand.
Right, but butter? Do you at least dunk it in batter or something first?
First hit on Google for “deep fried butter”.
Wait, they use honey? That sounds like it would be terrible for you!
In this case, you could say it was instrumentally wrong to insert the jab into the discussion, but that assumes that the solid digs served no other purpose, like demonstrating in-group credentials.
I’ve got a real world example of this. Daniel Dennett was lecturing on competence without comprehension (I think). But if you followed out his logic a step or two, he would appear to be getting perilously close to advocating free market policies. The next slide in his presentation had the universal “prohibited” symbol of a red circle with a red slash across it, with “Milton Friedman” slashed through. In the talk, while he lauded Darwin and Turing for recognizing competence without comprehension, he curiously left Adam Smith, who preceded both, off his pantheon of theorists.
“Zombie Bill”, Halloween special educational rock song.
Boy: Woof! You sure gotta climb a lot of steps to get to this Capitol Building here in Washington. But I wonder who that sad little scrap of paper is?
I’m a dead bill
Yes, I’m a dead bill
If you’re on my side you’ll get your mind killed.
Well, it was a long, long journey
To the capital city.
It was a long, long wait
And then I died in committee,
But I know I’ll eat your brain someday
At least I hope and pray that I will,
For today I am a zombie bill.
Boy: Gee, Bill, you certainly have a lust to devour people’s brains.
Bill: Well, I’m a zombie. When I started, I wasn’t even political, I was just a reasonable consideration. Some folks back home forgot that policy debates should not appear one-sided, so they called their local Congressman -
Boy: - and he said, “You’re right, there oughta be a law”?
Bill: No! Then he decided to rename the bill that he had already decided to submit once both parties had promised him it wouldn’t pass.
Boy: You were renamed even though your content didn’t change?
Bill: That’s right! He was going to call me the “American Job Security Free Choice Accountability Reform Reinvestment Relief Act”.
Boy: And then he decided to just call you “William”, and your nickname became “Bill”?
Bill: No, after hearing his constituents’ opinions, he decided to call me the “Aumann’s Rational Bayesian Utility Anti-Bias Act!” And I became a bill, and I’ll kill your mind even though my content has some merit.
I’m a dead bill
Yes, I’m a dead bill
And I got as far as Capitol Hill.
Well, I died stuck in committee
And I’ll sit here and wait
Though no one will honestly discuss or debate
Whether they should let me be a law.
Of human minds I’ll eat up my fill,
For today I am a zombie bill.
Boy: Listen to those people arguing! Is all that discussion and debate about you?
Bill: Yeah, I’m one of the lucky ones. Most bills are entirely ignored. I hope they decide to take me seriously as one argument against an army, otherwise I may starve.
Boy: Starve?
Bill: Yeah, from not eating brains. Oooh, but they’re not updating incrementally! It looks like I’m gonna eat! Now I go to the House of Representatives; they talk about me.
Boy: When they talk, then what happens?
Bill: Then I go on various media and gorge myself on the minds of the audience.
Boy: Oh no!
Bill: Oh yes!
I’m a dead bill
Yes, I’m a dead bill
They’ll never vote for me on Capitol Hill
Well, I’m off to the White House
Where I’ll wait in a line
As a speech applause light
And then on some brains I’ll dine
With luck they’ll try to argue facts away.
How I hope and pray that they will,
For today I am a zombie bill.
Boy: You mean even if everyone has enough information to know you shouldn’t and won’t become a law, people still sacrifice their brains to you?
Bill: Yes! They’re debating politics as if their opinion was influential and admitting being wrong was catastrophic, using heuristics that used to work in the ancestral environment. If the content described by my label becomes political…
Boy: By that time it’s very likely that you’ll devour lots of minds, whenever either your content or your label is mentioned. It’s easy to eat a human mind, isn’t it?
Bill: Yes!
And how I hope and I pray that I will,
For today I am a zombie bill.
Congressman: Your name has become a synonym for “good” among some people, Zombie Bill! Now people won’t be able to dispassionately consider your content ever again!
Bill: BRAINS!!!
Awesome!
Don’t you mean “rational!”?
An unstudied cognitive bias is what’s really responsible for political irrationality. Less Wrong could tackle politics if it recognized and managed this form of irrationality, which I term opinion-belief confusion.
To understand some biases you must understand the biological function of the relevant practices. Belief is for action; opinion is for deliberation. Belief, per the Agreement Theorem, is usually highly sensitive to the beliefs of others; opinion abstracts from such influence.
Irrationality in politics is mostly a matter of being far too confident in one’s opinions, and one fallacy is paramount in causing this error: treating mere opinions as though they were one’s beliefs. This confusion is widespread because democracy tends to promote this form of epistemic arrogance. Tackling opinion-belief confusion would allow rational discussion of politics insofar as participants can accept that on most issues their beliefs and opinions will and should differ from each other. From this recognition, it follows that the discussion of political opinion should be conducted with the requisite tentativeness and intellectual humility.
I discuss the politically important opinion-belief-confusion fallacy at: “Two kinds of belief”, “Is epistemic equality a fiction?”, “The distinct functions of belief and opinion”, “Pathologies of belief-opinion confusion”, and “Explaining deliberation”.
Evolutionary psychology doesn’t condemn us to political irrationality. Hunter-gatherers can make rational decisions as a group regarding matters of practical concern—for example, whether and where to move the camp. (But more anthropological detail would be helpful.)
If rational thinking is about understanding and seeing true reality, how can you avoid politics as a discussion issue? It is a social practice in which every person participates. A rational analysis can take into account that “people go funny in the head” and still result in well thought out conclusions.
This would be the local dilemma in a nutshell, yes. People are interested in winning at real life as they see it, and, if you tell them “rationalists should WIN” then they’ll say “OK” and try to apply it to what they presently see as their problems … but actually discussing anything political on LessWrong has gone badly enough that quite a lot of the community now behaves phobically even to allusion to politics, going so far as to euphemise the word to “mindkilling.” It’s not clear how to get past this one. (I have a vague idea that worked examples of success in doing so might help.)
edit: hrm. Reason for downvote?
Why do you judge that the past history has made us irrationally averse to discussing politics, rather than rationally averse?
Because the responses look to me more like conditioned reaction than something considered.
If it is, as you hypothesise, rational to avoid even slightly politically-tinged discussion to this degree, then that greatly reduces the hope of raising the sanity waterline. Because very few problems people want and need to solve are going to be free of such a tinge.
As I’ve noted elsewhere, this doesn’t mean I think we should dive headfirst into it on LW. I don’t have a handy solution. But I do think it’s a problem.
I’m a bit new to all of this, but it’s oddly convenient to conclude that it is rational to ignore a topic that doesn’t lend itself to classic rational thought.
It’s a question of whether to respond to a track record of failure by going off and doing something else instead or persevering. When is it best to attend to developing one’s strengths, and when to attend to remedying one’s weaknesses?
What is your focus—i.e., what is the ideal goal you are saying is difficult or impossible to achieve, such that it is rational to avoid it? What goal do you find elusive here—personal understanding of the correct “answer” in spite of biases, “raising the sanity waterline” as someone mentioned above, or something else?
Both these items suggest a need for a definitive answer to political questions, and I’m not sure that is the correct focus.
If applying rational thought to politics has a track record of failure, and we agree politics is a part of everyone’s reality, do you think rational thought cannot explain politics—that this is an inherent shortcoming of the theory? (This is another way of saying we should move on to other things.) We talk about rationality like it’s the way to live life. It’s troubling that it cannot answer or explain political issues, which shape our government, laws, and community. The value of a theory should partially be tested on the issues and questions it cannot answer. If there are things rational thinking cannot solve, that is a problem with rational theory, not the particular subject matter.
No, merely a contingent failure of people almost everywhere and always.
So it’s a problem of the individual, not the theory. I’m not sure how you conclude that if no one can apply the theory to prove it.
OK, understood. I wasn’t asking we broaden the discussion here, as it is very good, just curious as to the thinking. Thanks.
Sorry, what are you referring to in your last parenthetical?
The problem is that 1) there’s no one to do a rational analysis if everyone goes funny in the head, and 2) “people go funny in the head” too easily becomes a fully general counterargument when one tries to take it into account.
I guess that depends on your definition of rational analysis. I think the fully general counterarguments you mention are very valuable in terms of understanding your ideological opponents (but of course not in achieving your agenda). Their handicap makes it significantly easier to understand their motivations and actions, which I think is related to understanding and seeing true reality—their irrationality is tied into your reality.
You forget that you are attempting to run this rational analysis on corrupted hardware. Remember that you have gone funny in the head, and will ascribe it to your opponents but not your allies and you won’t notice you’re doing it. Or at least, you have to assume that that’s likely, because from the outside view that’s how people tend to work, including being unaware of it.
I think personal biases are more of an issue if you are drawing particular conclusions about political issues. The beauty of politics is that there is just enough uncertainty to make every position appear plausible to some portion of the public, even in those rare cases where there is definitive “proof” (however defined) that one particular position is correct. Rationality in some ways is meant to better understand reality; however, politics puts pressure on the meaning of “reality.” People’s beliefs about political reality rarely match up with others’ because perspectives, values, and thought processes often fill in for the inability to nail down or prove any one answer from a traditionally rational perspective. Perhaps the “rational” solution is focusing instead on the inherent uncertainty underlying any and every position, ignoring what may be or is “right,” and using that knowledge to get a better worldview. A better understanding of the uncertainty in politics could in some ways provide a level of certainty that rationalists can normally only achieve (I think) by drawing rational conclusions.
I hear your point, hopeful for a solution.
Well, that doesn’t sound very beautiful.
It’s beautiful in its complexity. It’s amazing (not in a critical sense, but as an observer) that no one can be definitively right in a valuable way about anything. As a reality of life that we must accept and deal with, I think it’s fascinating—a seemingly impenetrable issue.
I heartily encourage you to perform such analyses, as well-thought-out conclusions are very useful things to have.
That said, given what I’ve seen of the attempts to do so here, I don’t endorse doing so here unless you have a good model of why it fails and why your attempt will do better.
If I had a solid dig I would praise myself for taking it to the twelfth round, while failing to land a knockout! Congratulations on co-moderator; mind-stimulating variants of discussion on “Politics is a mind killer.” Bravo to the thinkers and reasonable theories and offsets. I found myself returning to the original text to determine if my mind was still on track! For the most part it (my mind) got sucked in by the variant, signed up, and, well, I’ll just keep my humor to myself! Here we go!
Can we get a citation for “The evolutionary reasons for this are so obvious as to be worth belaboring: In the ancestral environment, politics was a matter of life and death”?
I am just interested in how this was concluded. I have always been a little skeptical of evolutionary-psychology-type things, which is what this sounds like.
It seems discussing politics is particularly difficult here because, per the article “What Do We Mean by Rationality,” Less Wrong members generally reject a non-normative meaning of rationality. This presumes that a rational answer to any particular issue is necessarily a normative conclusion—i.e., there is an ideal/correct answer. I appreciate the approach, but if the point is to “think more clearly/correctly,” how can we reject the possibility that there is no normative answer? This is particularly important as there is increasing uncertainty as to what the correct decision should be. Politics is a perfect example—it generally deals with policies in the FUTURE for which there is no good comparable.
The commentators all evaluate politics from the viewpoint of the decision makers—and describe how our biases and such are too overwhelming to apply rationality to politics—but perhaps the flaw instead is trying to create distinct answers for issues that do not have one. Going “funny in the head” may be a sign that the chosen framework is inappropriate.
The difficult part about finding the optimal perfect-rationalist “right answer” for things related to politics is that politics is like an exceptionally difficult, complex, and heavy computer program currently being coded by hundreds of programmers, most of whom have no formal Computer Science education—and then trying to produce optimal software out of it with only the help of two or three of those coders: the best possible program that achieves absolutely everything the client wants in exactly the best possible way.
Unfortunately, the example program is so complex that near-optimal solutions do not converge toward the same location in the conceptspace of possible programs, and each programmer has his own idea of what might be good. So you have a large multitude of possible local maxima, all of unknown order of magnitude (let alone being able to decide which is better) and unknown cost (and you can rule out perfect cost-effectiveness calculations), and often even with unclear value of information that varies as a function of the properties of each area of conceptspace (e.g., it has a higher expected human-values cost to experiment with totalitarian-like forms of government than with democratic ones, to paint a vague picture).
Overall, not only is there a ton of biases, but information is costly and the space of possibilities is vast, and the near-optimals or optimization candidates / hypotheses are not condensed or sometimes not even remotely near each other. Thus, discussing politics rationally isn’t just difficult here—politics are a set (space? field?) of complex Hard problems with tons of data, variables and unknowns, and would probably still be among the more difficult problems to solve if all humans were suddenly replaced with perfect bayesian agents.
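The optimization-landscape analogy above can be made concrete with a toy sketch (my own illustration, not the commenter’s; the function, step size, and starting points are arbitrary choices): greedy hill-climbing on a curve with two separate peaks ends up at a different local maximum depending on where it starts, just as different programmers converge on different near-optimal designs with no easy way to compare them.

```python
def f(x):
    # A function with two separate local maxima, at x = -1 and x = +1.
    return -(x * x - 1) ** 2

def hill_climb(x, step=0.1, iters=100):
    """Greedy local search: move to a neighboring point whenever it improves f."""
    for _ in range(iters):
        best = max((x - step, x, x + step), key=f)
        if best == x:  # no neighbor is better: we are at a local maximum
            break
        x = best
    return round(x, 6)

print(hill_climb(-2.0))  # climbs to the peak at -1.0
print(hill_climb(0.5))   # climbs to the peak at +1.0
```

Neither run can tell you which peak is “really” better without a global view of the landscape, which is the commenter’s point about political conceptspace.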
Thanks, I agree with nearly all your points but want to push on a particular point you made (btw, how do you guys have that blue line to show you are responding to a particular comment??):
“Thus, discussing politics rationally isn’t just difficult here—politics are a set (space? field?) of complex Hard problems with tons of data, variables and unknowns, and would probably still be among the more difficult problems to solve if all humans were suddenly replaced with perfect bayesian agents.”
I would argue that politics is difficult to rationalize BECAUSE politics is in a separate space/field. In other words, I think discussing politics rationally in a manner consistent with Less Wrong’s definition of rationality (see the “What Do We Mean by Rationality” article) is impractical and does not further any knowledge, because the definition simply does not apply the way it can apply to other areas discussed here. Going “funny in the head” is not the reason we cannot apply rationality to politics; we go “funny in the head” because we are using a model that does not work—we are trying to find answers to questions that, as you describe, are subject to so much uncertainty that we are forced to resort to biases. We fail to consider the possibility that there is no right answer—and for those who argue that there is an answer but humans can’t reach it (a HUGELY convenient position), that is the same thing, practically speaking, as not having an answer:
If the problem is the model, not the people, change the model to one where the search is not for the right answer but for a deep understanding of why particular people have particular viewpoints and the relative arguments therefor. Sure, it’s not an “answer” to how the world is (or should be), but it’s a huge step forward in understanding how the world works—a noble goal if you ask me. The current model of rationality used here simply doesn’t allow for this. We are obsessed with certainty, even when there is more value to be derived from better understanding the relative uncertainty.
In his article on rationalization (contrasting it with rationality), Eliezer says: “”Rationalization” is a backward flow from conclusion to selected evidence. First you write down the bottom line, which is known and fixed; the purpose of your processing is to find out which arguments you should write down on the lines above. This, not the bottom line, is the variable unknown to the running process.”
On a most general level, it seems the very definition of “rationality,” requiring a normative conclusion, is a result of rationalization. More specifically, saying “politics is a mind-killer” to avoid applying rationality to politics, and then telling us why people are flawed and can’t analyze these things, also sounds a lot like rationalization. Is that a forward-flowing, rational conclusion? No one here can or will apply rationality in coming to political conclusions (whether a firm answer or not)—so how can you tell me that it’s a mind-killer? Perhaps politics is not a mind-killer; instead, politics within a restrictive definition of rationality is a mind-killer. These are not fighting words. I just want to understand.
Maybe you missed EY’s point, or maybe I’m missing yours. Politics definitely can be discussed rationally, but it is really, really hard to keep your identity small while doing so. Every participant in a political discussion has to be constantly aware of their own emotions fueled by cached arguments associated with a specific wing/party/position, and be skilled at modeling how potential readers would inadvertently misinterpret one’s statement, causing them emotional upheaval. And it only takes a small misstep to get people riled up about the issue.
The rule of thumb is “if you identify with any political party/group/movement, you are not qualified to have a rational discourse about politics”. Example: if you want to start your reply with “as a libertarian, I …”, you have failed. Another example of a false start: “Republicans do not understand that …”
Thanks, and I appreciated Paul’s article—very interesting and insightful.
Let me try to clarify --
One of the issues causing confusion is that the definition of rationality is not commonly accepted/subject to some dispute. My understanding of EY’s perspective on the definition of rationality is based on his article: What do we mean by rationality
EY is saying that applying rationality yields a normative answer—and that LW is not receptive to a different idea, such as a model where an argument can be rational but still not be the “correct”/“true” answer. My argument is that rationality, as EY defines it, does not work with respect to politics because political issues do not have correct answers (I’ll get to why shortly). So I don’t disagree with your point that politics can be discussed rationally—I just have a different definition of rationality when it comes to politics.
I read Paul’s article—it was very good. I have previously considered the idea that in politics or religion everyone is an “expert,” and the idea of identities intertwined with people’s positions—no doubt insightful, but I think it’s incomplete. (I also note that his argument that politics sometimes has definite answers is baffling—the cost of a government policy is NEVER certain, simply because people can’t predict the future or how people will behave in the future.)
The issue and uniqueness of politics is NOT that everyone is an expert—it’s that everyone is a participant, in a real and legitimate way, as a voter or policy maker or government leader. As such, politics is truly a social issue—analytical analysis is possible, but you’re NEVER going to get a clear answer—the social issues are forever intertwined with policy. Remember, regardless of how much weight you may put on ideal policies/laws/regulations, the ability of any leader to implement those policies is WHOLLY CONTINGENT on winning an election, thus drawing all potential voters into the discussion/decision. Another way to think about this is trying to answer the question “how to be a good mother”—this is a social issue between a mother and her kids within the context of their familial unit/environment. You may have high-level guidance, but no one can answer this question—it’s a dynamic issue that is forever unique in ways that can never yield an answer. I believe politics is the same.
Again, I think politics can be discussed rationally, but in a different context—it should be analyzed like any other social issue. For example, when there is a personal conflict, there are theories on how to handle it—you have an approach, but part of it depends on how the other person reacts, their positions, their biases, and WHY they have their particular perspective. Right/Wrong is sometimes irrelevant because in social issues being correct is a secondary concern to managing the social relationship (including biases/emotions/identities). Rationality is more subjective when it comes to politics—and it is very possible to have two positions that are “subjectively” rational but contradict each other with respect to a particular issue—in the same way you and your friend can disagree on whether you should study x or y, or whether you should date a or b—both can have valid arguments, but ultimately a decision must be made. Focusing on the “right” answer is fruitless—rationality is based on having the emotional intelligence to understand the dynamics and uncertainty of this particular social relationship.
You may disagree, and that’s fine—I’m trying to learn, and this is an exercise that is not easy. However, I point out that it provides an explanation for why rationality (as EY defines it) has not yielded a clear answer and thus is a “mind killer.” I think the model definition of rationality used here is simply wrong when applied to politics.
Gokhalea’s point is largely that political analyses are so difficult that attempting to apply rationality to them will still produce biased and nonsensical results. You didn’t really address that.
Gokhalea, I agree that rational analyses of politics are difficult. You seem to believe that they’re functionally impossible. Can you explain why? Also, I don’t understand why you feel that avoiding politics on LessWrong is a form of rationalization. What’s motivating this rationalization? Finally, I don’t understand why you feel that a model of politics which seeks to understand different political positions rather than resolve them is useful.
Thanks, I tried to explain above. Less Wrong’s conclusion on analyzing politics is flawed because it is based on the assumption that rationality with respect to politics requires an ideal answer. Pointing out that biases/emotions/etc. are ever-present is used to protect the idea that rationality in its purest form always results in a normative answer. “Our model of rationality is always correct—it’s just that the people are flawed!!!” I disagree. The model is wrong. The people are playing their role as members of a social dynamic—rationality in politics is dependent on their biases, not to be avoided because of them.
The value is awareness—that is the true goal. To have an understanding of what is going on around you without confusion, anger, or unwanted emotions. Rationality is about seeing the world “as it is.” The world is social, and I want an understanding of how the world works, with its participants and their various viewpoints, perspectives, beliefs, and actions. I’m not trying to be “right”—frankly, I have political positions but don’t really care—they are a secondary concern to understanding the social dynamic.
People here try to apply rationality to politics all the time. “Politics is the mind-killer” is an observation about its success rate.
If you begin a paragraph with >, it will put it in block quote format.
It really is blue—I’d been assuming it was black. Did it used to be black?
I recall it as being blue since I arrived (getting close to two years ago). I have not paid close attention before now.
I’m not sure what the right way to ask for policy clarification is, so I’ll try this.
In a recent discussion in comments, I was alerted to the ‘standing agreement on LW not to discuss politics’. It was in a context I found perplexing (the question as to whether political theory is something worth keeping in philosophy departments) http://lesswrong.com/lw/frp/train_philosophers_with_pearl_and_kahneman_not/842a
There are a number of ways that I think rationality relates (mostly in a broad sense) to political theory. This is a common thread among philosophers, including some fairly contemporary and quite good ones.
I’ve started trying to participate in this forum partly because I wanted to bring them up here. I’ve gotten the impression from a friend who is involved in the community that these ideas are relatively unknown here but would be pertinent or at least interesting.
Is posting in this way off limits by some community norm? Or is discussion of political theory (as it relates to reason and rationality) ok as long as it is not deliberately inflammatory? (The latter seems to be the spirit of this post)
Thanks for any clarification.
In logic, most examples are from politics because the most salient examples of logical fallacies are from politics. So that’s probably why the Nixon example was about politics, even though it wasn’t necessary.
Very neat and thought provoking.
One of the most farcical instances of this tragedy is when people succeed in using biased and alienating political examples while trying to explain how politics is the mind-killer—for example, Vox’s recent post on “How Politics Makes Us Stupid” (the link goes to my blog post on the subject, which discusses the underlying Vox article).
I do believe this post uses a limited definition of politics, although quite legitimately. Most people tend to essentialize politics; for example, a policy will be considered left/right wing because of its proponents rather than its content. However, discussing the internal rationality of a politico-philosophical system is interesting, but it implies a redefinition of politics as a cost-benefit analysis of the use of a particular model of reality for the purpose of constructing laws.
In that case, the “What about the Nazis?” argument is no longer a problem, because you can show that the politico-philosophical system of National Socialism had some benefits (Nazi Germany became a huge economic power) for a huge cost (genocide, war, nationalism, etc.).
Discussing a particular subject is therefore a problem, but discussing the models is interesting in a rational debate.
This assumes that a model of reality is necessary to build laws. I permit myself this assumption because: a) laws are part of the system called the Law; b) the Law cannot deal in specifics (there cannot be a law directed at a specific individual, or a list of specific individuals); c) therefore the Law is a model of the society as it should be; d) such a model cannot be constructed without knowledge of how society is, and why, and how it goes from one state to another; and e) this knowledge itself must be a model to be usable.
Supporting evidence for this is in the news this week
You know, the only thing worse than arguing about politics is arguing why one shouldn’t argue about politics.
Seriously though, while this post is/was important, I still think there should have been a request to not debate politics in this post’s comment section, because you know, explaining why it’s bad to debate politics in science blogs apparently wasn’t enough.
Hi,
I am a bit surprised that contemporary politics is somewhat suppressed here (by the FAQ). Well, I understand the reason: it is a controversial topic in society. I get that people tend to be biased about it. This is just because it is such a wide topic and a lot of people have a political standpoint. I agree that it is probably better to train rationality on less known topics.
So, what is confusing me?
I think there is another topic with a similar level of controversy in society: religion. I can see the analogy. In my view, arguing with a religious person is similar to arguing with a political “opponent.” Both topics are very complex (from philosophical standpoints through world issues to daily life). There can be many misinterpretations, a lot of uncertainty, and other problems.
Isn’t the “Quaker Republican” example an argument against discussing religion as well? I think the article could easily be modified: “A Christian may read your blog, so be careful and do not address the philosophy as a whole,” “you should rather discuss ancient religions,” and so on.
Despite this, I have not met a warning against promoting atheism here. Why?
I can see a correlation: by surveys, readers of the site mostly consider themselves atheists, but their political leanings are colorful (social democratic, liberal, libertarian). Yet the causality is not so clear to me. What do you think? Would the site end up with a similar “rational” political consensus if political discussion were allowed?
Yeah, there’s a communally endorsed position on which religion(s) is/are correct (“none of them are correct”), but there is no similar communally endorsed position on which political ideology(ies) is/are correct.
There’s also no similar communally endorsed position on which brand of car is best, but there’s no ban on discussion of cars, because in our experience discussions of car brands, unlike discussions of political ideologies, tend to stay relatively civil and productive.
I find it extremely unlikely. It certainly hasn’t happened in the past.
I don’t think we have discussions about which political ideology is correct. Most political discussions are about other issues. I would also hold that political ideologies are mostly wrong. For most issues it makes a lot more sense to study the issue in detail than to try to have an opinion based on a precached ideology.
Yup, agreed with all of this. (Well, I do think we have had discussions about which political ideology is correct, but I agree that we shy away from them and endorse political discussions about issues.)
Someone who follows a political ideology is a hedgehog and therefore likely to make bad predictions. I’m not sure whether there’s a consensus, but I think the “official position,” to the extent that there is one, is that this is bad. EY also wrote http://lesswrong.com/lw/gz/policy_debates_should_not_appear_onesided/
Atheists don’t hold that religions are mostly wrong. They hold that religious believers depend on untestable hypotheses and shield their beliefs from criticisms instead of engaging them.
What could we use as a political analog of atheism? Anarchists don’t deny the existence of the state, just its benevolence.
This sounds like an ideology wearing a fig leaf. When we study the issue, do we start with a blank slate, or do we have prior beliefs about facts, values and goals? Maybe you have a different interpretation of the word “ideology” than I do, but that sounds like ideology to me, and irreducible.
Agnostics don’t hold that religions are mostly wrong.
Considering religions wrong is precisely what differentiates atheists from agnostics.
I have come across atheists who hold—sometimes quite loudly—that all religions are completely wrong.
I have no doubt that some think as you describe, but most certainly not all.
If you have one ideology that you use to explain all political events you are a hedgehog. In contrast to that foxes use multiple distinct thought systems and are not committed to any single one.
Philip E. Tetlock found in his Good Judgment Project that foxes are more likely to make accurate predictions about political events than hedgehogs. Tetlock wrote, before EY’s sequences, that everybody should be a Bayesian and that being a Bayesian is about updating.
When it comes to the issue of whether the minimum wage reduces employment, a stereotypical Conservative might tell you “Of course the minimum wage reduces employment” and a stereotypical Liberal “Of course the minimum wage doesn’t reduce employment.” I would tell you “I don’t think the evidence is conclusive either way,” because I don’t want to let value judgments affect my beliefs about causation.
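The “being a Bayesian is about updating” point can be shown with a minimal sketch (my own illustration with made-up numbers; `update` is a hypothetical helper, not from any library): Bayes’ rule dictates how much a piece of evidence should shift your credence, independently of which answer you would prefer.

```python
# Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E), where
# P(E) = P(E|H) * P(H) + P(E|not-H) * P(not-H).
def update(prior, p_e_given_h, p_e_given_not_h):
    """Return the posterior probability of hypothesis H after seeing evidence E."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# Hypothetical numbers: start at 50% credence that a policy reduces employment,
# then observe a study result twice as likely under H as under not-H.
posterior = update(prior=0.5, p_e_given_h=0.8, p_e_given_not_h=0.4)
print(round(posterior, 3))  # 0.667
```

The same arithmetic applies whichever side the evidence favors, which is what separates updating from cheering for a cached position.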
Wouldn’t that be a special case of most beliefs being wrong?
There isn’t enough time to study everything in detail, but there is the option of not having an opinion about what you haven’t studied.
We can’t help but bring our existing ideology to whatever we study, but that doesn’t mean someone who says “study X” means “study X in terms of your ideology.”
You mean that it didn’t happen here, or in the global society? Discussions about religion seem to me to be equally unproductive in general.
I can imagine that if the site endorsed a political ideology, its readers might become biased toward it (even if just by selection of readers). Surely that is not the intent of the site. But there is a possibility that that is what happened with the religion issue...
I mean that it’s unlikely that “the site [would] end up with a similar “rational” political consensus if political discussion went through”.
In the global society? I agree.
Sure, that’s possible.
Sure, that’s possible.
Also, let me cut to the chase a little bit, here.
The subtext I’m picking up from our exchange is that you object to the site’s endorsement of atheism, but are reluctant to challenge it overtly for fear of social sanction (downvotes, critical comments, etc.). So instead of challenging it, you are raising the overt topic of the site’s unwillingness to endorse a specific political ideology, and taking opportunities as they arise to implicitly establish equivalences between religion and politics, with the intention of implicitly arguing that the site’s willingness to endorse a specific religious ideology (atheism) is inconsistent.
Have I correctly understood your subtext?
Yes, in the global society.
Perhaps, partially. But I don’t think that it is accurate. I did not choose the political topic just as a cover. I have opinions about both topics, and I like controversial discussions about both of them. I consider myself an atheist and I have my favorite political direction (I won’t mention it; I respect the rules of the site). It just does not seem to me that my philosophical opinions are more rational than my political opinions.
I do not object to the site’s atheism. I like atheist sites. But it seemed to me that the site claims to be “atheist because of rationality”. If that were true, it would be a very nice indicator supporting my opinion. On the other hand, for example, a variant of the “Committee for Skeptical Inquiry” in my (mainly atheist) country forbids itself to talk about religion, and some of its major members are Christians. So I asked here and got an answer.
As you pointed out yourself, most people involved with the site at the beginning were atheists. That is because of association with a group of people who were mainly atheists from the beginning. But they did not all agree on politics.
As a consequence, discussion of politics was discouraged because it would lead to contention and disagreement among those original people.
Discussion of religion, in the sense of disparagement of religion, was not discouraged, since it would not lead to contention and disagreement, given that the original group was atheist.
But in the early years, mentioning religion without directly saying it is false or bad would almost always be heavily downvoted, even if you did not assert that it was true. That happened without there being an official norm that you could not do that, simply because of the large proportion of atheists. The only exceptions (in the early years that is) were for people who favored religion but presented themselves as having basically something like a dhimmi status in relation to atheism. That of course got rid of most people interested in discussing religion, but a norm like the politics one was unnecessary, because of the presumed agreement on atheism.
But you are right that the difference was accidental, and based on the original group characteristics. If the original group had contained a mix of religious people with diverse religious views, the site would likely be that way to this day, and direct discussion of religious topics would be discouraged in the same way that politics currently is. It has nothing to do with what views are reasonable. Some views on religion are more reasonable than others, and some views on politics are more reasonable than others, but for most people, the views that they hold on these topics are not principally motivated by reason. That applies to both religion and politics, and it applies to people on Less Wrong almost as much as to ordinary people.
Thank you for clarifying the history of the site and the community. I expected something of that sort.
But I wasn’t sure how much the local community is resistant to biases (and how it is confident in that), so the original question was perhaps a bit indirect.
So I am glad that I haven’t been heavily downvoted yet. Religion is false, of course :-)
That doesn’t really happen much anymore, if at all, for a number of reasons, the most important one being that everyone has stopped reading this site at this point.
“Politics is an extension of war by other means. Arguments are soldiers. Once you know which side you’re on, you must support all arguments of that side, and attack all arguments that appear to favor the enemy side; otherwise it’s like stabbing your soldiers in the back—providing aid and comfort to the enemy.”
I first read this article 6 years ago, back when I knew nothing about politics and had never had a political discussion with anyone before. I was incredibly puzzled by it. I thought, “Maybe that kind of black-and-white thinking and argumentation exists for people who become invested in trivial things like sports teams, but surely that’s not really how most of society talks about something like politics. My friends are scientists. They know better than that.”
Fast forward to 6 years later, today. I’ve learned that the article is painfully accurate. It doesn’t matter whether my peer has a PhD in biology or a master’s in chemistry. No matter what their apparent commitment to rational thinking, almost all of them denounce a political party in the US for (sometimes, they claim, singlehandedly) causing the failings of the entire country.
I feel afraid to even express that views from the political party my peers hate sometimes seem reasonable to me. This is even when I don’t agree with the ideas, even when I don’t claim that the other party is right. I’m afraid to even say “this idea from the other political party might have a grain of truth to it,” because their hatred is so strong and their reaction to such words from me is so immediate and negative. They claim to be open to other ideas, but as soon as they disagree, they start a long in-depth explanation of why the opposing ideas are completely wrong, sometimes without even letting the speaker finish introducing them. Whereas in any other subject my peers would praise me for considering other viewpoints and carefully weighing evidence, when it comes to politics, even considering the other side seems to actually feel treasonous to them.
I feel saddened, remembering my skepticism from then. The world is not nearly as rational as I hoped it was when I was 6 years younger.
It’s just a means to an end for most people. The end is solidarity and gaining social status and self-esteem within their solidarity circle. Are they really making any real impact through their participation? Even if they do “research,” they are just extracting results that others have gathered. They don’t actually have any access to the institutions directly related to those issues, whether it’s CDC or DoD. If they did have a role in those institutions, they wouldn’t be participating in layman discussion outside of their profession in the first place. Do you really see professional politicians or medical researchers directly engaging the public regarding their job or research on social media outside of a few instances of Reddit AMA?
The public layer of politics is not rational in any way; it’s designed to be deceptive. In my opinion you cannot expect a rational political conversation as a voter. All the rational talks happen behind closed doors, while the general public indulges in irrational political conversations.
Maybe it’s not ideal, but this is the current state of the world. One way or another we will evolve through it.
In war, this manifests as a Pascal’s Mugging.
Is there a name for the following paradox:
Country A says “We have to fight Country B. Yes some people will die. But if we do nothing, Country B will attack us and 10 times more people will die!”
Country B says “We have to fight Country A. Yes some people will die. But if we do nothing, Country A will attack us and 10 times more people will die!”
Even though both countries seem to want as few deaths as possible, their actions combine (and escalate) to cause more deaths.
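The escalation structure described in those two mirrored statements can be sketched as a toy payoff matrix (all the death counts below are illustrative assumptions of mine, not numbers from the thread). It has the same shape as a Prisoner’s Dilemma, which is how the security dilemma is usually modeled:

```python
# Toy model of the two-country escalation paradox. All numbers are
# illustrative assumptions: attacking first costs "some people" (1),
# being attacked while waiting costs 10x, and mutual attack costs each
# side 5 (fighting back is assumed less bad than being caught waiting).

DEATHS = {
    # (A's choice, B's choice) -> (deaths in A, deaths in B)
    ("wait", "wait"):     (0, 0),
    ("attack", "wait"):   (1, 10),
    ("wait", "attack"):   (10, 1),
    ("attack", "attack"): (5, 5),
}

def best_reply(my_options, their_choice, am_a):
    """Pick the option that minimizes my own country's deaths,
    given what I believe the other country will do."""
    def my_deaths(choice):
        key = (choice, their_choice) if am_a else (their_choice, choice)
        return DEATHS[key][0 if am_a else 1]
    return min(my_options, key=my_deaths)

# If each country is sure the other will attack, attacking is each
# side's death-minimizing reply...
print(best_reply(("wait", "attack"), "attack", am_a=True))   # -> attack
print(best_reply(("wait", "attack"), "attack", am_a=False))  # -> attack

# ...yet the resulting outcome kills more people in total than if
# both had waited.
print(sum(DEATHS[("attack", "attack")]), ">", sum(DEATHS[("wait", "wait")]))
```

Under these assumed payoffs, each country’s belief that the other will attack makes attacking the individually death-minimizing choice, even though both attacking is collectively worse than both waiting, which is exactly the paradox the comment describes.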
The more immediate reason for why politics is the mind-killer is because politics is still a matter of life and death today. Even if it weren’t, so many people still believe it is and that causes them to act in hostile ways to protect themselves from each other. And that, ironically, causes politics to become a matter of life and death for real even if it weren’t already.
To clarify in case that’s not clear, people are angry at and scared of each other over RECENT traumatic experiences in their personal lives which they systematically inflicted upon each other across their political divides.
That’s what makes it impossible to have an intelligent and coherent conversation with them about any people and ideas which they might consider “on the other side”.
It’s not just genetic evolution, like the above post seems to imply. Changes in our ancestors’ genomes influenced our ancestors’ behavior, and so the environmental impact of our ancestors’ behavior changed accordingly. The resulting changes in our ancestors’ environments led to changes in the life experiences our ancestors went through in those environments.
Or in other words, the environment evolves in tandem with the genes of species which live in it. Just because genes are easier to track and describe than environments doesn’t mean they’re more important than environments when it comes to explaining phenomena in human psychology like the “Politics is the Mindkiller” effect.
Also, to say that genetic evolution is the cause of our human neighbors’ behavior around politics today is like saying that the Big Bang is the cause for the existence of our planet’s Moon. Technically true, but since the Big Bang caused everything else in our universe too it’s not really specific enough to be especially useful when it comes to explaining the origin of OUR planet’s Moon in particular.
Likewise, the environmental experiences and genes of modern humanity’s ancestors caused the behavior of all of modern humanity, but that doesn’t make it especially useful for explaining why YOUR next-door neighbor in particular is failing to talk and think coherently about politics.
I’ve tried to explain all of these exact points to other rationalists and scientists many times, but your minds tended to go funny in the head and shut down, you stopped listening and you interrupted me whenever I’ve tried to say all of this, and reacted with extreme hostility when I’ve tried to continue explaining anyway.
Politics kills your minds just as easily as everyone else’s, despite your admirable efforts. It’s just that the issues which are most political for you are different from other people’s.
But how should one deal with the desire to publicly disagree with political speakers who are biased? I feel a big urge to criticize, and maybe even shame, people for wrong reasoning. I’d like to note that I often don’t mind the conclusion they came to, but if the reasoning is egregiously wrong, I really feel I want to tell them and everyone else that they’re an idiot.
There’s no simple answer here, it’ll probably depend on your psychology, but “think about what your actual goals are, and how you might strategically accomplish them” is maybe a good start. (See Humans are not automatically strategic)
What is the difference between studying Politics and studying History?
Some people with certain identities have been actively politicized throughout history. Not understanding politics, and thus not understanding the memetics of the social world that has the power to enforce rules upon you, is not advisable if you’re a person with one such identity.
Should I attack the anti-trans legislature attacking many in my community today, in the year 2023? Should I defend the rights and freedoms of undocumented workers given their productivity per capita when adjusted for wages? Should I have participated in the 2014 protests against the Venezuelan government?
There are questions of values; the responses to these last two questions depend on ought statements, not ones which the study of natural sciences can answer as of yet, unless you choose a particular utility function humanity should abide by, like the sum of all nations’ GDP. “Should I defend X group of people as they are being criticized with ideas born from the status quo?” is not a question with a universal answer. I believe all people provide equal value to the richness of our history; therefore I believe I have to defend communities with proportionally less financial power so long as they aren’t harming other communities. What if I believed the world were a pure meritocracy instead? Wouldn’t it be fair not to care about such groups, if all they need to do is put in a little effort? What about disabled people? Do they deserve less for being less able to function in a world without accommodations for us?
Politics is not just about ideological conflict, but also about the rules for collaboration we follow, and under the current globalized economy we decided to use mostly free market competition even with industries like insurance which suffer from problems like adverse selection.
As such I think there’s room for deep intellectual inquiry in the realm of political science iff we decide to analyze things materially and not just ideologically and we decide to analyze and actually criticize the contents of people’s ideas rather than the person.
“In the ancestral environment, politics was a matter of life and death.”—this is a pretty strong statement to make with no evidence to back it up.
Your issue seems to be with tribalism, not politics.
I think the title could be a bit more specific, like “involving political parties in science discussions might not be productive”, or something similar. If using the word “politics”, it would be crucial to define what “politics” means here. The reason I say this is that “politics” is not just about political parties’ power dynamics; it also includes general policy making, strategies, and history that aim to help individuals in society, among many other aspects. These other things included in the word “politics” are crucial to consider when talking about many topics (I think it is a little bit similar to precedents in law).
(Otherwise, if this article is about not bringing everything down to the political-party level, I agree. I have observed that many things, in the US at least, are debated along political party lines, and if a political party debates about A, people reversely attribute the general social topic A to “political values or ideology”, which seems false to me.)
A recent Veritasium video talks about how people having the same level of numeracy perform worse at mathematical reasoning problems if the problem statement involves politically charged topics.
SPAAAAAAAAAM
So true. Politics is a mind killer. The time people spend on arguing politics is time that could be spent helping to make a difference in someone’s life. That is where true power is!
I agree. You should leave professional issues to the professionals, else it’s either a waste of time or you are trying to get a superficial glimpse of certain topic, which is fine if it’s not toxic. The problem with politics is that it’s too toxic for social interactions. The moment you get someone else involved with you on this garbage, you are doing them a disservice. You don’t know how they can handle toxicity. There are better things to do in life. You have the freedom to waste your own time and swim in your own toxic waste, but getting someone else to swim in your own garbage with you is just fucking pathetic. But that’s what social media is all about, getting others involved in whatever you are discussing. Most real life interactions based on real life relationships are a lot healthier than how strangers engage each other online. There is a lot more to lose in real life than online.
I know I’m bad at this too. I need to be more aware of my own participation even when I don’t think it’s negative, such as giving my own conjecture on the 2008 financial crisis and related world events. I have no background education in those topics. Even though I may feel like I’ve read enough about them, I need the humility to know better, which is why I’m going back to poetry writing now. The more obscure and detached from reality it is, the better for the mental health of the readers and writers involved, given my own personal circumstances and social implications, however unfortunate this may be.
Politics is a tool of the rich to keep people from watching who takes profits to the banks.
Sample: We argue over minimum wage while corporate execs pocket millions.
Humanity is lost in politics—a friend rhetorically asks: “Have you ever seen a rich person feed a hungry dog?”
Not all is lost—technology will help surface abuse—sunshine in governments.
Question: Is this machine generated?
Couldn’t resist getting in a dig at those reds or blues eh?
Yes. And don’t forget that, in global terms (even taking into account PPP), you’re probably among the rich if you live in a rich nation. Though I don’t know if you’re a fellow dog-owner.
That’s exactly how it works.
Jake, you may have the potential to become a productive long-term contributor to this site. You definitely have the potential to alienate all your readers and get yourself downvoted into oblivion, since that’s already happening. Why is it happening?
First, you’re political, you have a strong political agenda. That inherently causes you problems regardless of the agenda, because political activists make demands, and everyone is already busy living their lives. People are not strolling through life waiting for someone to grab them by the throat and shout the truth at them. So you definitely need to go much slower here, rather than dumping a reading list on everyone and saying “these authors solved politics”, while simultaneously saying that the truth about these matters is obvious and simple.
Second, your immediate response to getting into difficulty is to start arguing, at considerable length, that the site should function differently, that it should be possible to make posts of unlimited length, that the people who downvoted you are authoritarians and communist idiots, etc. You are exhibiting an instant persecution complex. That is something you must outgrow if you want to be politically effective, and not just a righteous street preacher ignored by passers-by.
I like this sentence.
Well, see, currently you don’t have any empirical evidence that your nanobot cure won’t kill me swiftly, and I suspect it would, so your apparent insistence that I inject myself with it right here on the spot sounds a lot like those black plague merchants to me. I would be in favor of testing the nanobot cure (assuming the nanobots aren’t self-replicating), but please don’t start the testing with live humans.
You don’t get to make a claim and then place the burden on others to step up and shoot holes in it. Unlike people, bold claims start out guilty and remain so until proven innocent.
Instead, start by providing reasonably non-ambiguous ways to measure constructs like “properly-functioning”, “freedom” and “production”, then show research or analyses supporting the correlation you want to claim. (The short form of the foregoing: “citation needed”.)
Drop the second part of the claim which is pure emotional appeal and metaphor (“machinery”, “supreme”, “violent”), but otherwise content-free.
Assuming the correlation exists, also investigate alternative explanations, acknowledging when one of these cannot immediately be ruled out by argument or by observation, so that further tests are needed.
I don’t want to say too much about the pros and cons of the LW interface, except that entry barriers do help to keep out spammers, crackpots, cultists, and others who would only come to talk, not to listen. It’s proving possible to talk with you, so, we’ll see how that ends up. I’ll let other people who have more insight into the logic of LW’s existing arrangements defend them.
I’m more interested in how your politics will play out here. I see you as a representative of a faction of opinion I’ll call Rational Transhuman Freedom. You mention Hayek and Ron Paul, but you also talk about nanobots and AGIs, and you’re big on rationality. It’s extropian Objectivism.
I have had to ask myself, what is the political sensibility of Less Wrong? I don’t mean the affiliations named in a poll, I mean the political agenda that is implicitly being expressed by people’s attitudes and priorities. In this regard, I find the emphasis on identifying which charities are the most important and effective to be the best clue. People just don’t debate policy, and how the state should act, at all. Instead they debate what the most effectively altruistic use of their spare change would be. I don’t actually know how to characterize this as a political attitude—perhaps it’s pre-political, it’s a sign of a community not yet forced to engage with the state and with political ideologies—but it’s certainly not hipster apathy.
Of course there is also an aversion to political discussion, as a big distraction, as the topic where people are most likely to become stupid, and as just not a productive way to test one’s rationality skills. On the Singularity side, there is also yet another transpolitical attitude present, a sort of monastic-slash-alchemical desire to not become entangled with the fallen world of mundane affairs, in favor of performing the great working whereby a friendly demiurge will be invoked to set it right. The world can be awful but that doesn’t mean you should run off and join the melee, because it has always been like this, and the real change will only come from superintelligence.
However, there truly are people here who are eager to use rationality to make a better world right now, and this is where LW might eventually develop some explicit stances regarding pre-Singularity politics. I consider the recent posts about Leverage Research to be one emerging political current (it had precursors, e.g. in Giles’s series on “Altruist Support”); it’s a maximalist expression of the impulse behind the discussion about optimal charities. Jake, when it comes to people making a political choice, I think this is the real competitor to the faction of Rational Transhuman Freedom, and it will be very interesting to see how that dialogue plays out, if the discussion ever manages to rise to that level.
These are competing utopianisms. Probably they express different aspects of the human utility function. Partisans of the Freedom agenda can be very eloquent when they talk about suffering caused by government, but the flip side of their political methodology is that you’re not allowed to use government to solve problems either, and this is what galls the defenders of more familiar, “statist” ideas of governance. Pursuing the Freedom agenda ends up mostly being about giving individuals a chance to flourish under their own power.
The other utopianism, exemplified by Leverage’s plan for the world, is the one that wants to solve everyone’s problems. Leverage does not presently talk about coercion. Instead, they are psychological utopians, who think that if they’re smart enough, they can figure out how to get everyone to work together and behave decently towards each other. Advocates of Freedom are willing to talk about the wonders of spontaneous order, but politically they leave the details to the market and to civil society; their agenda is to starve the beast, topple Leviathan, pare back the state. As I said, it remains to be seen how this polarity will play out here, but certainly history shows that it can become a deadly rivalry.
Another intellectual challenge that might show up for you here is the critique of libertarianism produced by “Mencius Moldbug”, who is making a serious effort to revive pre-democratic ideas about how society ought to work. Mencius’s argument is that given human nature, there must always be authority, and we are better off when we have a political culture which accepts this, and understands that the good life is to be found by having good rulers. Vladimir_M is a Mencius reader, and there must be others here.
No you are not. You do not believe that random strangers should be able to enter your house at their convenience so that they may loudly share their opinions with you in your living room. Lesswrong is similarly not obliged to provide a forum for content that provides negative utility to its users.
Seriously, dude, calm down. I agree with your politics (the majority, albeit a small majority, of LW is libertarian) and I still find you obnoxious. If convincing people that your politics is best is your goal, consider how that goal is best met: Is that answer really writing walls of aggressive text on a site that has a small and overtly apolitical userbase?
This is not a political forum. Politics is generally considered off-topic here, and statements of political views will generally be downvoted immediately regardless of other content, largely for reasons outlined in this post.
“Crocker’s Rules” (if they can even apply to an entire forum) do not apply here. Do not assume someone is following Crocker’s Rules in a discussion unless they have declared it in a parent comment. See Crocker’s Rules.
Of course they can so apply. Simply make it a condition of entry...
I think I was mostly wondering about the grammar. I agree you can do that.
I think it’s a beneficial thing. That being said, I believe the FAQ does state, somewhere, that a poster has to declare Crocker’s Rules over their discussion before it’s okay to state things rudely. And even then, unnecessary, uncalled-for rudeness is not okay.
A lot of it boils down to this: most people, including us on LW however hard we try to improve our rationality, are neither free of emotions nor in perfect control of them. What I mean by that is “rudeness” and things that come across as excessively critical leave a bad taste in people’s mouths. Including mine. The discussion may be interesting, and my ideal strategy is to respond anyway in a calm, polite manner (and hope the other person will do likewise.) However, there’s still a primitive, emotional part of my brain that sees sentences unilaterally criticizing something and flinches away. It’s not a good thing. It’s not rational. But it’s human nature, and as of yet we haven’t delved deep enough to change it.
Examples of things my aforesaid primitive emotional brain finds painful to read:
and
(As an aside, I don’t actually downvote people at all as a general rule, mainly because of the phenomena I’ve observed in myself, where if one of my posts gets downvoted I suddenly start feeling like everyone hates me. Even a little bit of this persecution complex kind of thing is not conducive to me actually wanting to have a reasonable discussion.)
Also, I haven’t read Hayek or any of the other people you mentioned. The area generally referred to as “politics” is not something my brain is structured to find interesting. Still, I would be interested in hearing why you hold the views you do, i.e., what evidence about the world you have considered in order to settle on those particular views. (This came across rather fragmented in the series of back-and-forth posts.)
So does pretty much everyone on LW. We just disagree on methods. Remember that anyone who has a different opinion than you holds that opinion (usually) for what they consider to be a good reason, and can often pull up evidence for why they think it’s an effective belief or opinion. Maybe in some of the cases where you disagree with many LWers, you really do have information that they lack... but lack of knowledge is not the same thing as “intellectual weakness”, and accusing people of ignorance as if it’s a moral failing is not going to make them feel kindly towards the discussion. There are an awful lot of fascinating things to learn about aside from politics, and never enough time to learn everything... the fact that some people have read books about physics instead of Hayek is not a moral failing.
Strictly speaking, once someone has declared Crocker’s Rules all rudeness is called for.
It’s accepted. That doesn’t mean it’s called for.
Under Crocker’s Rules, rudeness is ignored, and is thus a waste of bandwidth. Therefore, if one posts a comment consisting of nothing but rudeness, one might as well not post at all.
Where by “consisting of nothing but rudeness” you also mean “consisting of rudeness that itself does not also represent information”?
Sort of, except that I’d amend “information” to “useful information”, because, mathematically speaking, rudeness does represent information (in that it takes up bytes on the network). But when an ideally Crockered (if that’s a word) reader encounters rudeness, he ignores it, thus reducing its informational content to zero.
For example, when one reads something like, “only a total moron like yourself would commit the obvious ad hoc fallacy in line 5 of your argument, and also, you smell”, he interprets it as ”...ad hoc fallacy in line 5...”, and is able to respond accordingly (or update his beliefs, as needed).
No. I don’t respect Crocker’s rules from either side (that is someone declaring Crocker’s rules does not completely remove social consequences for treating them thus).
http://xkcd.com/592/
Reasons?
The bit in the parentheses. Other readers of the message and even the Crocker’s declarer often still take offense if Crocker’s rules are actually followed. Most of the declaration of Crocker’s rules seems to be about the signal of strength that the utterance gives.
In the interests of charity, I usually interpret the declaration as primarily an attempt at precommitting to an endorsed course of action (that is, wanting “honest” feedback) rather than at signalling to others that one practices that course of action (and thus has the various admirable properties that implies), but I’ll admit that the evidence seems to point more strongly to the latter.
I didn’t expect that interpretation. I was actually just thinking of the implementation of a spam bot detector!
My guess is that the time limit is a defence mechanism against spambots. You are not a spambot, but the system doesn’t know that.
Because he is at −14! (So may as well be.)
EDIT: From the perspective of the lesswrong source code!
I believe in giving people a second chance, regardless of their karma. Of course, second/third/Nth chances follow the law of rapidly diminishing returns...
Sounds good to me. How about we allow them an Nth chance every 6 minutes? ;)
If you were appreciated then you wouldn’t have negative karma and so could post as often as you wish!
You are almost certainly mistaken. Explicitly political content isn’t generally downvoted because of investments in opposing positions (I do recall one possible exception, but lack of cluefulness and a bad-faith debating style were at least as much to blame there): it’s downvoted because it’s perceived, and correctly so, as presenting a threat to unbiased discussion.
It’s well within site norms to post content with political implications, at least outside of issues relating to gender and to a lesser extent race (which are uniquely disruptive exceptions as best I can tell). People do: there’s content supporting any number of possible political stances, including some seriously weird ones that don’t as far as I know have actual movements attached to them. But you need a data-driven approach for this to work, and as far as possible you need to refrain from explicit political advocacy in your presentation. Rhetoric will not avail you: at best you’ll get linked to the post you happen to be commenting under. More likely you’ll simply be downvoted into oblivion.
No. You still don’t understand why you’re being downvoted. It has nothing to do with people disagreeing with your political positions.
No. Five. Five people at Less Wrong identified themselves as “communists”; 352 identified as libertarians. Even if all the communists who took the survey were online right now and downvoting all your comments, that would still not explain all your downvotes. We have no problem with individualists, and comments expressing or recommending libertarian positions are routinely well-upvoted. Moderators and funders have been published by Reason and Cato. People here practice corrective upvoting: if your political allies felt you were being downvoted unfairly, they would have reversed the downvotes. They have not, because they are, instead, voting you down.
They are voting you down because your comments indicate that your mind has been killed by politics, and when people pointed this out you started insulting everyone. They are downvoting you because you argue like you are trying to win, not to convince; you resort to hyperbole and refuse to understand simple concepts like the signal-to-noise ratio; you are certain when you do not have the evidence to be certain. When people disagree with you, you interpret that only as evidence of their stupidity, insanity, or evil. You are just like the Democrat or Republican who supports every position his party leadership recommends.
Politics has killed your mind. Or you’re trolling. It killed my mind once, so I understand. At times it threatens to retake it, and it occasionally infects my comments here (after which I am rightly downvoted). But perhaps you’re too far gone.
Perhaps I’m committing a fundamental attribution error right now, but you currently seem to me so mindkilled by politics that you seem to think anyone downvoting you must be a dirty communist—as opposed to e.g. people that are turned off by your rudeness, your leaps to conclusions, etc, etc.
So mindkilled that you didn’t even notice that the self-reported communists on LW number 5, not 54. So mindkilled that you don’t even check your assumptions.
Communists say in turn: “Hitler was a non-communist, and Attila the Hun was a non-communist, and Genghis Khan was a non-communist, and the slave-owners of the American South were non-communists, and the people who launched World War I were non-communists; and people still identify themselves with non-communism?”
Now, a more *rational* argument you could have made would have been to statistically correlate increased or decreased misery and oppression under communist regimes over time, and to argue that this shows communism increases misery.
I wouldn’t put too much effort into refuting Jake_Witmer. His rants include a reference to “FEMA camps”. The FEMA camp conspiracy theory is to legitimate concerns about declining civil liberties in the US as “9/11 was an inside job” is to legitimate objections to the “war on terror”. In other words, you’re probably underestimating just how mindkilled he is, and it is unlikely to be worth your time to make a detailed attempt to persuade him to change.
One for the “Shit Rationalists say” thread ;).
The problem with pretty much all of this comment is that it made me feel very, very disinclined to participate in the discussion, or to read any further. Maybe I’m more sensitive than many LW readers and posters (“building karma” is a very good description of what I did with my first few weeks of commenting) but I can’t be that much more sensitive, and it feels to me like most of this comment was intended as an attack.
Which is kind of disappointing, because there were some genuinely intriguing ideas in some of your other comments. Now that I dig around to find more, I find...this.
The opposite is true.
It does minimize them and allow them to be expanded. I do that all the time to see what posts I’m missing. Downvoted for whining and not even bothering to figure out how the system works.
FYI, you can also turn off this behavior entirely if you want to just see all the comments all the time. Under ‘Preferences’, “Don’t show me comments with a score less than”—blank this field.
Thanks! But I actually prefer actively clicking them; it forces me to pay more attention to karma and to what is being ignored on the site. I have top-level discussion posts set not to hide, though, because they don’t do this.
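The collapsing behavior described in this exchange reduces to a simple threshold test. The sketch below is my own illustration of that logic, not the site's code; the function name and the use of `None` for a blanked preference field are assumptions.

```python
# Score-threshold comment collapsing, as described above: comments
# below the user's threshold are minimized, and blanking the
# preference field shows everything. Illustrative only.
def visible(comment_score, threshold):
    """Return True if the comment is shown expanded rather than collapsed."""
    if threshold is None:  # blanked "Don't show me comments..." field
        return True
    return comment_score >= threshold
```

Blanking the field is equivalent to a threshold of negative infinity: nothing is ever collapsed.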
If you want to retract something without retracting the whole post, you can strikethrough text with the following syntax:
<strike>text goes here</strike>
(generating: text goes here, rendered with strikethrough)
Huh? I thought that was what it did… Has it changed?
That is what it does; original poster is wrong, as they admit elsewhere in this doomed thread.
I wish!
I’ll try to defend politics, and I would be grateful if you would debate me. I argue that values (a generalization of morality) are comprehensible; in this I agree with Eliezer’s metaethics sequence. But values differ from external reality in a crucial way: I think external reality is more reachable by rationality than values are, and that because of this the non-rational discourse of politics is a valuable activity. I’ll take this as an example of a problem I see in how this site analyzes morality. First, what is the difference between values and external reality? I’ll call it “coherence”: external reality is objectively (inductively) coherent, while values are objectively incoherent. What do I mean by “coherence”? You can describe reality in a way that encompasses most of your future observations. But you cannot always act in a way that honors nearly all of the values you hold. That is, you can think coherently, but you cannot act coherently. And this is not a problem of insufficient information, or of stupidity.
Let’s take two examples. First, the ethical paradox of a trolley that will kill three people unless you divert it, in which case it kills one person. Would you pull the lever? What if the numbers were different? I’d argue the problem is that you value not killing personally, but you also feel responsible for the deaths of the larger group, and these values may not be rationally compatible.
Second, the abortion problem. You may not kill another human being, but what is a human being? It is not defined a priori; it is defined by society. At the same time, choice matters, because there is a problem in telling a woman that, because of her biology, she may not have a career (imagine a teen pregnancy), thus degrading her life (not that the child’s life might not be good; there is adoption, and so on). You can reach dozens of arguments and counterarguments on both sides. I’m not taking a position here, just illustrating a problem.
Now, I say you can think coherently but not act coherently. Why the distinction? Reality is definite: something happens or it doesn’t, and we can run an experiment to check. Suppose we were rational utility maximizers. Then there would be an answer; it might be difficult to find, but it would exist. But we are not. Does that mean no value is strictly better than another? No: value A is better than B if it includes the reason B was good, and something more. But because you are human, and cannot fly or create life by clapping your hands, you may not be able to act coherently. I could throw in philosophical and sociological concepts, but I think I have made myself clear. What is politics for, then, if you can’t do anything? Because there is society.
And that is the point I wanted to reach. On this site, individual rationality is explored in great depth. But there is more than the individual. Draw a small parallel: just as a person can learn in a Bayesian way, so can a society. It is not hard to show; it just means social progress exists, and I think you would agree on that. “Not being political” does not exist: it just means your policy is to keep social relations the way they are, because you let the individual change but do not encourage society to change as a whole. In my country this is called the “conservative view.” So, summarizing: I am pointing to one of the many cases where non-rational discourse is morally right, namely the case where, by defending yourself and your opponent at the same time, you let society unravel possibilities for action. Politics is not war; politics is peaceful, is human. It is almost impossible to be passionate about many ideologies at once, because they are strongly incoherent with one another: you cannot tax some people and not tax them at the same time. But the problem with not being passionate about something beyond reason is that you may not overcome the problems a society has, such as poverty, illness, or lack of education. You would limit yourself to technical solutions, barring societal ones. Your country has houses without people and people without houses. That is irrational; that is a problem. There are other discourses, such as the artistic one, that are valuable and respectful. But in the interest of not writing too long, I’ll stop here.
PS: I’m saying you may not be able to use only rational arguments. I’m not saying you shouldn’t be open. Openness is more fundamental than rationality.
By the way, I have now read about consequentialism. It is wrong on so many levels! First, it is impossible to assign a numerical utility to each action; that is just not how the human brain works. Second, it is impossible to sum utilities across people; give me an example where summing different people’s utilities makes any sense. Third, it treats the action as a one-time action, but it isn’t: if you teach people to push the fat man, you will not only have three fewer people dead, you will also have a group of emotionless people who think it is okay to kill if it is for the greater good. Fourth, people do not always arrive immediately at the truth: you cannot say you should kill the fat man just because you really think that will save the other people, and the hubris of making people feel they have that power may not be a good idea. Fifth, if utility is not quantitative, the logic of morality cannot be a computation; that is my point. The discovery of reality might be a calculation, because you can go outside and look. On the whole, what a disappointment: this site is so great, I cannot understand why it understands morality so poorly. I recommend reading, for example, Weber, who has a detailed theory of value in societies, or Sartre, for the complexity of defining what is right.
These objections suggest that you are actually applying consequentialism already! You are worrying that the other consequences of killing one person to save five might outweigh the benefit of saving four lives, which is exactly the sort of thing a good consequentialist should worry about.
I withdraw objections 3 and 4, as cases where the problem is ill-defined: the amount of knowledge one is supposed to have is implausible or unknown. And yes, I think the fat-man case is a case for an ethical injunction. But doesn’t that undermine the predictive power of consequentialism? Perhaps not. I am more concerned with the problems below.
I do think you should act for a better outcome. I disagree with the completeness and transitivity of values (http://en.wikipedia.org/wiki/Rational_choice_theory#Other_assumptions). That is why utility is not quantifiable; thus there is no calculation to show which action is right, and thus no best possible action. The problem is that action is highly sensitive (chaotic) with respect to non-rational variables, because there are actions where it is impossible to decide, yet something has to be decided. Consider the first example at http://en.wikipedia.org/wiki/Framing_effect_(psychology). I understand that you would choose the same in the first and second questions. But which would you choose, A(=C) or B(=D)? The answer should be neither: find a way to keep all 600 people alive. In the meantime, where that option is not available, there are politics.
By the way, if you believe in utility maximization, explain Arrow’s theorem to me. I think it disproves utilitarianism.
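The aggregation difficulty Arrow's theorem formalizes can be seen in a toy case. The sketch below is my own illustration, not part of the thread: three voters with perfectly transitive individual rankings produce an intransitive (cyclic) pairwise majority preference, the classic Condorcet cycle that Arrow's theorem generalizes.

```python
# Three voters, each with a transitive ranking (best to worst).
voters = [
    ["A", "B", "C"],
    ["B", "C", "A"],
    ["C", "A", "B"],
]

def majority_prefers(x, y):
    """True if a strict majority of voters ranks x above y."""
    wins = sum(1 for ranking in voters if ranking.index(x) < ranking.index(y))
    return wins > len(voters) / 2

# Pairwise majority vote yields a cycle: A beats B, B beats C, C beats A,
# even though every individual voter is perfectly consistent.
for x, y in [("A", "B"), ("B", "C"), ("C", "A")]:
    print(f"majority prefers {x} over {y}: {majority_prefers(x, y)}")
```

Note that this shows a problem for aggregating *ordinal* preferences by majority vote; whether it "disproves utilitarianism" (which sums cardinal utilities rather than ranking votes) is exactly what is contested in the replies below.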
Of course, the brain isn’t perfect. The fact that humans can’t always or even can’t usually apply truths doesn’t make them untrue.
Pressing a button kills one person; not pressing the button kills two people. utility(1 death) + utility(1 death) < utility(1 death), since each death carries negative utility. That is an example where summing utilities makes sense.
Assuming it’s bad to teach consequentialism to people doesn’t make consequentialism wrong. It’s bad to teach people how to make bombs, but that doesn’t mean the knowledge of how to create bombs is incorrect. See Ethical Injunctions.
Such thought experiments often make unlikely assumptions such as perfect knowledge of consequences. That doesn’t make the conclusions of those thought experiments wrong, it just constrains them to unlikely situations.
Qualitative analysis is still computable. If humans can do something it is computable.
Solomonoff induction is a formalized model of prediction of future events.
Is there anything else than politics to protect us against the emergence of an evil super AI?
Isn’t contemporary political cynicism the real “Mind-Killer”?
Yes.
No.
Seriously. I’m not trying to fall into the common nerd mistake of saying “politics is dumb monkey status games for normals” here; political processes serve an important role in solving coordination problems and we ignore them at our peril. But that’s really not what the OP is getting at. It’s saying that enormous and nearly intractable biases surround ideology; that we systematically overestimate the importance of conventional partisanship; and that there’s value in structuring our arguments and social spaces to skirt these issues, or at least not to light ourselves on fire and run toward them while shouting COME AT ME, BRO.
All of these statements are true.
How does ignoring the Gordian knot problem solve it?
Avoiding, not ignoring. Ignoring the problem is what “dumb monkey status games etc.” points towards, and it almost invariably leads to expressing a wide variety of unexamined but basically partisan stances which are assumed to just be common sense (because, in the social context they come from, they are).
The failures following from this should be obvious.
Which problem is the construction of a super AI trying to solve then?
That rather depends on who’s building it, doesn’t it?
If you’re talking about Eliezer et al’s FAI concept, I get the impression that they’re mostly concerned with issues that aren’t presently politicized among anyone except perhaps bioethicists. It does entail solving some political problems along the way, but how is underspecified, and I don’t see a meaningful upside to viewing any of the relevant design problems through a partisan lens at this stage.
In any case, that’s (again) not what the OP is about.
I think I understand what you mean. But I maintain my hypothesis.
Betteridge’s law of headlines states that any headline followed by a question mark can be answered by the word “no”.
Isn’t it surprising that the same principle can be extended to rhetorical questions about politics outside the domain of news?
I will even add a crazy hypothesis. The dynamics of this thread (and its underlying cause) is exactly what is preventing the LW community from FOOMing.