Well-Kept Gardens Die By Pacifism
Previously in series: My Way
Followup to: The Sin of Underconfidence
Good online communities die primarily by refusing to defend themselves.
Somewhere in the vastness of the Internet, it is happening even now. It was once a well-kept garden of intelligent discussion, where knowledgeable and interested folk came, attracted by the high quality of speech they saw ongoing. But into this garden comes a fool, and the level of discussion drops a little—or more than a little, if the fool is very prolific in their posting. (It is worse if the fool is just articulate enough that the former inhabitants of the garden feel obliged to respond, and correct misapprehensions—for then the fool dominates conversations.)
So the garden is tainted now, and it is less fun to play in; the old inhabitants, already invested there, will stay, but they are that much less likely to attract new blood. Or if there are new members, their quality also has gone down.
Then another fool joins, and the two fools begin talking to each other, and at that point some of the old members, those with the highest standards and the best opportunities elsewhere, leave...
I am old enough to remember the USENET that is forgotten, though I was very young. Unlike the first Internet that died so long ago in the Eternal September, in these days there is always some way to delete unwanted content. We can thank spam for that—so egregious that no one defends it, so prolific that no one can just ignore it, there must be a banhammer somewhere.
But when the fools begin their invasion, some communities think themselves too good to use their banhammer for—gasp!—censorship.
After all—anyone acculturated by academia knows that censorship is a very grave sin… in their walled gardens where it costs thousands and thousands of dollars to enter, and students fear their professors’ grading, and heaven forbid the janitors should speak up in the middle of a colloquium.
It is easy to be naive about the evils of censorship when you already live in a carefully kept garden. Just like it is easy to be naive about the universal virtue of unconditional nonviolent pacifism, when your country already has armed soldiers on the borders, and your city already has police. It costs you nothing to be righteous, so long as the police stay on their jobs.
The thing about online communities, though, is that there are no police who stay on the job regardless of what you do; the community itself pays the price of its virtuousness.
In the beginning, while the community is still thriving, censorship seems like a terrible and unnecessary imposition. Things are still going fine. It’s just one fool, and if we can’t tolerate just one fool, well, we must not be very tolerant. Perhaps the fool will give up and go away, without any need of censorship. And if the whole community has become just that much less fun to be a part of… mere fun doesn’t seem like a good justification for (gasp!) censorship, any more than disliking someone’s looks seems like a good reason to punch them in the nose.
(But joining a community is a strictly voluntary process, and if prospective new members don’t like your looks, they won’t join in the first place.)
And after all—who will be the censor? Who can possibly be trusted with such power?
Quite a lot of people, probably, in any well-kept garden. But if the garden is even a little divided within itself—if there are factions—if there are people who hang out in the community despite not much trusting the moderator or whoever could potentially wield the banhammer—
(for such internal politics often seem like a matter of far greater import than mere invading barbarians)
—then trying to defend the community is typically depicted as a coup attempt. Who is this one who dares appoint themselves as judge and executioner? Do they think their ownership of the server means they own the people? Own our community? Do they think that control over the source code makes them a god?
I confess, for a while I didn’t even understand why communities had such trouble defending themselves—I thought it was pure naivete. It didn’t occur to me that it was an egalitarian instinct to prevent chieftains from getting too much power. “None of us are bigger than one another, all of us are men and can fight; I am going to get my arrows”, was the saying in one hunter-gatherer tribe whose name I forget. (Because among humans, unlike chimpanzees, weapons are an equalizer—the tribal chieftain seems to be an invention of agriculture, when people can’t just walk away any more.)
Maybe it’s because I grew up on the Internet in places where there was always a sysop, and so I take for granted that whoever runs the server has certain responsibilities. Maybe I understand on a gut level that the opposite of censorship is not academia but 4chan (which probably still has mechanisms to prevent spam). Maybe because I grew up in that wide open space where the freedom that mattered was the freedom to choose a well-kept garden that you liked and that liked you, as if you actually could find a country with good laws. Maybe because I take it for granted that if you don’t like the archwizard, the thing to do is walk away (this did happen to me once, and I did indeed just walk away).
And maybe because I, myself, have often been the one running the server. But I am consistent, usually being first in line to support moderators—even when they’re on the other side from me of the internal politics. I know what happens when an online community starts questioning its moderators. Any political enemy I have on a mailing list who’s popular enough to be dangerous is probably not someone who would abuse that particular power of censorship, and when they put on their moderator’s hat, I vocally support them—they need urging on, not restraining. People who’ve grown up in academia simply don’t realize how strong are the walls of exclusion that keep the trolls out of their lovely garden of “free speech”.
Any community that really needs to question its moderators, that really seriously has abusive moderators, is probably not worth saving. But this is more accused than realized, so far as I can see.
In any case the light didn’t go on in my head about egalitarian instincts (instincts to prevent leaders from exercising power) killing online communities until just recently. While reading a comment at Less Wrong, in fact, though I don’t recall which one.
But I have seen it happen—over and over, with myself urging the moderators on and supporting them whether they were people I liked or not, and the moderators still not doing enough to prevent the slow decay. Being too humble, doubting themselves an order of magnitude more than I would have doubted them. It was a rationalist hangout, and the third besetting sin of rationalists is underconfidence.
This is the thing about the Internet: Anyone can walk in. And anyone can walk out. And so an online community must stay fun to stay alive. Waiting until the last resort of absolute, blatant, undeniable egregiousness—waiting as long as a police officer would wait to open fire—indulging your conscience and the virtues you learned in walled fortresses, waiting until you can be certain you are in the right, and fear no questioning looks—is waiting far too late.
I have seen rationalist communities die because they trusted their moderators too little.
But those communities did not have a karma system; Less Wrong does.
Here—you must trust yourselves.
A certain quote seems appropriate here: “Don’t believe in yourself! Believe that I believe in you!”
Because I really do honestly think that if you want to downvote a comment that seems low-quality… and yet you hesitate, wondering if maybe you’re downvoting just because you disagree with the conclusion or dislike the author… feeling nervous that someone watching you might accuse you of groupthink or echo-chamber-ism or (gasp!) censorship… then nine times out of ten, I bet, nine times out of ten at least, it is a comment that really is low-quality.
You have the downvote. Use it or USENET.
Part of the sequence The Craft and the Community
Next post: “Practical Advice Backed By Deep Theories”
Previous post: “The Sin of Underconfidence”
May I suggest that length of comment should factor significantly into the choice to up/downvote?
I once suggested that upvote means “I would take the time to read this again if the insights from it were deleted from my brain” and downvote means “I would like the time it took to read this back.”
Time figures into both of these. If you read a few words and don’t profit from them, well, neither have you lost much. If you read several paragraphs, reread them to ensure you’ve understood them (because the writing was obtuse, say), and in the end conclude that you have learned nothing, the comment has, in some sense, made a real imposition on your time, and deserves a downvote.
This being said, one should not hesitate to downvote a short message if it does not add at all to the discussion, simply to keep the flow of useful comments without superfluous interruption that would hamper what could otherwise be a constructive argument.
It’s about insight density. It’s not as if you can take an insightful comment and write it really short to get a certain upvote. If you have a longer comment, you have room for more insight. If you have a short comment, you can’t be all that insightful.
You can express an insight succinctly, or you can be long-winded. A long comment has space for more insight, but that space is often wasted. Strunk and White’s The Elements of Style makes that point for prose, and Edward Tufte’s The Visual Display of Quantitative Information makes it for plots. Every element of a piece should do work.
I’d rather follow your first point up as, if you have a short comment because you took the time to purify and condense your thoughts, that’s a good thing.
But, don’t forget the overhead for the comment simply existing in the first place. You rapidly run into diminishing returns for shortening a comment to less than a few lines. Ten words conveying a thought is not effectively twice as dense as twenty words conveying that thought.
(Note) This mostly has to do with karma with a minor rant/point at the end. If that doesn’t sound interesting, it probably won’t be.
Some of the most interesting things I have registered about LessWrong thus far have to do with the karma game. I am convinced that there are huge swaths of information that can be learned if the karma data was opened for analysis.
If I had to guess at the weaknesses of the karma system I would peg two big problems. The first is that (some? most? many?) people are trying to assign a post an integer value outside the range [-1, 1], and then adjusting their vote to push the post’s score toward their chosen value. This seems to have the effect that everything is drawn toward 0 unless it is an absolutely stellar post. Then it just drifts up. I think the highest comment score I have seen was in the high teens. I know there are more than twenty people visiting the site. Do they not read comments? Do they not vote on them?
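That first failure mode, voters nudging a score toward a chosen value rather than simply voting their own up/down judgment, can be sketched as a toy simulation. This is a purely illustrative model with made-up numbers, not a description of Less Wrong’s actual voting code:

```python
def settle_score(targets, passes=100):
    """Simulate voters who each have a target score for a comment and
    vote only to nudge the displayed total toward that target,
    abstaining once the total matches it."""
    score = 0
    for _ in range(passes):          # voters revisit the comment repeatedly
        for target in targets:
            if score < target:
                score += 1           # "this deserves more than it has"
            elif score > target:
                score -= 1           # "this deserves less than it has"
            # equal: the voter is satisfied and abstains
    return score

# Four voters think the comment is worth about 1 point; one thinks 10.
print(settle_score([1, 1, 1, 0, 10]))  # → 1
```

Under target-voting the score settles near the typical voter’s target instead of accumulating all the approval, so even a well-liked comment hovers in the low single digits unless nearly everyone rates it highly, which matches the observed pull toward 0.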
The second problem spot is that I find it hard to actually use the feedback of karma. I have no way of knowing how well I am doing other than a number. I have noticed that my karma has jumped lately and this leads me to believe I have made a change for the better. Unfortunately, I have no easy way of seeing which comments did well and which did poorly. Was it my tone? Did I get wiser? Are my comments more useful? Since I am new, my comment level is low and I can dig through what is there and learn, but this will simply get harder as time goes on. The karma system seems to work well on a comment-by-comment basis but not so much as a teaching tool. I see this as a problem because this is exactly what I need and I feel like I am squeezing a square peg into a round hole. It makes me think I am not using it correctly.
I find both of the above problems frustrating to me personally. I see a comment get voted down and think, “Okay, that was bad.” If I ask for clarification, it goes back up, which just makes it confusing. “Uh, so was it bad or not bad?” The difference between the highest rated comment of mine and the lowest is less than 10. I think the highest is 5 and the lowest was at −2 before I deleted it.
Now, don’t get me wrong, I am not complaining that my super-great-excellent posts are not voted to 20 karma in a single weekend. I am complaining that my crappy posts are all sitting at 0 and −1. I just started posting here and already have over 50 karma and the dark secret is that I am a complete poser. I barely even know the terms you guys use. I have not read much of Overcoming Bias and if you gave me a test on key points of rationality I would probably limp through the guessable stuff and start failing once the questions got hard. I can pick apart the logic within a given post, but the only real contributions I have made are exposing flaws in other comments. How in the world am I succeeding? I do not know.
To put this back into the original point, if people are shy about telling me my posts are low quality I can (a) never learn the difference between “mediocre” and “bad” and (b) any fool can limp by with comments that just repeat basic logic and use key terms in the right order. The chances of that being fun are low. One of my great paranoias is that I am the fool and no one pointed it out; I am the elephant in the room but have no mirror. I don’t want to trample on your garden and smush the roses. I want to partake in what seems to be a really awesome, fun community. If I don’t fit, kick me out.
(To be a little less harsh on myself, I do not consider myself a fool nor am I trying to play the role of a fool. If I am one, please let me know because I apparently have not figured it out yet.)
The karma system is an integral part of the Reddit base code that this site is built on top of. It’s designed to do one thing—increase the visibility of good content—and it does that one thing very well.
I agree, though, that there is untapped potential in the karma system. Personally I would love to see—if not by whom—at least when my comments are up/down voted.
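For reference, “increase the visibility of good content” in the Reddit base code of that era came down to a “hot” rank combining net score and age. The sketch below is from memory, so the constants and details should be treated as approximate rather than as the exact production formula:

```python
import math

def hot_rank(score, created_utc, epoch=1134028003):
    """Reddit-style 'hot' rank: net score contributes logarithmically
    (going from 0 to 10 points matters as much as 10 to 100), and each
    45000 seconds (~12.5 hours) of recency is worth one such factor of ten."""
    order = math.log10(max(abs(score), 1))
    sign = 1 if score > 0 else (-1 if score < 0 else 0)
    seconds = created_utc - epoch
    return sign * order + seconds / 45000
```

So a +10 comment posted 12.5 hours after a +100 comment sorts level with it: the log term rewards quality with diminishing returns, while the time term steadily cycles fresh content to the top.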
Ah, that is good to remember. This seems to tilt my problem further toward fitting a square peg into the round hole. I guess that would be my own fault. :(
I have the same apprehension. I’m somewhere between “complete poser” and “well-established member of the community”; I just sort of found out about this movement about 50 days ago, started reading things and lurking, and then started posting. When I read the original post, I felt a little pang of guilt. Am I a fool running through your garden?
I’m doing pretty well for myself in the little Karma system, but I find that often I will post things that no one responds to, or that get up-voted or down-voted once and then left alone. I find that the only things that get down-voted more than once or twice are real attempts at trolling or otherwise hostile comments. Then again, many posts that I find insightful and beneficial to the discussion rarely rise above 2 or 3 karma points. So I’m left to wonder if my 1-point posts are controversial but good, above average but nothing special, or just mediocre and uninteresting.
Something that shows the volume of up- and down-votes as well as the net point score might provide more useful feedback.
I usually don’t vote because I don’t feel comfortable enough in my own understanding of these discussions to have an opinion about the relative value of a particular comment. Probably if I saw something that gave me an immediate and strong reaction, I’d be more likely to vote one way or another.
I know someone else who reads posts but seldom reads the comments.
We keep coming back to this: we very much need a “start here” document we can point people to, and say “please read this set of documents before posting”.
In the mean time, here is a list of Eliezer’s posts to Overcoming Bias.
What I would like to see is a book that goes through all of the major biases and gives examples of each as well as heuristics for calibrating yourself better.
Do we even have a ready at hand list of the major biases? That would be a good wiki article.
http://en.wikipedia.org/wiki/List_of_cognitive_biases
Our wiki article on Bias references the Wikipedia and Psychology Wiki lists of biases, and provides an outline of most of the specific biases discussed on OB.
Personally, I consider it my own responsibility to learn the terms. And I am learning them; I just have other stuff to do in the meantime. A “start here” would be useful, and the place I started was the about page. Since then, I think of a topic I think is relevant and then search OB and LW for topics already about that subject. More often than not, someone else has already said what I am thinking. That, mixed with reading comments, has gotten me as far as I am now.
Of course, a list would have made it a little easier. :)
When you see a term that you don’t immediately understand, let us know, so we can add it to the wiki.
Better still, ask for the page to be created by following the instructions under “Getting help” on the front page of the wiki.
Who is “us”? How should one let you know?
I guess that CronoDAS had the people who have been on the site at least awhile in mind when he wrote “us.” If you see jargon being used that doesn’t already have an explanation at hand, you could always just reply to the comment that used the term and ask. The jargon page he alluded to is at http://wiki.lesswrong.com/wiki/Jargon
Thank you.
If you click on your username (or any other user’s), you get a history page with only your posts. That saves you the trouble of digging through all the stories you commented on, and lets you look at all your scores in one place.
Thanks. Is there any way to see which comments have replies from that page?
No, but you can see from your inbox, which, for some odd reason, is not linked to anywhere.
ETA: Well, not linked to anywhere is a stretch. You can navigate there as follows:
click some user’s name
click “send a message” (off to the right near your and their karma scores)
there’ll be a menu under the site logo with “compose, inbox, sent”
I find it’s easier to just bookmark the inbox page, or let your browser start autocompleting it for you
The user info in the sidebar now has an envelope which is a link to a user’s inbox. The envelope is red if there are new messages; otherwise it is gray.
The inbox and sent pages are now styled similarly to the rest of lesswrong. In addition they now also have the sidebar.
Thanks!
I have an enhancement suggestion: have two colors for the “Inbox” icon, one to indicate that there are only comment replies (green color?), and another one for private messages (orange). This way, I won’t need to check the inbox for the comments, if I know that I have read them anyway, but I won’t miss private messages as a result of not checking it when new comments arrive.
Thank you!
The inbox is a feature that came for free with the Reddit codebase, but it was “lost” when the site was restyled. You will notice that the formatting of the inbox page is totally messed up; this is also because it wasn’t included in the redesign. Notification of replies is on the list of things to implement, but there’s some higher-priority work going on at the moment. Since it is a small change and many people seem to be requesting it, I hope that we will get to it soon.
Whoa, that is the most useful feature yet. Fantastic; thank you.
no problem =)
http://lesswrong.com/message/inbox
(Is there a navigation link to this? I only knew about it from the welcome post)
ETA: This was in response to a question about whether there’s any way to navigate to the inbox
sort of...
click some user’s name
click send a message
there’ll be a menu under the site logo with “compose, inbox, sent”
I find it’s easier to just bookmark the inbox page, or let your browser start autocompleting it for you
Thank you for the analysis. Would it help if you saw who, in particular, downvoted/upvoted each of your comments? There is this feature “make my votes public”, but it’s virtually unusable in its current implementation (as it’s scoped by voters, not by articles that are being voted for), and it doesn’t seem to apply to comments at all. If the list of people who publicly voted on your comments (not everyone else’s) were directly visible next to the comments, I expect that would be useful to newcomers, and it wouldn’t clutter the interface overly much.
I would find just knowing the total up and down to accomplish more. The only reason I would want to know who voted is to see if the immediate replies are voted up or down. I have noticed a few people who will reply in disagreement and vote down. (This is not a problem; it is just a pattern I noticed.)
The karma system isn’t enough for the purpose of learning; I fully agree with that. And to the point of this article, I usually don’t downvote people; rather, I try to correct them if I see something wrong. That, if anything, seems more appropriate to me. If I see an issue somewhere, it isn’t enough to point it out; I must be able to explain why it is an issue, and should propose a way to solve it.
But Eliezer has me swayed on that one. Now I’ll downvote, even though I am, indeed, very uncertain of my own ability to correctly judge whether a post deserves to be downvoted or not. For that matter, I am very uncertain about the quality of my own contributions as well, so there too I can relate to your experience. Sometimes, I feel like I’m just digging myself deeper and deeper, that I am not up to the necessary quality required to post in here.
Now, if I were told what, in my writings, correlates with high karma, and what correlates with low karma, I think I might be tempted to optimize my posting for karma-gathering, rather than adapting it to the purpose of making high-quality, useful contributions.
That’s a potential issue. Karma is correlated with quality and usefulness, but ultimately, things other than quality alone can come into play, and we don’t want to elicit people’s optimizing for those for their own sake (persuasiveness, rhetoric, seductive arguments, well-written but soul-sucking texts, etc.).
We really need to get beyond the karma system. But apparently none of the ways so far proposed would be workable, for lack of programming resources. We’ll need to be vigilant till then.
I disagree, I don’t think you should downvote what you don’t understand. This will only pull the discussion to the level of the least competent people.
If people downvote what they don’t understand, and it’s a good comment, then it should have more upvotes than downvotes if most people understand it. If it has more downvotes than upvotes in this scenario, then it was not explained well enough for the majority of readers.
These are generalizations, of course, and depend largely on actual voting habits. But so was the note that it will pull the discussion to the level of the ‘least competent people’ - possibly the same observation could be stated as pulling the discussion to the level of the majority of the readership.
That was my first idea. But I am not the only player here. I know I overcompensate for my uncertainty, and so I tend to never downvote anything. Other people may not have the same attitude toward down- and upvoting. Who are they? Is their opinion more educated than mine? If we all are too scrupulous to vote when our opinion is in fact precious, then our occasional vote may end up drowned in a sea of poorly decided, hastily cast ones.
Besides, I am still only going to downvote if I can think of a good reason to do so. For sometimes I have a good reason to downvote, but still no good way, or even no time, to reply to all the ideas I think need a fix, or to those which are simply irrelevant to the current debate.
You are trying to fight fools with your intuition. How much confidence do you place in it? Is your intuition more informed than the decisions of average voters? Hard to say; I wouldn’t be so sure about this compound statement. It only becomes clear where you know yourself to be competent or ignorant, above or below the “average voter”. At least abstaining from voting has clear semantics: you don’t introduce your judgment at all. On the other hand, in many cases it should be easy to recognize poor quality.
I don’t place any confidence in my intuition as a general, indiscriminately good-for-everything tool. I try to only have confidence on a case-by-case basis. I try to pay attention to all the potential biases that could skew my opinion, like anchoring, and try not to pay attention to who wrote what I’m voting upon. Then I have to have a counterargument. Even if I don’t elaborate it, even if I don’t lay it down, I have to know that if I had the time or motivation, I could reply instead, and say what was wrong or right in that post.
My decisions and arguments could, or could not, be more informed than those of the average voter. But if I add my own to the pool of votes, then we have a new average, which will only be slightly worse, or slightly better. Could we try to adapt something from decision markets here? The way they’re supposed to self-correct, under the right conditions, makes me wonder if we could dig a solution out of them.
And maybe someone could create an article collecting all the stuff that could help people make more informed votes on LW; that’d help too. Like the biases they’d have to take into account, stuff like the antikibitzer, or links to articles such as the one about Aumann voting, or this very one.
I’d like to weigh in with a meta-comment on this meta-discussion: y’all are over-thinking this, seriously.
In the vein of Eliezer’s Tsuyoku Naritai!, I’d like to propose a little quasi-anime (borrowed from the Japanese Shinsengumi by way of Rurouni Kenshin) mantra of my own:
Don’t obsess over what fractional vote a read-but-not-downvoted comment should earn, don’t try to juggle length with quality with low-brow/high-brow distinctions (as Wittgenstein said, a good philosophy book could be written using nothing but jokes), don’t ponder whether the poster is a female and a downvote would drive her away, or consider whether you have a duty to explain your downvote—just vote.
Is it a bad comment? (You know deep down that this is an easy question.) Aku soku zan! Downvote evil instantly! Is it a useless comment? Aku soku zan!
(And if anyone replies to this with a comment like ‘I was going to upvote/downvote your comment, but then I decided deep down to downvote/upvote’ - aku soku zan!)
Yes, yes, but we still need to think carefully about what qualifies as ‘evil’.
If we go around slaying things instantly, we’d better be damn sure we know what those things are. Otherwise we’re likely to destroy plenty of good stuff by mistake—not to mention being a menace to everyone around us.
No no! This sort of comment is exactly wrong—once you start second-guessing your qualification of evil, it’s a small step to going with the majoritarian flow and thence to ever more elaborate epicycles of karma. Aku soku zan!
For nearly 10 years I have referenced this thread in various forums I’ve moderated. While I never entirely agreed with every aspect, it has mostly held up well as a lodestar over the years.
Until recently.
And now, with the benefit of enough sequential observation over time, I am comfortable describing what I believe is a major hidden assumption, and thereby weakness, in this entire argument:
For the concept of “walled gardens” relating to online communities to succeed and thrive, there must exist an overlay of credibly alternative platforms. Or, more directly, there must exist fair and healthy competition among and within the media upon which the discussions are taking place.
This argument was created largely before the “sciences” of social media were refined. Today, we are living with the consequences of intersecting sciences of human psychology, sociology, computer science, statistics and mathematics. Dense, voluminous texts have been penned which describe precise models for determining how to create, manage and extract value from “online communities”. Some of those models even go so far as to involve manipulation of human physiological responses—i.e., intentional mechanisms within the platforms designed specifically to trigger release of chemicals in the human brain.
There exists an analogue for this maturity curve within television advertising. As both the medium and techniques matured, the need to evaluate how we, as a society, managed its impact fundamentally changed.
Today, in late 2018, there effectively exists no credible “public town square” where free speech exists as intended by the US Constitution. What exists in its place is a de facto oligopoly of media companies posing as tech companies, who have divided up the horizontal market and who exercise overwhelming “market power” (as in HSR power) over would-be competitors. Those competitors are then relegated to competing as “free speech purists”, which leads to the traps outlined by the original argument: a cesspool of fools and insults.
This situation allows the dominant players to then, in turn, point to the worst aspects of their would-be competitors whenever they feel threatened by them—or are otherwise politically or economically motivated. Using catch phrases like, “hate speech” or whatever “ism” catches the gestalt, the oligopolists then pressure the supply-chain of would-be competitors, forcing them out of business. They eliminate their ability to process payments. They cut off their upstream bandwidth providers. They remove their ability to be routed or resolved. They eliminate all possibility of collecting advertising revenues.
And they do all this with the virtuous facade that they, the incumbent giants, are safeguarding a “well kept garden”. All while conveniently forgetting that they only rose to such dominance by exploiting the very freedom of speech—including an early tolerance for the opinions they now so self-righteously claim to oppose.
There are various academic ways to describe this type of situation. But the solution is the same: largely unrestricted free speech and a renewed focus on one’s personal responsibility in selecting oneself in or out of conversations.
There is more than a single solution to this problem. Yes, one solution is to enforce First-Amendment style free-speech requirements on the oligopolistic giants that control the majority of the discourse that happens on the Internet. Another solution would be to address the fact that there are oligopolistic giants.
My solution to the above problem would be to force tech companies to abide by interoperability standards. The reason the dominant players are able to keep up their dominance is because they can successfully exploit Metcalfe’s Law once they grow beyond a certain point. You need to be on Facebook/Twitter/etc because everyone you know is on that social network, and it requires too much energy to build the common knowledge to force a switch to a better competitor.
However, the reason it’s so costly to switch is because there is no way for a competitor to be compatible with Facebook while offering additional features of their own. I can’t build a successor social network which automatically posts content to Facebook while offering additional features that Facebook does not. If there were an open standard that all major social networks had to adopt, then it would be much easier for alternative social networks to start up, allowing us to have both well-kept gardens and relative freedom of speech. “Well-kept gardens” and “free speech” are only in apparent conflict because market forces have limited us to three or four gardens. If we allowed many more gardens, then we wouldn’t have the conflict.
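The switching-cost argument from Metcalfe’s Law can be made concrete with a rough back-of-the-envelope sketch (illustrative numbers only; `metcalfe_value` is a hypothetical helper, and real network value is of course far messier than a pairwise-connection count):

```python
# Rough sketch of Metcalfe's Law: a network's value grows roughly with
# the number of possible pairwise connections, n*(n-1)/2.
def metcalfe_value(n: int) -> int:
    """Number of possible pairwise connections among n users."""
    return n * (n - 1) // 2

# One unified network of 1000 users vs. two rival networks of 500 each:
unified = metcalfe_value(1000)        # 499500 possible connections
fragmented = 2 * metcalfe_value(500)  # 249500 possible connections
# The unified network offers roughly twice the connectivity, which is
# why everyone stays where everyone else already is.
```

Under this toy model, splitting a network in half destroys about half its total connectivity, which is the structural advantage an interoperability standard would have to neutralize.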
It may be true that well-kept gardens die from activism, but it’s also the case that moderation can kill communities.
There speaks the voice of limited experience. Or perhaps LiveJournal, Reddit, Google+ and Facebook really are not worth saving?
I’ve seen enough discussion forums killed by abusive moderators that I look carefully before signing up for anything these days. When I write a lengthy response, like this, I post it on my own site rather than face the possibility that it will be silently deleted for disagreeing with a moderator.
However, I’ve also been a moderator, and I’ve seen situations where moderation was desperately needed. In my experience on both sides of the issue, there are some basic criteria for moderation that need to be met to avoid abuse:
Moderation needs to be visible. Comments that are removed should be replaced with a placeholder saying so, and not simply deleted. Otherwise there will be accusations of repeated unfair deletion, and any act of moderation will quickly snowball into an argument about how much censorship is occurring, and then an argument about whether that argument is being censored, and so on until everyone leaves the site.
Moderation needs to be accountable. Moderators must have individual accounts, and moderation actions need to be associated with individual accounts. Without this, it’s pretty much impossible to identify an abusive moderator. I recently got banned from a subreddit for asking which rule I had broken with a previous posting, and there was no way to find out who had banned me.
Moderation needs to be consistent. There needs to be a description of what the criteria for moderation actually are. It doesn’t need to be legalistic and all-encompassing, and it may be subject to change, but it needs to exist. Some people feel that actually writing down the criteria encourages people to argue about them. The alternative, though, is that person A gets banned or censored for doing something that person B does all the time; that leads to much worse ill-will and ultimately is worse for the community.
Moderation rules need to apply to the moderators. A special case of the above, but it deserves highlighting. Few things are more infuriating than being banned by a moderator for doing something that the person doing the banning does all the time. Once this kind of moderation starts happening (e.g. Gizmodo), the atmosphere becomes extremely toxic.
Moderation needs an appeals process. There are abusive power-tripping assholes out there, and they love to find their way onto forums and become moderators. You need a mechanism for identifying any who find their way into your forum. Having some sort of appeals process is that mechanism. Ideally appeals should be resolved by someone who isn’t part of the moderation team. Failing that, they should be resolved by someone other than the person being complained about, obviously.
It also helps if the moderation activity can be openly discussed in a partitioned area of the site. There will be desire to discuss moderation policy, so plan ahead and have a space where people can do so without derailing other threads. That way, you can also redirect meta-discussion into the moderation discussion area to avoid thread derailment, without making the problem worse.
(Also posted at my web site)
Agreed. I’ve seen many good communities destroyed by over-moderation. Usually it starts as a reaction to a troll invasion, but over time the definition of “troll” tends to expand to suit the mod’s mood. There was one (previously very reasonable) community I recently left after it got to the point where the mods banned a smart, long-time poster who occasionally talked about being a transsexual, apparently concluding that she must be a troll for saying such things.
We all know how easy it is for many well-intentioned people to go from “I disagree with a lot of that person’s opinions” to “that person is an evil mutant” without even realizing what happened.
A quick factual note: 4chan unconditionally bans child pornography and blocks (in a Wikipedia sense) the IPs, as I found out myself back when I was browsing through Tor. They’ll also moderate off-topic posts or posts in the wrong section. They actually have a surprisingly lengthy set of rules for a place with such an anarchistic reputation.
From what you have just said, I surmise that 4chan is actually a well-tended garden. It could well be a well-tended, thoughtfully and subtly organized anarchy garden.
Most of the rules are there for the sake of legal cover—the only things that are strongly enforced are:
a) Child pornography bans.
b) Bans on organizing illegal activities, namely “raids” on other websites that can result in serious damage.
c) Mass spam, especially spam that is meant to propagate scripts that are used for further spamming.
d) Topicality rules. This only applies to some of the boards.
Moderation is most reliable for (a). 4chan is hardly a well-tended garden, let alone a “thoughtfully organized” one. Moderation is often capricious as well, with certain memes being unofficially targeted every once in a while (furries, Boxxy, etc.). It’s hard to really find an apt term or even a metaphor to properly summarize 4chan’s governing ethos… some kind of chaotic swarm or something, perhaps.
Also, it’s important to note the difference between 4chan as a whole, which is indeed an erratically-tended garden of sorts, and the “random” sub-board, which is a seething cesspit of trolling and memes, with occasional flashes of socially-uninhibited lucidity, and indeed has anarchy levels that are (as they say) over 9000.
Update: new ‘feature’ - apparently, you can now only downvote if you’ve done less downvoting than your karma. Example from my screen:
Current comment: 93t. This implies 11,792 comments, if I count correctly. You’ve downvoted 21% of all comments? I think it’s more likely we’re looking at some kind of bug, but if you’ve actually downvoted 21% of all comments then more power to you. Still, I’d like to verify first that it’s not a bug.
That sounds about right—I try to read all comments and downvote over 1⁄3 of the time, but I’ve missed some in days of inactivity.
I think I just read the explanation for the strange phenomenon some people have reported: that of karma disappearing rapidly over a few hours of downvotes on older threads. It’s just thomblake catching up.
Sadly, that does not completely explain the phenomenon.
If only I had an army of sockpuppets!
Be careful what you wish for. It seems your wish was granted in the form of Eugine.
I’ve verified the numbers: thomblake has posted 2538 down votes. 93t is 11801 in base 36. Adding 436 articles drops the percentage slightly to 20.7%.
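The conversion is easy to check; comment IDs here are base-36 strings, so a couple of lines recover the count (a sketch in Python, not part of the original verification):

```python
# Comment IDs are base-36; "93t" decodes to the running comment count.
comment_count = int("93t", 36)
print(comment_count)  # 11801

# thomblake's down-vote percentage, before and after counting
# the 436 articles alongside the comments:
downvotes = 2538
print(round(100 * downvotes / comment_count, 1))          # 21.5
print(round(100 * downvotes / (comment_count + 436), 1))  # 20.7
```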
Is there a way for us to see on our own how many downvotes and upvotes we’ve given?
I mean, I guess there is a way to check your total downvotes now, but I’d have to downvote a lot of posts to get the information that way.
No, there isn’t a way to check vote counts at the moment.
An unexpected consequence of this change is that upvoting thomblake now has benefits (he can downvote more) that don’t correlate to the quality of his posting. While this will give him a (weak) incentive to produce better comments, it’ll also encourage me to upvote him more, reducing the quality-signalling function of his karma.
It’s nice to hear that my tendency to downvote heavily is so valued.
I guess I need to go back and undo hundreds of downvotes on old comments if I want to have a hand in tending the garden.
Certainly not worth your time. Maybe we can go start our own rationalist community! With blackjack! And hookers! In fact, forget the rationalism!
It was mistakenly assumed that most people’s down vote count would not be approaching their karma, particularly for high karma users. I’ll do some more research and discuss it with Eliezer.
Initial quick fix: downvote limit = 4x karma.
Quick fix deployed. I did some analysis of users’ down vote counts and karma. This change allows everyone to down vote who doesn’t have a massively skewed down-vote-to-karma ratio (like 21 to 2, or 548 to 137). Obviously this still leaves thomblake roughly 500 short.
Out of curiosity, why 4?
So in order to facilitate the downvoting that we have been encouraged to do, we must restrict downvoting so as to keep it within our karma.
Are upvotes also so restricted?
Y’know, this new feature seems to be of dubious value in itself, but it’s a great way to disassociate upvotes from comment quality. Before, people would be more willing to upvote a good comment from a person whose judgment they didn’t agree with or like, providing effective feedback as to what they felt about the comment and its contents. Now, though, providing that upvote gives people more ability to exercise their judgment and thus more power. People don’t like giving people they dislike more power. Ergo, people will give upvotes not according to their evaluation of individual comments, but as approval of the person who posts them.
Nope. I’d suggested that originally for balance, but the concern here (I think) was that someone could wreak more damage with unrestricted downvotes. Someone could create a bunch of accounts and downvote a bunch of stuff to oblivion. To use the ‘pruning the garden’ metaphor, we don’t want people to come off the street with machetes and chainsaws.
But yes, I find it very ironic that this feature was implemented at the same time as encouragement to downvote more. On the other hand, they do go together, as since I can’t be the one doing most of the downvoting anymore (he said jokingly), other people need to step it up.
I’m concerned that this makes the ability to downvote a limited resource. That’s good in some ways, but as long as we’re talking about “what if someone created a whole bunch of accounts to mess things up” scenarios, it raises an unpleasant possibility.
If someone mass-created accounts to post flame bait and complete garbage, we’d respond by voting them down severely, which restricts the ability to use downvotes productively in actual discourse.
I don’t know much about the way this site is set up. Was that scenario already considered, but viewed as unlikely for reasons I’m not seeing?
Which means that you won’t be able to downvote anyone for a considerable time in the future. This is a bug; the limitation shouldn’t apply retroactively. And maybe one should be given a downvote allowance of 3x karma. Ideally, of course, the votes should just be weighted, so that you can mark any post, maybe on a scale, and all of the posts you ever voted on get a rating change according to your overall karma (this shouldn’t be linear; something more stable like square root or even logarithm).
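The sublinear weighting suggested here could look something like the following (a hypothetical sketch, not an actual LessWrong feature; the name `vote_weight` and the choice of square root are assumptions for illustration):

```python
import math

def vote_weight(karma: int) -> float:
    """Weight a vote by the square root of the voter's karma.

    Sublinear growth means high-karma users count for more,
    but cannot dominate: 100x the karma gives only 10x the weight.
    """
    return math.sqrt(max(karma, 0))

# A 10000-karma veteran outweighs a 100-karma newcomer 10:1, not 100:1.
print(vote_weight(10000))  # 100.0
print(vote_weight(100))    # 10.0
print(vote_weight(-5))     # 0.0 (negative karma contributes nothing)
```

A logarithm would compress the range even further, at the cost of making karma above a few hundred nearly irrelevant to vote strength.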
Present Karma affecting future votes, or present karma affecting all votes cast? I can see arguments for both, although I worry that the latter might not be stable or computable for certain sets of parameters (my downvote lowers your karma which weakens your upvote which lowers my karma which weakens the aforementioned downvote, etc...)
Just so long as I get to be a multiclass fighter/rogue/sorcerer who specializes in enchantment spells, I’ll be happy.
I can see myself linking to this more than anything else you’ve ever written. Sing it, brother!
Note that the voting system we have here is trivially open to abuse through mass account creation. We’re not in a position to do anything about that, so I hope that you, the site admins, are defending against it.
Wikipedia is an instructive example. People think it’s some kind of democracy. It is not a democracy: Jimbo is absolute ruler of Wikipedia. He temporarily delegates some of his authority to the bureaucrats, who delegate to the admins, who moderate the ordinary users. All those with power are interested to get ideas from lots of people before making decisions, but it’s very explicit policy that the purpose is to make the best encyclopaedia possible, and democracy doesn’t enter into it. It is heavily policed, and of course that’s the only way it could possibly work.
There is no strong reason that reasonable, informative discourse should be an attractor for online communities. Measures like karma or censorship are designed to address particular problems that people have observed; they aren’t even intended to be a real solution to the general issue. If you happen to end up with a community where most conversation is intelligent, then I think the best you can say is that you were lucky for a while.
The question is, do people think that this is the nature of community? There is a possible universe (possible with respect to my current logical uncertainty) in which communities are necessarily reliant on vigilance to survive. There is also a possible universe where there are fundamentally stable solutions to this problem. In such a universe, a community can survive the introduction of many malicious or misguided users because its dynamics are good rather than because its moderator is vigilant. I strongly, strongly suspect that we live in the second universe. If we do, I think trying to solve this problem is important (fostering intelligent discourse is more important than the sum of all existing online communities). I don’t mean saying “let’s try and change karma in this way and see what happens”; I mean saying, “let’s try and describe some properties that would be desirable for the dynamics of the community to satisfy, and then try and implement a system which provably satisfies them.”
I think in general that people too often say “look at this bad thing that happened; I wish people were better” instead of “look at this bad thing that happened; I wish the system required less of people.” I guess the real question is whether there are many cases where fundamental improvements to the system are possible and tractable. I suspect there are, and that in particular moderating online discussion is such a case.
This might actually be a good idea. If LessWrong could beget the formulation of some theory of good online communities (not just a set of rules that make online communities look like real-world communities because they work), that would certainly say something for our collective instrumental rationality.
Eliezer,
I used to be not so sure how I felt about this subject, but now I appreciate the wonderful community you and others have gardened, here.
I think I fundamentally disagree with your premise. I concede, I have seen communities where this happened . . . but by and large, they have been the exception rather than the rule.
The standards I have seen in communities that survived such things, versus those that didn’t, fall under two broad patterns.
A) Communities that survived were those where politeness was expected—a minimal standard that dropping below simply meant people had no desire to be seen with you.
B) Communities where the cultural context was that of (and I’ve never quite worded this correctly in my own mind) acknowledging that you were, in effect, not at home but at a friendly party at a friend’s house, and had no desire to embarrass yourself or your host by getting drunk and passing out on the porch - {G}.
Either of these attitudes seems to be very nearly sufficient to prevent the entire issue (and each seems to hasten recovery even on the occasions when it fails); combined, they (in my experience) act as a near-invulnerable bulwark against party crashers.
Now exactly how these attitudes are nurtured and maintained, I have never quite explained to my own satisfaction—it’s definitely an “I know it when I see it” phenomenon, however unsatisfying that may be.
But given an expectation of politeness and a sense of being in a friendly venue, but one where there will be a group memory among people whose opinions have some meaning to you, the rest of this problem seems to be self-limiting.
Again, at least in my experience - {G}. Jonnan
I agree with you, and I also agree with Eliezer, and therefore I don’t think you’re contradicting him. The catch is here:
This implies that the party crashers, upon seeing that everyone else is acting polite and courteous, will begin acting polite and courteous too. In a closer model of an internet community, what happens is that they act rough and rowdy … and then the host kicks them out. Hence, moderators.
Unless you really mean that the social norms themselves are sufficient to ward off people who made the community less fun, in which case your experience on the internet is very different from mine.
If everyone is accustomed to a norm of politeness, a wandering troll seeking to stir up arguments ‘for the lulz’ will find few bitter arguments, and no willing collaborators.
Still, if a few impolite people happen to come at the same time, start arguing with each other, and persist long enough to attract more impolite people from outside, the community is ruined.
Also, the norm violators do not need to be consistent. For example, they may be polite most of the time towards most members of the community, but impolite towards a few selected ‘enemies’. If the rest of the community does not punish them for this, then their ‘enemies’ may decide to leave.
One problem I have with hesitation to downvote is that some mediocre comments are necessary. A healthy discussion should have the right ratio of good comments to mediocre comments, so that people may feel relaxed and make simple observations, increasing the rate of communication. And the current downvote seems too harsh for this role. On the other hand, people who only make tedious comments shouldn’t feel welcome. This is a tricky balance to strike with comment-by-comment voting.
I would downvote more if we had a separate button saying “mediocre” that would downvote the comment by, say, 0.3 points (or less; it needs calibration). The semantics of this button is basically that I acknowledge that I have read the comment but wasn’t impressed either way. From the interface standpoint, it should be a very commonly used button, so it should be very easy to use. Bringing this to a more standard setting, this is basically graded voting: −−, −, and ++ (not soft/hard voting as I suggested before, though).
An average mediocre comment should have (a bit of) negative Karma. This way, people may think of good comments they make as currency for buying the right to post some mediocre ones. In this situation, being afraid to post any mediocre comments corresponds to excessive frugality, an error of judgment.
Also, this kind of economy calls for separation of comment Karma and article Karma, since the nature of contributions and their valuation are too different between these venues.
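As a minimal sketch, the graded-voting scheme described above could be tallied like this (the weights, including the −0.3 “mediocre” value, are the commenter’s illustrative numbers and would need calibration; nothing here is an actual LW mechanism):

```python
# Hypothetical vote weights for graded voting; the -0.3 "mediocre"
# value is the suggestion above and would need calibration.
VOTE_WEIGHTS = {
    "up": 1.0,         # ++ : impressed
    "mediocre": -0.3,  # -  : read it, unimpressed either way
    "down": -1.0,      # -- : actively bad
}

def comment_score(votes):
    """Sum the weighted votes cast on a single comment."""
    return sum(VOTE_WEIGHTS[v] for v in votes)
```

Under these weights, a comment can "spend" one good comment's worth of karma on roughly three mediocre ones, which is the economy described in the comment above.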
I just had a related idea. Let people mark their own comments as highbrow, lowbrow, or NSFW. Highbrow if it’s a serious comment, lowbrow if it’s a bad pun. And then there could be related viewing options. This way, people who want to relax wouldn’t be told that they’re bad and stupid, but those who came here on business wouldn’t have to see it.
This can’t work organically; generation of content has to be performed in a mode of presentation sufficiently compatible with the mode of consumption. Taking a portion of comments out of a discussion ruptures it, making it too tattered to hold together. It takes human intelligence to selectively abbreviate a narrative; an automatic system that just keeps track of some kind of threshold is incapable of doing that gracefully. Removing offensive outliers works, but little else. See also this comment, made before it was made possible to easily see comments’ context.
The requested feature list for this site’s software is now huge—we’re going to need a lot more coders if we’re to make such progress.
Even if it were a good idea to split the community like that, what are we to do with people who consistently post middlebrow posts, like pointed jokes, or philosophy interspersed with anime references?
Why have a button that performs a default action? If, by default, a read comment is worth 0.3 points, give it those points every time it’s read.
This could be used in reverse, too. Have comments’ points decay (say, for the first 4 days only) to motivate people to save the ones they want to keep from dropping below the readability threshold.
Edit: In order to preserve the karma of writers, the decay could be implemented in a smart way (say, the readability threshold for comments increases as they age, so if a comment doesn’t get 3 upvotes by day 5, or after 10 reads, for example, it disappears).
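In code, the age-dependent threshold proposed in the edit above might look something like this (the constants—3 upvotes, day 5, 10 reads—are the commenter’s own examples, not a real feature):

```python
# Sketch of an age-dependent visibility rule: young, little-read
# comments get the benefit of the doubt; older or widely read ones
# must have attracted some upvotes to stay visible.
def is_visible(upvotes, age_days, reads):
    if age_days >= 5 or reads >= 10:
        return upvotes >= 3  # the commenter's example threshold
    return True
```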
The first point is answered here. The second point is not about the problem discussed in the article, it won’t help in defence against trolls.
The mediocre button should be the same as simply not voting, I think. Especially since it’d have to be used quite often, no-one wants to be pushing a button for every mediocre comment. Maybe a similar effect could be reached if comments gradually accumulate negative karma with time?
That would be nice, but unfortunately you need to somehow signal that you have really considered the comment, understood it, and decided that it’s nothing special. Simply downloading the page, or even reading the comment, doesn’t do the trick. See also this discussion on validity of voting in ignorance.
This post makes me think of SL4:
http://www.vetta.org/2006/09/friendly-ai-is-bunk/
Updated link:
http://commonsenseatheism.com/wp-content/uploads/2011/02/Legg-Friendly-AI-is-bunk.pdf
Hah! Second thoughts, I wonder...?
Good thing this community died for entirely unrelated reasons, then!
Yeah, fan clubs die for simpler reasons :-)
Different people will have different ideas of where on the 4chan-colloquium continuum a discussion should be, so here’s a feature suggestion: post authors should be able to set a karma requirement to comment on the post. Beginner-level posts would draw questions about the basics, and other posts could have a karma requirement high enough to filter them out.
There could even be a karma requirement to see certain posts, for hiding Beisutsukai secrets from the general public.
I’d worry that:
a) It would be incredibly difficult to initially accumulate karma to begin with if it turned out that most posts that weren’t “Introduce yourself!” had a decent karma requirement.
b) You’d end up excluding non-regulars who might have very substantial contributions to specific discussions from participating in those discussions. For example, I’m an economist, and most of my posts have been and probably will be in topics that touch on economic concepts. But I don’t have much karma as a consequence, and I’d think it’d be to the community’s detriment if I was excluded for that reason.
Karma is not a very good criterion, it’s too much about participation, and less so about quality. It’s additive. A cutoff of 20 points to post articles seems a reasonable minimum requirement, but doesn’t tell much. The trolls who cause slow suffocation will often have 20 points, while new qualified people won’t. Only extreme values of Karma seem to carry any info, when controlled for activity. Comment rating as feedback signal is much more meaningful.
What about looking at average karma per comment rather than total karma? That might be a useful metric in general. There may be some people with very high karma that is due to high participation with a lot of mediocre comments. Someone with a higher average karma might then be someone more worth paying attention to.
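The metric suggested above is just a mean, but it makes the contrast concrete (a hypothetical sketch; real karma data would need some minimum-activity cutoff to be meaningful):

```python
# Average karma per comment: separates high-volume posters with many
# mediocre comments from posters with consistently good ones.
def average_karma(comment_scores):
    """Mean score per comment; 0.0 for users with no comments."""
    if not comment_scores:
        return 0.0
    return sum(comment_scores) / len(comment_scores)
```

A prolific poster with a hundred 1-point comments has more total karma than someone with two 10-point comments, but a far lower average.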
The negotiation of where LW threads should be on the 4chan-colloquium continuum is something I would let users handle by interacting with each other in discussions, instead of trying to force it to fit the framework of the karma system. I especially think letting people hide their posts from lurkers and other subsets of the Less Wrong userbase could set a bad precedent.
Woah. If we accept your suggestion, how long before karma turns into money, with bargaining and stuff?
4chan is actually pretty popular, I don’t know if you are aware. Somehow their lack of censorship hasn’t kept them from being “fun” for millions of people.
Yeah, but they don’t get a lot of philosophical discussion done.
I can see how your moderation strategy might be different if you were optimizing for intelligent debate of issues in your community as opposed to optimizing for maximum fun had by the members of the community. In that case, though, you probably shouldn’t conflate the two in your post.
For the record, I am not a denizen of 4chan, but I do have a lot of fun in moderation-light internet communities. I have been fortunate enough to see a natural experiment: the moderation team at my main online hangout was replaced en masse with a much easier-going team a couple of years back while leaving the community intact, and it was amazing how much easier it became to dick around to be funny when you no longer had to worry that you’d get a three-month ban for “being stupid”.
I won’t be having much fun if the discussion ceases to be intelligent. Maybe the people who’ll come after will have fun, but this is a community-wireheading scenario: you don’t want to wirehead, but if you do wirehead, wireheaded you will have lots of fun...
Not to mention that 4chan has spawned contributions to internet “culture” that have probably had a bigger impact on society than a lot of stuff the academics have cooked up.
Would that be a positive or negative impact?
Don’t forget the part where they triggered epileptic seizures for fun!
And the interesting question is: given decentralized censorship, or even no censorship at all, what sort of community can emerge from that?
My impression is that 4chan is resistant to becoming a failed community because it has no particular goal, except maybe everyone doing what pleases them on a personal basis, provided it doesn’t bother anyone else.
Any single individual will, pretty naturally and unwittingly, act as a moderator, out of personal interest. 4chan is like a chemical reaction that has displaced itself towards equilibrium. It won’t move easily one way or the other now, and so it’ll remain as it is, 4chan. But just what it is, and what sort of spontaneous equilibrium can happen to a community, remains to be seen.
On the other hand, 4chan’s view of “fun” includes causing epileptic seizures in others.
The balance for a moderator is between too much craziness and too much groupthink.
Moderation easily becomes enforcement of a dogma. In English literary theory today, you’re required to be a cultural relativist. You only get to choose one of three kinds of cultural relativist to be: Marxist, feminist, or post-modernist. Anyone else is dismissed as irrelevant to the discourse. This is the result of “moderation,” which I place in quotes because it is anything but moderate.
It is especially problematic when the moderator is a key contributor. A moderator should, ideally, be a neutral referee.
Revisiting this post in 2017, I’m calling it wrong in retrospect. It seems to me that LessWrong is less vibrant than it used to be, and this is not because of too little moderation, but may be partly because of too much, both from above (post promotion, comments from EY, and harassment of divergent views from moderators) and from below (karma voting). LW has become a place of groupthink on many issues. Karma did not prevent that, and may have accelerated it.
EY encouraged this. He refused to engage with criticism of his ideas other than with rudeness or silence. He chased away Richard Loosemore, one of the only people on LW who was qualified to talk about AI and willing to disagree with EY’s ideas. EY’s take on him was:
(And, looking at that thread, how exactly did timtyler, one of the other stars of LW, get banned?)
The last time he came around here, he basically wanted to say that the whole idea of AI risk is stupid because it depends on the assumption that AI is all about reinforcement learning, and reinforcement learning “obviously” can’t do anything scary. It didn’t seem to me that he defended any part of that very effectively, and he seemed disappointingly insistent on fighting strawmen.
I agree it’s a shame not to have more intelligent advocacy of diverse views, but it’s not clear to me that Richard Loosemore really contributed much.
(Also, it may be relevant that that comment of EY’s was voted to −18. If Richard L ran away because one prominent person was rude about him and got downvoted into oblivion for it… well, maybe it’s sad, but I don’t think we can blame it on LW groupthink.)
I would not have characterized him in that way. He wrote a lot, for sure, but I never found what he wrote very interesting. (Of course no one else is obliged to share my interests.)
Given a finite amount of time in a day, I have to decide how to use it. While I can afford to take a quick look at each comment when there are only a few of them, I have no choice but to ignore some when there are pages of them (and other top-level posts to read). One nice thing about the karma system is the “best to worst” comment ordering: I can read the first ones and stop reading when encountering too many “boring” ones in a row (but maybe not “boring” enough to merit a downvote).
However, if many people use an algorithm similar to mine, the “bad” comments won’t be read often and thus won’t get further downvotes. Worse: the “good but new” comments (starting at 0) can get stuck in that pool of unread comments.
Vladimir_Nesov suggested adding a “mediocre” voting option affecting karma by −0.3 (instead of −1 or +1). I would instead suggest an “I read this” button, worth 0 karma, together with some counter indicating the total number of votes, irrespective of whether they are −1, 0 or +1. When you read a post/comment, you always vote: −1 if you judge it bad, +1 if you judge it good, and 0 if you are not ready to do either of the previous.
With such a device, people could once in a while “sacrifice” some of their time reading low-karma comments with a low total read count. Moreover, the current −4 threshold for hiding a post could become a function of this total count (some kind of variance).
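If every reader votes (−1, 0, or +1), the total vote count equals the read count, and the hide threshold can scale with it, as suggested above. A sketch, where the scaling rule itself is invented purely for illustration:

```python
# Hypothetical read-count-aware hide threshold: a widely read comment
# must accumulate proportionally more net disapproval before it is
# hidden.  The "-1 per 10 reads" scaling is an arbitrary example.
def hide_threshold(total_votes, base=-4):
    return base - total_votes // 10

def is_hidden(score, total_votes):
    return score <= hide_threshold(total_votes)
```

So a comment at −5 with no recorded reads would be hidden, while the same score on a comment read 25 times would not, since the evidence of 25 reads changes what −5 means.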
I think the “mediocre” vote is really a vote on a post being noise. Instead of just “karma,” one could have four numbers: signal, noise, agree, disagree. You can only vote these numbers up, and you can only vote up one of the 4.
A post would then have two scores: a “signal to noise ratio” and an “agree/disagree” score, which would be the agrees minus the disagrees. (And actually, the signal-to-noise ratio would not necessarily be treated as a ratio. Both signal and noise numbers will probably be displayed.)
A vote on agree/disagree would be treated as an implicit upvote on “signal” by the post visibility algorithm.
This would make the karma system harder to game. You can vote “noise” to try and censor a post you disagree with, but then you can’t also disagree with it.
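As a sketch, the four counters and the implicit-signal rule proposed above could be modeled like this (hypothetical Python, not an actual karma implementation):

```python
# Four up-only counters per post; each reader votes up exactly one.
# Agree/disagree votes also count as implicit "signal" for the
# visibility algorithm, as proposed above.
class PostVotes:
    def __init__(self):
        self.signal = self.noise = self.agree = self.disagree = 0

    def vote(self, kind):
        assert kind in ("signal", "noise", "agree", "disagree")
        setattr(self, kind, getattr(self, kind) + 1)

    def effective_signal(self):
        # Agreeing or disagreeing implies the post was worth engaging with.
        return self.signal + self.agree + self.disagree

    def stance(self):
        return self.agree - self.disagree
```

This is what makes the scheme harder to game: voting “noise” to bury a post consumes your one vote, so you cannot also register disagreement with it.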
This is an old quality/agreement debate. My position is that agreement is irrelevant, only usefulness for improving your understanding of something you are interested in is. Communicate likelihood ratio, not posterior belief.
Voted up because while this isn’t the first time this sort of thing has been proposed (and I might disagree with the “implicit upvote”), I think “signal” and “noise” are awesome names for that feature.
My secret garden or why my garden is not well-kept
I talk of a real garden here—my garden, a place for me to rest, to see flowers, birds and butterflies, to read in.
Years ago, before my time, it was a well-kept garden.
There was a lot of grass and some bushes, and that was it, easy to maintain, well kept.
Because I like flowers, I tried this and that; some things thrived and others died, and it was never the garden I planned in the beginning.
I never planted the clover and the moss that crept into the grass. In the beginning, I tried to fight it, but then I began to understand that the garden gained life through it. Suddenly there were bees and little flowers in the grass.
I despised roses in the beginning. They seemed like plastic flowers to me. But then I discovered the old roses, roses that did the one thing a rose, to my mind, should do: they smell. All the roses in my garden have different smells.
I have plants I never planted, but I like them. I even like dandelions, even if I have to fight them every year.
There will be dandelions anyway. A bit of fighting is necessary, because otherwise the garden would have only dandelions.
I took my gardening philosophy a bit from “The Secret Garden” by Burnett and a bit from Chinese and Japanese gardens, welcoming new plants if they take root, only rooting out the ones that try to overtake the garden or that I know are poisonous.
This year, because of climate change, a lot—even the big tree I love so—has to go. I don’t know which flowers will thrive next year. I wait and see.
I know this: in the oh-so-well-kept gardens of my childhood there were no sweet smells, and nothing was allowed to grow if it was not planted in that exact place.
Give the new seeds that fly in a place to grow. Don’t root them out because they are not what you expected.
A garden can be too well kept to be a living, thriving thing.
If you’re trying to quote Gurren-Lagann here, I believe you botched the quote. “Believe in me who believes in you!” But maybe it was dubbed differently. In any case, I do find some amusement in your approvingly quoting a show which was more or less premised on a rejection of rationality. “Throw away your logic and kick reason to the curb!” I’ll have to remember that for the next anti-rationalism quotes thread.
But anyways, I did like this post, although as you implicitly concede it’s just one narrative of community development among many. I’m sure that there have been as many communities to have fallen due to despotic moderation or impoverished by rigid ideological guidelines as there have been ruined in the ways described in the OP. Oftentimes the “idiots” who “ruin” the comm are actually the lonely voices of reason. It’s a fine line to walk, and I look forward to someday seeing a modern-day Machiavelli write a tract on “The Internet Community Moderator”. Because it really is that tricky.
On the other hand, “lonely voices of reason” are unlikely to overrun a community of idiots the way idiots can overrun a more intelligent community.
Unless LWers got together and staged an invasion… wouldn’t that make for an interesting day at some forum...
I’ve seen it happen, actually. I went to a Christian youth forum looking for some shooting-fish-in-a-barrel debating fun, and over time I noticed that a handful of rationalists gradually came to dominate discussion, to the point where the majority would avoid making ridiculous statements in order to avoid being called out on it.
A few bright, articulate people who can type fast are surprisingly effective. If LW ever invades some other forum, that forum will either get out the banhammer or be overrun.
OK, that sounds like a lot of fun. Which would be exactly the wrong reason for us to do it.
That being said, what would be the result of a coup like that? If we could actually expect to change a few minds in the process, it might be worth trying.
On the other hand, reputation is a valuable commodity. What would such an act do to our visibility and reputation?
I don’t think Gurren Lagann is meant to be taken seriously; it struck me, when I was watching it, as a reductio of the old-style mecha genre (a loving one, one done by fans of it, but still a reductio). It’s a funny quip because it’s so contradictory to the usual believe-in-yourself spiel, is all.
Could you give an example where you’ve seen this? In 20 years online I’ve seen it once, maybe.
I can’t think of a specific example that a broad audience might know about, but it’s relatively easy to see how this could arise. Take a community of “idiots”, by whatever criteria we’d use to apply the term to the lone troll. Many of them exist which espouse all sorts of nonsense. Throw in someone who actually understands the topics which these people purport to discuss. Unless that person is incredibly subtle and eloquent, they will be denounced as an idiot in any number of ways.
I can speak here from my own experience as an economist who’s tried to make arguments about public choice and decentralized knowledge to a general (online) audience in order to defend free markets. A lot of crowds really will have none of it. I think this is a frustration which even the best libertarian-leaning individuals have run into. But given persistence, one can gain ground… and subsequently be accused of “ruining” a safe space which was reserved for the narrow worldview which you challenged. In fact, any community with “safe space” disclaimers is probably extremely vulnerable to this—I just doubt you’ve engaged with many.
OK, yes, that’s a counterexample. However, in all those instances, the community itself is screwed in a fundamental way, and the fix is not for people to welcome the “idiots”: the fix is to leave the community and go somewhere more sensible. Is there an example of a community good enough that you would recommend anyone to join, but which would have been improved by taking the criticism of unpopular members more seriously? It doesn’t have to be a well-known example, and you don’t have to link to it; even anecdotal evidence would be enlightening here.
Damaging such a broken community might be a good thing, much as tearing down a dangerously decrepit building can be better than letting squatters stay in it until it collapses on them.
(I think that analogy has gone about as far as it should go.)
The real broken analogy is comparing an online social club to a community, of all things. In the real world, a community is a group of people who deal with one another and compromise their values in order to share protection from bodily harm and cultivate a nurturing surrounding ecology. Invoking this metaphor (and the instinct of protection which it naturally evokes) in reference to a virtual social space is a recipe for cultishness, irrationality and grossly authoritarian “gardening” methods, such as enforcing petty politeness norms in the face of serious challenges involving failures of reasoning or ethics.
The fact that we’re even talking about how a basic challenge to groupthink might “destroy a community” shows this more clearly than anything else could. The term “virtual community” should be permanently taboo-ed here, as a matter of minimally required sanity.
You make a good case for this.
That’s wrong. It would be a poor isomorphism; it’s a fantastic analogy.
In inferring that’s necessarily the case, one probably commits the fallacy of composition.
Speaking of words worthy of “taboo”...
Speaking of “Invoking this metaphor (and the instinct of protection which it naturally evokes)”...
Most real-world communities involve such compromises, if only in the values of autonomy and freedom. We will gladly make these compromises given the attendant benefits, but this is not to say that a change in core values has not occurred.
You are misinterpreting my point. I’m not saying we should care about how a pure social club is managing itself. I’ll even endorse such clubs calling themselves “online communities”, if this makes folks happier. But the moment an “online community” is having harmful effects on the real world (say, by fostering misconceptions about economics and free markets [as per sibling thread], or by spreading lies such as “global warming is caused by solar activity, not by human beings”), is when respectfully engaging with that “community” becomes justified and praise-worthy. And so is criticizing them as “authoritarian” if they were to dismiss genuine grievances about their real-world impact with petty concerns about “politeness” and “manners”.
This is part of the reason why Less Wrong discourages political discussion. We do not know how to have a genuinely useful/productive political debate online, and we don’t want trolls to come here and complain about how their pet political cause is being handled. So we focus on the smaller problems of epistemic and instrumental rationality, leaving politics to specialized “open politics” websites which can experiment and take the heat for what they do. In turn, open politics folks will hopefully refer to Less Wrong for basic rationality stuff.
All else equal, one infers values from actions.
When multiple parties make a pact to modify their actions from what they would otherwise have done, it is a mistake to think that they have necessarily modified their values. In particular, if they agree to modify their actions to be similar to each other’s, to each perform the actions that best jointly satisfy their values, it is a mistake to think that any individual or all of them now has the value set that would be implied by an individual independently choosing to act as each has agreed.
Criticizing for doing a type of thing is misguided. “Slavery!”
Concerns about politeness, manners, and social norms that underlie clear communication are tied into real-world impact. You seem to be artificially constructing criticism by looking for types of things and labeling them with the term that describes them and connotes they are evil without making concrete criticisms of things actually said or done here (I didn’t see “politeness” or “manners” invoked in this comment thread, for example).
Often, it depends. ;)
It is legitimate to have preferences (or ethics or morals) that do deprecate all instances of doing a type of thing.
Worse, what I said was logically self-contradictory.
Let’s try again:
For almost any type of thing, it is not true that it is always deleterious.
Criticizing a type of thing is useful to the extent you and others are poor at reasoning about and acting on specifics.
Good point.
That works!
There is something related (albeit about an industry, rather than a community) in http://www.shirky.com/weblog/2009/03/newspapers-and-thinking-the-unthinkable/
“Revolutions create a curious inversion of perception. In ordinary times, people who do no more than describe the world around them are seen as pragmatists, while those who imagine fabulous alternative futures are viewed as radicals. The last couple of decades haven’t been ordinary, however. Inside the papers, the pragmatists were the ones simply looking out the window and noticing that the real world was increasingly resembling the unthinkable scenario. These people were treated as if they were barking mad. Meanwhile the people spinning visions of popular walled gardens and enthusiastic micropayment adoption, visions unsupported by reality, were regarded not as charlatans but saviors.”
“Could you give an example where you’ve seen this? In 20 years online I’ve seen it once, maybe.”
Are you aware of the famous advice regarding poker games and suckers?
This comment is a great example of how torn I am sometimes when allocating votes. At first glance, it seems like you’re just saying something insulting and possibly mean, and adding little to the conversation—verdict: downvote.
Then, I realize that this is actually a good counterargument, once the time is taken to unpack it. Clearly it would be the case that a participant in the community would think that the “idiots” aren’t the lonely voices of reason, if they actually were. If this happens frequently, then such a person would not notice the phenomenon at all.
Now I must decide whether to leave it neutral or upvote it. Since I had to do the work to get at the point of the comment, rather than having it spelled out in the comment (preferably with references where applicable), I would think it’s not worth an upvote. On the other hand, the act of working out something like this is itself valuable (as we see in Zen stories), so maybe it is worth an upvote.
YMMV.
I see quite a different perspective:
I intend to steal this quip frequently.
In my opinion, if handled improperly (e.g. Reddit), up/down-vote systems enable predatory herd dynamics, since the accumulation of karma points triggers competition between members and an excessive sense of self-worth.
When you don’t have a separate karma count for every single community you are in, what can happen is people start acting on purpose to get karma (which is also dopamine), adapting their behaviour to the general trend of every community they join, for the mere sake of acquiring what is perceived as “credibility” and “respect”.
Gaining karma by throwing out toxic, very low-effort, provocative comments becomes the praxis, and it’s very effective, since people always struggle to prove their superiority, be it charismatic or intellectual.
People also tend to validate whoever keeps the game on the field where they feel confident, leaving little to no room for a real, healthy exchange of different perspectives.
Lots of interesting and good things come out of 4chan. The signal/noise is low, but there is still lots of signal, and it had no high ideals to start with at all.
I wonder if an explicitly rationalist site without standards would devolve into something that was still interesting and powerful. I think I would trade LW/OB for a site where a thousand 13-year-old Bayesians insulted each others’ moms and sometimes built up rationality. In the long run it’s probably worth more.
Also, I have a higher quality comment which my posting time is too small to contain.
Lots of memes come out of 4chan. I’m not sure I’d call any of them “good” in any way beyond their being amusing. (Of note: “4chan was never good” is a meme in and of itself.)
The thousand-13-year-old-Bayesian LW would never “build up” anything approximating rationality, I’d conjecture. It would select for rational arguments to some extent, but it would also select for creative new obscenities, threads about how to get laid, and rationalist imagefloods (whatever that would consist of) being spammed over and over. 4chan has almost 200 million posts and I can’t think of any meaningful contribution it has made to human knowledge.
Don’t get me wrong, it has its purpose, but I don’t believe you could ever get a community with a 4chan-style atmosphere to promote any sort of epistemic virtues, largely because I think what it would take to be noticed there would almost intrinsically require some kind of major violation of those virtues.
That may well be one of the most scathing accusations I’ve ever heard leveled, but I’m not sure if it’s quite entirely true. Surely there’ve been a succinct atheistic demotivator or two to come out?
I haven’t been paying particularly close attention so I could be wrong, but it seems like 4chan has also made real contributions toward raising public awareness about the Church of Scientology and its crimes.
That’s the impression that I got too—does anyone have figures? Is recruitment down, or did the church have to spend a significant amount of money on damage control?
Scientology met its Vietnam (to quote a former CoS public relations officer who had by then escaped) in 1995 when it took on alt.religion.scientology. By 1997, when they were suing Grady Ward, it came out that their income was a quarter of what it had been in 1995. By that stage they'd already lost; the momentum against them was only going to increase (and this is indeed what happened), and the rest was mop-up.
tl;dr: they have taken such a hit from the Internet over the past fifteen years that their current income is a shadow of what it once was. However, they have enough reserves—Hubbard was very big on reserves—to keep all their offices open for years and possibly decades if they wanted to.
No one's done a definitive estimate of the impact, but "Project Chanology" did attract thousands of protestors and a lot of mainstream media attention. I didn't mean to argue that 4chan has never accomplished anything positive, or even that there isn't a lot of creative activity there—I just don't see any of it as having advanced the frontiers of human understanding in any meaningful sense.
http://4.bp.blogspot.com/_dzzZwXftwcg/R7nmF_bpsQI/AAAAAAAACKo/Os0WrGbEguo/s400/remember-santa.jpg
Edited to link to accessible image.
I get error 403 trying to access it. But I suppose you meant this : remember santa
Well what do you think a positive 1000 13 year old LW would look like?
Competing with 4chan for the attention of 13 year olds is the scope of the problem we face. I'm saying that 1000 young bayesians is a goal, and that if that community comes to exist, it just won't look at all like LW, or have its mores.
The LW atmosphere probably won’t grab that audience. (And many of the new posts would be perceived as low quality here, even if they were above average for 13 year olds.)
Also, the only memes you will see ‘coming out of 4chan’ are the most viral ones. If it also contained a rationalist subculture, it might not be obvious, unless you were one of the 13 year olds whose thinking was changed.
Ahem: http://sexdrugsandappliedscience.blogspot.com/2009/04/bayesian-inference-for-boys.html
sfw?
I think so:
It talks about sex, but not with swear words or in terms of body parts.
thanks =)
What about Project Chanology? It helps by fighting a flagrant face of irrationality, at least.
Kudos! A lot of my friends don't understand why I practice martial arts; they just don't understand how privileged they are in never having needed it.
I’m skeptical of the value of advanced martial arts as a practical self-defense tool. The way that, say, muggers operate, I suspect that even Bruce Lee would find himself compelled to hand over his wallet. (You’re walking along, and suddenly some guy sneaks up behind you and puts you in a chokehold or something, while another guy, in front of you, demands your money.) Three random guys with baseball bats could probably beat up any single martial arts expert that they got the drop on—and if they have knives or guns...
More generally, the risk of getting injured or worse makes attempting violence too costly, even if you could win 90% of encounters. Being stronger only pays off where you can't avoid confrontation by handing over whatever moderate amount of money you have on you, which discounts the worth of training still further, on top of the already low probability of being attacked at all.
Perhaps this happens occasionally, but I know several people who’ve been mugged, and all of them have been mugged by a single person. In fact, I know a number of martial artists through my own training who have been subject to mugging attempts, and all of them successfully defended themselves.
It’s likely there’s some selection bias going on, since it’s rather embarrassing to admit to other practitioners that you failed to defend yourself from a mugger, but while there are certainly situations that no human, however skilled, can fight their way out of, martial arts are definitely better than useless at defending oneself and one’s property.
Of course, learning martial arts is rarely the most effective way to defend oneself. It’s usually more practical to stay out of situations where you’d need to use them at all. The way I see it, anyone who practices martial arts solely for self defense is in it for the wrong reason.
At least in terms of practical usefulness, it beats athletic skills like basketball, although you’ll never get paid as much for it even if you’re really good.
A small non-random sample, but I saw a discussion of the usefulness of martial arts where about half the participants said that what they’d actually used is the knowledge of how to fall safely.
I can attest that falling safely is in fact very useful!
There’s actually been a decent bit of research into martial arts falling in non-combat situations. Kojustukan has a pretty good summary.
When my father was (successfully) mugged, it was by a group of three. (He also remarked that lone muggers tend to fail, unless they have a gun—it’s too easy to simply run away from them.)
Of course, the plural of anecdote is not data, etc.
I’d be interested to know where he got that information. Personally, I’m inclined to be skeptical; I think most people would rather not take the risk of trying to run away. If they have you at knifepoint, or have a grip on you, then you’re at a distinct disadvantage trying to escape. The upside is that most muggers don’t really want to hurt their victims, but that’s a very risky thing to rely on.
I suspect Bruce Lee would’ve handled himself fine. The whole reason he was sent to the US by his family was that he was brawling too much in the streets (and presumably winning, although I can’t immediately find any online sources which say that).
You might find a couple of my blog posts interesting, the more recent is http://williambswift.blogspot.com/2009/04/avoiding-combat.html , and it links to the other. I include this quote in the more recent post: “Violent crime is feasible only if its victims are cowards. A victim who fights back makes the whole business impractical.” (Jeff Cooper, Principles of Personal Defense).
Typo: at the bottom of the post, where the previous post is referred. Underconfidence has an extra ‘e’
I see that Robert Scoble has recently posted on a good way of creating a responsible online community:
http://scobleizer.com/2009/04/20/decentralized-moderation-is-the-chat-room-savior/
Um, so, who is the fool you’re talking about, or are you just speaking hypothetically?
Definitely I’m one of them. Or just me. I’ve been posting a lot in ineffective directions and my ideas don’t seem aligned well with the group. Sorry, Eliezer. I enjoyed my LW experience—it is a fun community. Best.
(Written later:) Reading through MrHen’s comment, it is interesting to me that we are both new to the group (I’m 2 weeks older) and both feel like posers. (We have karma scores around 55). I think it is interesting that in response to a general reprimand from Eliezer, we both had similar thoughts in our heads (I claim this) but responded quite differently. I have heard before that a gender difference when it comes to grant resubmission in the sciences is that women take the first rejection personally and don’t resubmit at the same rate. While MrHen requested more feedback, I wanted to make an apology and exit before I further offended, even though I wasn’t certain to what extent it was me.
Was my guess that the “fool is me” an overly sensitive response to criticism? I was worried that my harping on religion might be factious, and so I already felt guilty.
How does a person know if they don’t fit, or if their ideas align well enough?
My impression: anyone serious who wants to participate should feel free to do so in the welcome thread. You can ask feedback there on what background understandings or goals are expected; you can share your reasons for being a theist and ask if others have good responses; etc.
If this isn’t the case, perhaps we should create an explicit “Newcomers’ questions”, or “Background” thread, specifically for being a place where people without the expected LW background can ask for help, question conclusions the rest of the community wants to take as established and move on from (e.g., concerning theism), etc.?
I agree with cyphergoth that it would be nice to have certain background material that can be assumed in LW conversations, so that we can maintain a high level of conversation throughout most of LW.
I also think it would be a pity for byrnema to just leave, given the skills and willingness to learn that she’s demonstrated in her posts.
If we want to be a vibrant community, we’ll probably need to both maintain high-quality discussions most places, so that we can establish things and move on, and also to create “bridges” so that folks who have something to contribute but who don’t yet quite meet the rather high demands for full LW contribution can (preferably interactively) gain that background. (For example, I’m concerned that we don’t have more teens or even undergrad-aged people here. Of the 234 people who completed my recent survey, 5 are under eighteen, and 15 others are eighteen to twenty—compared to 57 who are twenty-two to twenty-four. It’s hard to have the full prereqs while young, but a good rationalist movement needs to help its kids train.)
I agree. Members who feel unsure of their ability to contribute at their current level should refrain from commenting too much, and think about what they say more carefully (but they shouldn't be strongly segregated). General questions that would otherwise be short articles may be asked in the Newcomers' threads, a combination of Welcome thread and Ideas-you-are-not-ready-to-post thread, but separate from Open threads.
Look, if E wants to stomp you, expect to feel it. The whole point of the above is that he sees no virtue in holding back.
I’d be very surprised if Eliezer was obliquely referring to you. You’ve said things that go against the consensus here, but they’ve been of generally good quality.
I would presume that Eliezer has specifically told any banning candidates exactly why their contributions are problematic, and that he’s withholding named examples here out of politeness. So if you haven’t had a serious rebuke about your conduct on LW, I don’t think you’re implicated.
Probably, just as my response is probably an overly-sensitive response to perceived passive-aggressive behavior.
That is, I’ve been in too many groups, business and otherwise, where the leader speaks vaguely about People who are Doing Bad Things, but neither names any names, nor adequately delineates the bad things such that you can tell whether you’re doing them—now OR in the future.
I find it kind of annoying, at the least.
Although I wouldn't go so far as to assert that I speak for the majority of the community (though I hope I do), my view is that so long as you are making a good-faith effort to contribute and grow along with the community, you are okay. After looking over your comment/post history, I will say that I have no doubts that you are making such an effort.
My impression of this group has been that the tone is overall welcoming and supportive, and dissenting views and misapprehensions are met with civility and patience. This is exactly what I expect from a rationalist group, and why I like it here.
From feedback in this thread, I understand that no plurality wants me kicked off LW for stomping on flowers but, indeed, perhaps theistic views (or, in my case, theistic sympathies) are not compatible with your programme. Since there seems to be some debate left, I would like to participate and have a hand in my fate regarding inclusion here.
An explanation of how theism could possibly be consistent with being rational is begged in nearly every other comment in this thread. I would like to provide one, and I will do so in the Welcome Page, as suggested by AnnaSalamon. I’m not certain that I’m ready—a better understanding of LW would help me prepare a more relevant and convincing argument—but the time seems right. I will paste a link here when my argument is posted.
I would like to assure you that I will not persist indefinitely in this argument; it is not my intention to be factious. When and if the tide has shifted so that the general view of the group is that atheism is a precondition (minimum standards, consistency, etc.), then I will know that my views are not aligned well enough. Already I am of a mind that this is not the place to debate theism generally—there are other forums for that. However, this would be the place to debate the relationship between theism and rationality, to the extent that it is undecided and of interest.
Try the Open Thread, not the welcome page.
Also this links to much of the previous discussion of those arts by which even a moderately competent rationalist would flatly rule out all present-day religions.
If a topic is consistently downvoted, it really does seem to me that one ought to take a hint that perhaps the discussion is not locally desired, for whatever reason. I try to take those hints that are represented by my own comments being downvoted or my own posts not being upvoted. Consider doing likewise.
Eliezer, why recommend open thread? The idea was that the falsity of theism is something most of LW (including open thread) would like to move on from, but that we might well want a beginner-friendly area such as the welcome thread, or the "Ideas you're not ready to post" thread, or if not those then some other thread we could create, where less background is okay. Folks who want an exclusive conversation can just stay out of that thread.
This is an example of why I thought it would be useful to have a rationality wiki. People could simply point at the content it contained on specific issues (such as the relationship between theism and rationality) instead of having to reinvent the wheel over and over again.
Requesting permission to post top-level in response to The Uniquely Awful Example of Theism. It’s not perfect, but I wrote it carefully.
You have enough karma—go for it!
Byrnema… I really think that you should take the hint.
If I were in your shoes, I’d be fairly scared of posting about this again if I’d expect to be shot down. But please don’t be afraid. I think such a post would really be interesting.
If it is shot down, that’s a fact about the ideas, or maybe how they were laid down, not about you, after all. In that case, it’s up to the people who disagree, to explain how they think you’re wrong, or why they disagree.
If you hold the ideas you’re exposing, as dear, or part of your identity, it may even hurt a bit more than simply being rebuked, but even then, really, I think it’ll only help the community, and you, to move forward, to add them on the mat, and see where it leads.
I think we should start saying explicitly that this is an atheists-only community.
It’s not that we don’t want to help theists become more rational; we do. But this website isn’t primarily an outreach mechanism; it’s a facilitator for discussions among people who are already on board with the basics of what we’re about. And those basics absolutely rule out theism and other supernatural stuff. I think we could say fairly categorically that if you don’t understand why theism is ruled out, you’re not ready to post here.
I would be at least as concerned about initially repulsing atheists who don’t feel that they want to be part of an overly-exclusionary community, as about driving out intelligent theists.
One good reason to, if not exclude theists, then at least minimize their presence here, is the very real possibility that they could become a large minority. Once their number has reached a certain percentage of the LW community, it’s almost a certainty that they will team up with agnostics and atheists who believe in belief to enforce deference towards their beliefs; by constantly chiding vocal anti-theists, or simply by making it known that they are theists once in a while, they will make it rude to state that theism is, in fact, irrational. This isn’t a new phenomenon, it’s one that already exists in nearly every society in the Western world, as well as most internet communities. I’d hate to see it take hold at Less Wrong too.
Probably this is a better idea than the one I started with—there’s a distinction between an atheist community and an atheist-only community, and the former may be wiser.
I feel like repeating myself here: just don’t foster the discussion concerned directly with religion. We don’t fight religion, we fight for people’s sanity. Otherwise, it’s like not allowing sick people to a hospital. In most cases, good hygiene within the community should be enough to keep the patients from harming each other.
So part of the question is whether this is a hospital or a medical conference.
Data point: I followed a link to OB from reddit, got (largely) cured, and am now doing my best to assist the doctors.
Fair enough. More specifically, the problem is in the efficiency of this criterion: yes, there is a correlation, but is a rule worth enforcing, would it hurt more than help? So, I guess the point comes down to me not considering this particular feature as salient as you do.
So in essence, you’re asking whether this is:
a place for experts on rationality to come and discuss / build on important developments, or
a place for people who need rationality to come and get better
ne?
The distinction isn’t quite as simple as I’m making—we are all actively fighting our own and each other’s irrationalities—but I still think there’s a line that can be drawn of whether a person is fundamentally in tune with the rationalist values that this site is all about.
However, I am given pause by the fact that everyone except Annoyance seems to disagree with me.
I agree with you in a weak sense. My position is that while we shouldn't officially exclude theists from participation, we should nevertheless be free to take atheism completely for granted -- as would be manifested, for instance, in unhesitatingly using theism as a canonical example of irrationality. The kind of theist who will be welcome here is the kind who can handle this.
I think it is a huge mistake to assume that someone who is irrational in one area isn't perfectly rational in other areas. I can easily imagine an intelligent theist writing helpful posts on nearly every subject that comes up here. The fact that he's making errors when it comes to his thinking on God is totally beside the point. Creationists I'd be considerably more skeptical of, but I don't think it's impossible to imagine a good post on, say, keeping your brain healthy, or rhetorical tricks for convincing people, coming from people with very wrong views in other areas. If a theist came here, I take it you should downvote his/her bad arguments for theism and upvote any positive contributions they make. End of story.
Learning someone is a theist might be Bayesian evidence that they are a fool, but it's not strong enough evidence to prevent entry to the community even before seeing what the person has to offer.
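(The "Bayesian evidence" point can be made concrete with a toy calculation. Every number below is invented purely for illustration—it's a sketch of Bayes' rule, not a claim about actual base rates:)

```python
# Toy Bayes'-rule update: how much should learning one fact about a
# person shift our estimate of an unrelated hypothesis about them?
# All probabilities are hypothetical, chosen only for illustration.

def posterior(prior, likelihood_given_h, likelihood_given_not_h):
    """P(H | E) via Bayes' rule, from P(H), P(E | H), and P(E | ~H)."""
    numerator = likelihood_given_h * prior
    evidence = numerator + likelihood_given_not_h * (1 - prior)
    return numerator / evidence

# Hypothetical numbers: prior P(poor contributor) = 0.2, and the trait
# in question is somewhat more common among poor contributors (0.6 vs 0.4).
p = posterior(prior=0.2, likelihood_given_h=0.6, likelihood_given_not_h=0.4)
print(round(p, 3))  # 0.2 rises only to ~0.273
```

Even evidence that genuinely shifts the odds can leave the posterior well below the threshold for action—which is exactly the point: a modest update is not grounds for barring someone before seeing what they post.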
I don’t see what we have here as a bag of tricks you can use to improve your rationality, but a programme that forms a coherent whole, as I set out here. To be a theist is to fail to grasp the whole thrust of the programme.
I’m not sure that is the case. To be a biblical literalist waiting for the rapture is certainly to have nothing in common with the programme. But there are theists who could share your concerns about the effect of our biases and who could make helpful contributions to that cause. And even though this isn’t a deconversion website I think if you want to evangelize for the rationalist project having a community that is open and friendly even to those who still have some irrational beliefs left over is really important. Would you let in questioning theists? Fideists?
If we’re so keen on outreach, why aren’t we talking about it?
EDIT: I should add that I wouldn’t want to leave the impression that my typical picture of a theist is some thundering fundamentalist; I have several pagan and liberal Christian friends, and they are smart and lovely people. Still, the Escher-brained things they say when you try and talk about the subject discourage me from the idea that they’re going to help us move things forward.
Superhappies would ask you, in the name of fairness, to invent a symmetric rite of admission for atheists. Some Bayesian-obvious truth that would sound similarly unacceptable to their social circle.
For example, we atheists could get a taste of theists’ feelings by declaring aloud that “women/blacks and men/whites don’t have equal intelligence on average” and watching the reactions. A “bigoted” version of Dawkins or Eliezer could arise and argue eloquently how this factual statement is irrelevant to morality, just like the issue of god’s existence. That was inflammatory on purpose; you could go for something milder, like the goodness of monarchy relative to democracy.
For cooperation to arise, the opposing side needs to have relative advantage. For the theists to ask atheists to argue for theism, they should consider atheists to be better at arguing for theism than they are. Fairness is not just about symmetry, but also about cooperation. And cooperation requires improvement in the outcome for all sides.
I wasn’t asking atheists to argue for theism. And I don’t understand your reply at all. Could you explain?
I confess I don’t understand what you mean by this. Are you wondering why more people haven’t commented on that post? Why I haven’t commented on that post?
Does this have something to do with our previous exchange?
Good questions. I guess I am venting my frustration that my lovely post has had so few comments. It feels like there's a conversation to be had about the whole subject that we keep nibbling at the edges of in exchanges like this, when we should be driving hard for the center. If my post is a poor way to start on that, someone should make a better one.
So to tie that back into our exchange, I feel like I’d be better armed to discuss who we should be encouraging to post here in the name of outreach if we’d had a discussion on what sort of outreach we might do and what role this website might play in that.
However, it’s also more than possible that I have entirely the wrong end of the stick, in which case I’d appreciate guidance on where the right end might be found :-)
You're right that debating the factors that affect outreach would be a lot easier if we had criteria for what effective outreach means.
I think people prefer posts that go a long way toward making some argument, in contrast with those that just ask for input. Even if people like the question, they're less likely to promote the post. But your comment outlining the programme got a lot of karma. Why not make that into a full post and talk about the sorts of things you'd like our goals to be?
One other possibility is that it's just too soon to do outreach. Maybe we need more time to mature and systematize our ideas.
I didn’t want to do that because I wanted to discuss everyone’s ideas, not just my own which I’m not wholly confident of, but you might be right that it would be a better way forward. Thanks.
Outreach? For someone who seems so avowedly anti-religious, you seem very eager to appropriate all the trappings of classical, unthinking religion. I’m fine discussing rationality here, but talk of proselytizing makes me nauseous.
I think you may be underestimating the degree of irony with which we’re using religious language.
Not to mention greatly overestimating the extent to which a superficial similarity implies a deep one. I plan to continue to urinate, even though the Pope does so too.
Funny, I would have said that most biblical literalists have a lot more in common with us than moderate Christians. Literalists tend to believe they believe in God because of the evidence they see for His existence, whereas moderates usually acknowledge that there is no evidence, but they believe anyway. Of course, literalists make several mistakes about what constitutes evidence, and they reach some damn crazy conclusions, but the basic cognitive mechanism of reason is there: Beliefs must be supported by logic and evidence. Moderates don’t even have that much. It’s why fundamentalists who are forced to accept that a big part of their belief system is false (such as Genesis) tend to drop the whole thing and become atheists, while moderates will go on making excuses to keep on believing in their infinitely malleable worldview.
I doubt anyone needs to be warned that their argument for religious faith would have to be exceptional indeed to earn a positive response here.
I’m proposing something stronger than that: it’s not appropriate to post arguments for religious faith here at all. In fact, I’m proposing something stronger than that: if you don’t understand why theism is ruled out, you’re not ready to post here at all.
Agreed, with reservations. (Some might be useful examples. Some might be sufficiently persuasive prima facie to be worth a look even though we’d be astonished if they turned out actually to work.)
If theism were just one more thing that people can easily be wrong about, perhaps you’d be right. As it is, there’s huge internal and external pressure influencing many people in the direction of theism, and some people are really good at compartmentalizing; and as a result there are lots of people who are basically good thinkers, who are basically committed to deciding things rationally, but who are still theists. I don’t see any reason to believe that no one in that position could have anything to offer LW.
Once again: Would you want to keep out Robert Aumann?
Um. No. Busted.
Still, we can agree that Aumann is not on board with the programme...
“Still, we can agree that Aumann is not on board with the programme...”
What on earth are you talking about? A legendary rationalist is “not on board with the programme” here at a website ostensibly devoted to the discussion of rationality because he might be a theist? Get a grip. There is no such “programme” that would exclude him.
The site would be helped most not by categorically excluding theists, but by culling out all the blinkered and despicable cult-like elements that seem to worm their way persistently into the manner of speaking around here.
Aumann is a mathematician-of-rationality, not a rationalist. Completely different skillset. It would be great to have him here, but not because he agrees with the site’s basic goals and premises.
Or to put it another way: expert on rationality versus expert at rationality?
This is one of the more asinine things I’ve seen on here. There are many, many brilliant people who happen to be theists, and to categorically exclude their contributions and viewpoints would be doing the community a grave disservice. I’m an atheist myself, but I’ve never thought for a second that “God doesn’t exist” is any kind of fundamental, unassailable axiom of rationality. It’s not.
AlexU, could you re-phrase your comment to have more descriptive discussion of the consequences you want to avoid, or of the evidence that leads you to disagree with ciphergoth? Right now, your comment mostly reads as “I really really want to express how aligned I/we am with niceness/tolerance/etc., and how socially objectionable ciphergoth’s comment is.” If you can think through the reasons for your response, and include more testable descriptions per connotation, the conversation will probably head more useful places.
ETA: I have the same suggestion concerning your other two recent comments.
The site is about rationality, not dogma—I think. Posts should be judged on the strength and clarity of their ideas, not the beliefs of the individual posters who espouse them. To categorically exclude an entire class of people—some of whom are very good rationalists and thinkers—simply because they don't subscribe to some LW party line, is not only short-sighted but, perversely, seems to run entirely counter to the spirit of a site devoted to rationality.
The consequences, I imagine, would be less interesting, less broad discussion, with a constricting of perspective and a tendency to attract the same fairly narrow range of people who want to talk about the same fairly narrow range of topics. It will select not for good rationalists per se, but some mix of people who overly fancy themselves good rationalists, as well as the standard transhumanism/Singularity crowd that’s here because of EY.
Observe: Although this post has the same conclusion, because it has different arguments, it is voted up while similar-concluding different-argued comments by the same poster are voted down. (I agree with this judgment; this is how it is supposed to be.) Those wondering exactly what it takes to get voted up or voted down have here a good example before them.
“To categorically exclude an entire class of people—some of whom are very good rationalists and thinkers—”
But that’s the point. No one who belongs to that class is a good rationalist. I’m sure there are people who belong to that class who in limited contexts are good rationalists, but speaking globally, one cannot be a rationalist of any quality and exempt some assertion from the standards of rationality.
This isn’t about the perfect being the enemy of the good. It’s about minimum standards, consistency, and systematic honesty.
If you possess evidence that shows theism to be rationally justifiable, present it.
You can’t speak globally when it comes to the human brain.
Sure, if brains had any sort of global consistency or perfect internal software reuse, you could say that being a rationalist rules out believing in irrational things.
But as a practical matter, you can’t insist on consistency when someone might simply not have thought of applying the same logic to all their beliefs… especially since MOST of the beliefs we have are not perceptible as beliefs in the first place. (They just seem like “reality”.)
In addition, our brains are quite capable of believing in contradictory things at the same time, with one set controlling discourse and the other controlling behavior. In work with myself and others who have no conscious religious beliefs, I’ve often discovered, mid-mindhack, that there’s some sort of behavior in the person being driven by an unconscious desire to go to heaven or not be sent to hell. So even someone who thinks they’re an atheist can believe in silly things, without even knowing it.
So, IMO, it makes as much sense to ban people with supernatural beliefs, as it does to ban people who have idiotic beliefs about brains being consistent.
Actually, come to think of it, the belief that people’s brains must be consistent IS a supernatural belief, as there’s no physical mechanism in the brain that allows O(1) updating of belief structures that don’t share common components. To insist that the moment one becomes a rationalist, one must then become an atheist, is to insist on a miracle inconsistent with physics, biology, and information science.
So, if we are going to exclude people with inconsistent or supernatural beliefs, let’s start with the people who insist that the brain must be supernaturally consistent. (This is actually pretty reasonable, since such an error in thinking arises from the same mind-projection machinery that gives rise to theism, after all...)
I would expect the potential commentariat at Less Wrong to be terribly small if anyone holding a firm belief that is not rationally justifiable were banned.
I am highly skeptical that I have fully purged myself of all beliefs where I have been presented with correct damning evidence against them. If anything, reading here has raised my estimate of how many such beliefs I might hold. Even as I purge many false propositions, I become aware of more biases to which I am demonstrably subject. Can anyone here who is aware of the limitations of our mental hardware say otherwise?
I am not as convinced as most posters here that all possible versions of theism are utterly wrong and deserve to be accorded effectively zero probability, but in any case, it’s clear that LW (and OB) communities generally wish to consider the case against theism closed. To the extent that the posters do not attack theism or theists in an obviously biased way, I have respected the decision and post and vote accordingly, including downvoting people who try to reopen the argument in inappropriate places.
I also intend to make a habit of downvoting those who waste time denouncing theism in inappropriate contexts or for specious reasons having more to do with signaling tribal bonds than with bringing up any new substantive argument against theism.
I don’t recall who suggested that we need another canonical example of irrationality, but I agree wholeheartedly. In fact, I’d suggest we need a decent short list to rotate through, so that no one topic gets beaten up so consistently as to encourage an affective death spiral around it.
I would rather emphasize Raising the Sanity Waterline. If we bar theists outright, we miss the opportunity to discuss rationality with them in different contexts. We don’t get to learn what insights they might have when their biases are not in the way. We don’t get to teach them about all the biases using nonreligious examples, so that they might, on their own, figure out to check for those same biases in their theistic beliefs. If we allow theists, we still have the karma system to bury any obnoxious comments they make on discussions of religion, and the same karma system will encourage them to participate in the areas where they will get the most benefit.
Are you so confident in your perfect, unerring rationality that you’ll consider that particular proposition completely settled and beyond questioning? I’m about as certain that there is no God as one can get, but that certainty is still less than 100%, as it is for virtually all things I believe or know. Part of maintaining a rational outlook toward life, I’d think, would be keeping an attitude of lingering doubt about even your most cherished and long-held beliefs.
Yes, that will always be technically true—no belief can be assigned a probability of 100%. Nevertheless, my utility calculations recognize that the expected benefit of questioning my stance on that issue is so small (because of its infinitesimal probability) that almost anything else has a higher expected value.
Why then should I question that, when there is so much else to ask?
Where are you getting the idea that Annoyance said this?
Why isn’t this comment voted higher? (I presume it is because it is relatively new.) This is exactly the kind of comment that makes it easier on new/shy people. This sort of feedback is phenomenal. It may be harsh but it (a) gives specific criticisms without being vindictive (b) offers a reinterpretation of the original post and (c) offers suggestions on how to proceed in the future.
I feel that AlexU’s response was a vast improvement and is evidence to the value of AnnaSalamon’s comment.
Since you and I have voted it up, I guess two people have voted it down. That seems strange to me too.
Really must set up my LessWrong dev environment so I can add a patch to show both upvotes and downvotes!
Indeed. If that is the only change to this site’s system or ethic that comes out of this discussion, it will have been worth it.
A well-made point, AlexU. Unfortunately, while the point is correct, the argument which it is a part of is not.
Atheism isn’t axiomatic. It follows necessarily from the axioms of reason applied to the available evidence. If someone is a theist, that means either that they reject reason, or they have shocking evidence which is not available to others… and which they need to make available if they want others to recognize that their position is a sane one.
At present, there is absolutely no reason to think that anyone is in possession of such hidden evidence. Given the non-existence of such data, it follows that theists reject reason—which other, independent evidence confirms.
In this world, one cannot be informed, sane, and believe that the Earth is flat. The available evidence simply does not support that position. Nor does it support belief in a deity or deities.
No, but one can be fairly informed, sane, and a theist.
There are instrumental reasons for accepting theism that are hardly matched by rejecting it. For the most part, people don’t think the question of God’s existence is very important—if it is the case that a good Christian would live the same in the absence of God’s existence (a common enough contention) then nothing really turns on the question of God’s existence. Since nothing turns on the question, there’s no good reason to be singled out as an atheist in a possibly hostile environment.
If anything, there’s something terribly (instrumentally) irrational about calling oneself an atheist if it confers no specific benefit. And for many people, the default position is theism; the only way to become an atheist is to reject a commonly-held belief (that, again, nothing in life really turns on).
So I’d agree that a scholar of religion might be (epistemically) irrational to be a theist. But for the everyday person, it’s about as dangerous as believing the Earth to be a sphere, when it really isn’t.
Yeah. Another way of putting this is that no one is completely sane. People act irrationally all the time and it doesn’t make sense to target a group of people who have irrational beliefs about an issue that hardly affects their life while not targeting others (including ourselves) for acting irrationally in a bunch of different ways that really affect the world.
“No, but one can be fairly informed, sane, and a theist.”
No.
I wish I could elaborate more, but your statement is simply wrong. In our world, with our evidence, sanity and being informed rule out theism.
There are different standards for what to consider sane. At least among ourselves, we should raise the sanity waterline. But as the word is normally used, informed and rational theists are considered possible.
Not sure I disagree with your position, but I voted down because simply stating that your opponent is wrong doesn’t seem adequate.
I would like you to elaborate more. I gave an argument in favor of being a theist. I have seen few good ones in favor of being an atheist.
I’m not at all convinced that atheism is the best epistemic position (most epistemically rational). I’m an atheist for purely methodological reasons, since I’m a philosopher, and dead dogma is dangerous. I could see someone being a theist for purely instrumental reasons, or by default since it’s not a very important question.
“I have seen few good ones in favor of being an atheist.”
That misses the point. Atheism is the null hypothesis; it’s the default. In the complete absence of evidence, non-commitment to any assertion is required.
The idea of a null hypothesis is non-Bayesian.
A null hypothesis in Bayesian terms is a theory with a high prior probability due to minimal complexity.
I’m not sure it’s so clear cut.
The key point is that when you do the p-value test you are determining p(data | null_hyp). This is certainly useful to calculate, but it doesn’t tell you the whole story about whether your data support any particular non-null hypotheses.
Chapter 17 of E.T. Jaynes’ book provides a lively discussion of the limitations of traditional hypothesis testing, and is accessible enough that you can dive into it without having worked through the rest of the book.
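The contrast being drawn here can be made concrete with a toy example (my own sketch, not from the thread): for coin-flip data, p(data | null) sums over unobserved "more extreme" outcomes, while a Bayesian comparison uses only the data actually seen, and forces you to name a specific alternative hypothesis.

```python
from math import comb

def binom_pmf(k, n, p):
    """Probability of exactly k heads in n flips of a coin with bias p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

n, k = 20, 15  # observed: 15 heads in 20 flips

# Frequentist tail: P(result at least this extreme | null hypothesis p = 0.5).
p_value = sum(binom_pmf(i, n, 0.5) for i in range(k, n + 1))

# Bayesian comparison: likelihood of the exact data under the null,
# versus under one specific alternative (here, an arbitrarily chosen
# 0.75-biased coin).
likelihood_null = binom_pmf(k, n, 0.5)
likelihood_alt = binom_pmf(k, n, 0.75)
bayes_factor = likelihood_alt / likelihood_null  # evidence ratio, alt vs. null

print(f"one-sided p-value under null: {p_value:.4f}")
print(f"Bayes factor (alt/null):      {bayes_factor:.1f}")
```

Note that the p-value alone cannot say whether the data favor the 0.75-coin over the fair coin; that comparison requires both likelihoods, which is the point the comment is making.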
The Cohen article cited below is nice, but it’s important to note that it doesn’t completely reject the use of null hypotheses or p-values.
I think it’s funny that the observation that it’s “non-Bayesian” is being treated here as a refutation, and got voted up. Not terribly surprising though.
Could you be more explicit here? I would also have considered that if the charge of non-Bayesianness were to stick, that would be tantamount to a refutation, so if I’m making a mistake then help me out?
The charge was not that the idea is not useful, nor that it is not true, either of which might be a mark against it. But “non-Bayesian”? I can’t unpack that accusation in a way that makes it seem like a good thing to be concerned about. Even putting aside that I don’t much care for Bayesian decision-making (for humans), it sounds like it’s in the same family as a charge of “non-Christian”.
One analogy: non-mathematical, not formalized, not written in English, and attempts to translate generally fail.
See [*] for a critique of null hypothesis and related techniques from a Bayesian perspective. To cite:
[*] J. Cohen (1994). “The Earth Is Round (p < .05)”. American Psychologist 49(12):997–1003. [pdf].
Being non-Bayesian is one particular type of being untrue.
Now, what does this mean? Sounds horribly untrue.
But atheism isn’t actually the default. A person must begin study at some point in his life—you start from where you actually are. Most people I’m aware of begin their adult lives as theists. Without a compelling reason to change this belief, I wouldn’t expect them to.
“But atheism isn’t actually the default.”
Well… yes, it is. I do not know of any theistic infants. Actually, I’m not aware that infants have any beliefs as such.
Young children seem predisposed to attribute things to powerful but non-present entities, but I’m fairly certain there are logical fallacies involved.
The fact that many people accept certain concepts as given without questioning them thoroughly—or at all—does not constitute a justification for believing those things. I have often heard the claim that philosophy does not attempt to examine premises but only to project and study the consequences of the premises people bring to it; I consider that to be one of the reasons why ‘philosophy’ is without merit.
It seems that Annoyance and thomblake are using different definitions of “default”.
Annoyance uses it the same way as “null hypothesis”: the theory with the smallest complexity, and therefore the best prior probability, which any other theory needs evidence to compete with. In this sense, atheism is the default position: supposing that the universe follows mindless laws of nature, without the need for initial setup or continuous intervention by any sort of intelligent power, is simpler than supposing the universe acts the same way because some unexplained deity wills it. This definition is useful for figuring out what our beliefs ought to be.
Thomblake seems to mean by “default” the belief one held upon reaching one’s current level of rationality, which one will keep until finding a reason to change it. For most people, who are introduced to a religion at a young age, before they get a chance to learn much about anything approaching rationality, some sort of theism would be this default. This definition is useful for figuring out why people believe what they believe, and how to convince them to change their beliefs.
Now, I am not sure what we mean by “sanity”, but consider someone who maintains a default position (in thomblake’s sense) that they would not have adopted if it were first presented at their current level of rationality. While they may benefit from achieving an even higher level of rationality (or may simply not have reviewed all their default positions), they are not necessarily incapable of achieving that higher level.
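The “smallest complexity, best prior” notion in the first definition can be sketched numerically (a toy illustration of an Occam-style prior; the bit counts are made-up numbers, not real measurements of any theory’s complexity):

```python
# Toy Occam prior: weight each hypothesis by 2^(-description length in bits),
# then normalize. A hypothesis that needs extra machinery to specify
# (e.g. the same laws PLUS a deity) pays for every additional bit.
hypotheses = {
    "mindless laws of nature": 100,             # bits to specify (illustrative)
    "same laws + an intervening deity": 130,    # strictly more to specify
}

weights = {h: 2.0 ** -bits for h, bits in hypotheses.items()}
total = sum(weights.values())
priors = {h: w / total for h, w in weights.items()}

for h, p in priors.items():
    print(f"{h}: prior {p:.3g}")
```

Every extra bit of specification halves the prior, so the more complex hypothesis starts a factor of 2^30 behind here and needs correspondingly strong evidence to compete — which is what “needs evidence to compete with” means in this sense of “default”.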
I’m not even entirely sure that we’re all using the word ‘atheism’ to refer to the same things.
This highlights the problems that arise when people use the same terminology for different concepts.
You keep doing this. Simply stating the opposite of another statement is not helping. Even if you clarify a little later it seems to be indirectly and without a solid response to the original point.
That’s why you need to read the sentences following the one you quoted.
Infants without beliefs do not last long. They get beliefs eventually. Trying to argue this point just pushes the relevant stuff up the tree and makes the argument about semantics that are not particularly useful for the topic at hand.
And… are you saying that the null hypothesis is whatever an infant believes? How is that useful? I think it degrades definitions of things like “atheism” by saying that if you make no choice it is the same as making the correct choice. Coming to the correct conclusion for the wrong reason is the wrong solution.
The null hypothesis could be wrong. Logical fallacies are irrelevant.
This is irrelevant to the topic. So, at the end, I spent my time telling you your comment was mostly irrelevant. I should just downvote and bury it like I did the other one.
“And… are you saying that the null hypothesis is whatever an infant believes? ”
Yes, I have stopped beating my wife, thank you for asking.
I think you need to review what the concept of the null hypothesis actually is.
That wasn’t a loaded question. That was asking for clarification.
(PS) I just noticed that “-1 points” is plural. Is that correct for negative numbers?
Oddly enough, yes. “0 points” is also the standard. The singular only applies for 1.
Yes, that’s one of the odd things about plurality, and why I argue that it’s a silly thing to encode in so much of our language. Singular means exactly one, plural means any other number. Sometimes we use the singular and “of a” for fractions, like “one quarter of a pie”, but “0.25 pies” is also correct.
ETA citation
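The rule described above — singular for exactly 1, plural for 0, negatives, and everything else — is exactly what a karma display would implement. A minimal sketch (the function name is my own, not from any LessWrong codebase):

```python
def format_points(n):
    """Format a karma score: '1 point', but '0 points', '-1 points', '2 points'."""
    noun = "point" if n == 1 else "points"
    return f"{n} {noun}"

for n in (-1, 0, 1, 2):
    print(format_points(n))
```

English collapses to just two categories here (one vs. everything else); other languages need more cases, which is part of why encoding plurality in grammar is as messy as the comment suggests.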
“That wasn’t a loaded question. That was asking for clarification.”
No, clarification is when you have an imprecise idea and ask someone to provide more content to make it clearer. What you did was ask about something that was neither said nor implied.
I have no interest in denying the infinite number of meanings I don’t express in any given post, only in discussing the meanings I do express. Feel free to ask questions about those.
I am frankly amazed that so simple and evident an assertion should receive so many negative votes. (Not surprised, merely amazed. It would have to violate my expectations to be a surprise.)
Can I assert that Santa Claus does not exist and cannot be rationally considered to exist without receiving similar votes, or do I need to review the demonstration of why such is the case to avoid the wrath of the voters?
A more pertinent question: why should any of us care about negative votes when they’re given out so poorly?
Downvoted because it adds nothing to what you said before. Repetition of bald assertions, even true ones, is one habit we want to avoid.
I didn’t vote the post in question up or down, but I would speculate that it was received negatively simply because the tone came across as rude.
There’s sometimes a tendency in rationalists to observe (accurately) that our society overemphasizes politeness over frankness, and then to take it upon ourselves to correct this. Unfortunately, being human, we tend to do this selectively: by being ruder to others, sometimes to an overcompensating extent, while still reacting poorly to the rudeness of others. At least, that’s an issue I’ve had in the past. Your mileage may vary.
My personal take on it is that keeping to the standard level of etiquette is less trouble than the alternative, especially when trying to function in a conversational setting with a wide range of people. The metaphor of apparently unnecessary politeness as a “social lubricant” of sorts has been helpful to me in this regard.
But as I said, I’m only guessing here. I think you’d be within your rights to simply stop caring about the votes you get, be they positive or negative. Just be aware that you may be giving up on useful feedback information that way.
Great comment, agreed on all points. One of my mottos is “As polite as possible; as rude as necessary”.
I can’t see anything in Annoyance’s writings that could not be conveyed with less rudeness except their urge to ensure we all understand the contempt they hold us all in.
I like that motto a lot. Another one that bears on this is Postel’s Law: “Be conservative in what you do; be liberal in what you accept from others.”
In the case of wanting to deemphasize politeness, this would suggest being more lenient in the amount of rudeness you allow from others, but not increasing it in your output. Sort of the principle behind Crocker’s Rules.
That comment could equally well have gone in “The ideas you’re not ready to post,” come to think of it.
And, then again, some people just enjoy being obnoxious.
My downvote (along with most others I presume) is not about agreement, but about whether you are adding anything useful to the discussion. Argument by repeated assertion is not supposed to be a staple of rationalist discourse. Either it’s worth your time to provide some links to an actual argument or it isn’t.
Do you really expect points for needing to get in the last word?
Your statement was simply wrong, by most commonly used definitions of sanity. Try pleading insanity in court based purely on a belief in god. Your comment also added nothing of value to the discussion.
The rational thing to do when you get downvoted would be to at least consider the possibility that your own judgement is at fault rather than assuming it is proof that negative votes are given out without good reason.
“Your statement was simply wrong, by most commonly used definitions of sanity.”
True, but not useful. The most commonly-used definitions of sanity are not only incorrect but insane.
“Your comment also added nothing of value to the discussion.”
That’s very useful feedback, indeed. Now I appreciate your thoughts and votes much more accurately.
How can a definition be incorrect?
If you find the common usage incoherent or otherwise not useful, don’t use it. To do otherwise is to lie.
The assumption of both above comments is that there can be multiple commonly-used definitions of a word. Annoyance is using one of the commonly-used definitions that doesn’t fit into the ‘most’ above. He asserts that the other definitions are not only incorrect but insane, and I think this answers your question—a definition can be incorrect in the case that it is insane. Though I think calling a definition ‘insane’ is an odd use of the word.
I think you have to remember that saying something obvious is not the same as saying something useful. If someone came by and said “It is rational to believe in Santa Claus” it does not help to say “No it isn’t. Sorry, can’t elaborate.”
I would have to write an entire post—and a quite lengthy one at that—to do justice to the demonstration, and it’s already common knowledge.
If repeating something short and simple that’s already been said is so undesirable, why in the world would I wish to post something large, complex, and cumbrous that’s already widely known? Why would any of you wish me to do so?
Sorry, I deleted my comment because two other people basically said the same thing. I was hoping to get it out before you responded. My bad.
I am not necessarily saying I would rather you post a huge wall of text. Personally, I would just link to a good summary of the material and say, “This has been covered before.”
Another way to respond would be to play coy and ask for more details. This, at the very least, encourages more dialogue.
Another solution is to just not respond at all.
None of these are particularly fun, but I like to think you can at least avoid the negative response from the community.
You’re doing fine.
It strikes me as a purely theoretical point—an admonition that we’re not downvoting enough.
But who knows—someone who EY has repeatedly called a ‘troll’ has been pretty consistently on the ‘top contributors’ list since near when we started here.
Note that he’s been consistently losing Karma ever since the automatic self-upvote was turned off.
I doubt that’s the whole story—he doesn’t post nearly as frequently as a lot of folks around here, and if you look at his recent comment history, a lot of his comments are at about −5 that (as far as I can tell) would be at about 0 if they were posted by anyone else. Seems like he’s getting unusually and inappropriately slammed with downvotes lately.
Look at it this way: it is folly to evaluate the known in terms of the unknown, while it’s necessary to evaluate the unknown in terms of the known.
It’s much, much easier to decide the value of a comment or comment history than to judge the value of how people vote for it. How many people read a comment, but don’t vote? How many positive and negative votes are there? What do we know about how insightful and wise the voting community is as a whole, and how do we determine how well the voters manifest those qualities in individual cases?
The quality of the comments is clearer—more known—than the quality of the votes. It follows that the karma system doesn’t provide us with a way to judge the comments, but a way to judge the community. Not a great way by any means, admittedly, but a method.
That may be right. People vote not just on the comment, but on the person. Over time, the impression sinks in and shifts the baseline of their voting decisions.
mormon1 and psycho (probably the same person with multiple accounts) tried to troll, but were/was quickly deleted.
You seem to be implying that communities such as 4chan are “bad”. You did not say that explicitly, but you heavily implied it. Why do you think free-speech communities are bad? Your choice of the word “fool” to characterize someone with whom you disagree is questionable at best.
A quick factual note: 4chan unconditionally bans child pornography and blocks (in a Wikipedia sense) the IPs, as I found out myself back when I was browsing through Tor. They’ll also moderate off-topic posts or posts in the wrong section. They actually have a surprisingly lengthy set of rules for a place with such an anarchistic reputation.
One: I support the above post. I’ve seen quite a few communities die for that very reason.
Two: Gurren Lagann? (pause) Gurren Lagann? Who the h*ll do you think I am?
Barry Kort posted notice of this article on a thread at The Wikipedia Review.
We have there, over the years, often considered this problem, with Wikipedia being the sublimely ridiculous example.
Here are a couple of threads that come to mind:
Wiki Induced Knowledge Impairment
The Web Is Making People Stupid
Open to discussion here or there …
Jon Awbrey
What communities actually die in that way? If they don’t actually end but continue differently then it’s like saying science fiction died because new authors with their newfangled take on the genre changed things (disclaimer: I don’t really know anything about science fiction).
In the case of spam there is a problem of high volume (raivo pommer estee is a good counter-example, as there’s generally no more than 1 per thread and it’s short) but otherwise I don’t really see the harm in idiots posting. Anybody is free to skip past stuff they don’t care about (I do it all the time) and people get value out of even reading stupid comments, so I don’t see what’s so terrible that it outweighs it. I’m with Hopefully Anonymous on how I rate blogs by their comment policies.
And yet here you are, rather than at 4chan.
I’m out of downvotes, but this is not a reasonable criticism of his point.
It’s my impression that 4chan is about anime and lolcats. I hate both. It is also my impression that more people choose 4chan over here than choose here over 4chan. I think 4chan was set up to be just what it is. Is there a Less Wrong analogue that got turned into a 4chan?
Usenet.
For the record, I am also having a hard time deciding how to vote on comments to this post. Is it too early to turn up the standards and vote everyone down?
Don’t overcompensate. Reversed neutrality isn’t intelligent censorship, and downvoting people more than usual, just to obey the idea that now you should downvote, won’t work well, I think. Take a step back, and some time to see the issue from an outside view.
Agreed—What seems to be happening, funny enough, is an echo chamber. Eliezer said “you must downvote bad comments liberally if you want to survive!” and so everyone’s downvoting everyone else’s comments on this thread.
Except they are not. The complete irony is that my comment about downvoting dropped to −4 and has been climbing ever since. It displayed the exact behavior I was complaining about in my other comment. I expected this comment to drop like a rock and now it is sitting with all of my other bad and mediocre posts at a useless −1. My comment was terrible. It should be downvoted.
(Edit) Oh, I guess I should voice my agreement with infotropism. I think downvoting “more” is just overcompensating.
Which was terrible and sitting at −1? I don’t understand. All I was trying to indicate is that I’ve noticed a pronounced deviation from standard upvoting and downvoting practices in this thread, mostly towards downvoting.
This comment has been fluctuating between −1 and −4 for a while. As of now it is at −3.
I was using it as an example of people upvoting a comment that really was not doing anything. Since it is back to −3, I suppose I have no valid point left. So, yeah, you could be right.
awesome post, eliezer. you sound like quirrel.
I think you’re reading that into it. I read this well before MoR was a gleam in Eliezer’s eye, and didn’t find it very what-I-might-later-label-as-Quirrelly-which-is-actually-”Hanson-y”.
Reading this, I found myself wondering what Eliezer felt he needed to justify censoring, and why he felt the need to talk us into approving his doing so.
Then I took a look at the “Top Contributors”, and noticed that despite having a karma score higher than several people on the list, my account wasn’t listed.
Hm.
Mine neither. I don’t think it’s censorship.
I see you above thomblake and below Vladimir. Do you see something different?
It seems as though ‘top contributors’ has been sluggish lately—I’ve observed a few cases of changes in karma not affecting the list, where I saw it update immediately in the past. I also observed what Annoyance notes above.
ETA: also, there are several high-karma people who seem absent from the list for no apparent reason.
It’s been reported. The sluggishness is due to caching, but there are too few people in the list and there are people with higher karma that are not being included.
I see it now, but that’s changed today. Anyway, this is already in the bug tracker.
I remember recently someone noted that having anything ‘banned’ removes you from ‘top contributors’ - can’t seem to find the reference. Not sure if there are any other relevant mechanisms.
PRACTICE DOWNVOTING ON THIS COMMENT FAGS LOLOLOLOL
Damn, now I can’t decide between downvoting this comment as practice, and upvoting it for giving me an opportunity to practice.
check his comment history, find a couple good comments you missed, and upvote them. Then downvote this one.
I have taken your advice.
I just go through this loop of thinking “OK, that’s a clever comment, nicely self-referential, I’m upvoting that. No, wait, I’m showing just the lack of discipline Eliezer is talking about! I should have the strength to downvote. CannibalSmith is setting a good test. Neat of him to do it. Must upvote...”
I downvoted. Clearly I really do need the practice. It’s clever of him to provide the opportunity to do so. I should upvote...
So, at the time of writing, this comment is at −16, which suggests that a lot of us have set our preferences to display all comments regardless of score.
Should we, perhaps, be more trusting of our fellow readers, and actually accept the existence of a threshold below which comments get hidden?
Too few comments ever get hidden for this to be of any use at this point, people are just being curious.
I’ve upped mine to 1 and will probably raise it further.
thanks =)
ETA: If somebody’s practicing on me too, I’m fine with that, but may I propose a general policy that we are allowed to be polite and, you know, human without random downvotes?
Voted up for your ETA...
Obeying. Even though I had some strong reasons to upvote. Edit: you’re running for a record there—the most downvoted comment on LW :-)
Gotta do something with all that karma. :)
Like others, I want to vote this up and down; you should make another comment saying “tip jar”. :-)
Kossack!
With −28 points, why is the comment I am responding to not the last top-level one on the page?
It is for me. Perhaps you inadvertently changed the comment sort order dropdown?
Thanks. You have to sort by “Top” to get that effect. I was sorting by “Popular”.
I don’t think I ever changed it—perhaps “Popular” is the default.
Unnecessary and gratuitous. Downvoted. :-)