Why Our Kind Can’t Cooperate
From when I was still forced to attend, I remember our synagogue’s annual fundraising appeal. It was a simple enough format, if I recall correctly. The rabbi and the treasurer talked about the shul’s expenses and how vital this annual fundraiser was, and then the synagogue’s members called out their pledges from their seats.
Straightforward, yes?
Let me tell you about a different annual fundraising appeal. One that I ran, in fact; during the early years of a nonprofit organization that may not be named. One difference was that the appeal was conducted over the Internet. And another difference was that the audience was largely drawn from the atheist/libertarian/technophile/sf-fan/early-adopter/programmer/etc crowd. (To point in the rough direction of an empirical cluster in personspace. If you understood the phrase “empirical cluster in personspace” then you know who I’m talking about.)
I crafted the fundraising appeal with care. By my nature I’m too proud to ask other people for help; but I’ve gotten over around 60% of that reluctance over the years. The nonprofit needed money and was growing too slowly, so I put some force and poetry into that year’s annual appeal. I sent it out to several mailing lists that covered most of our potential support base.
And almost immediately, people started posting to the mailing lists about why they weren’t going to donate. Some of them raised basic questions about the nonprofit’s philosophy and mission. Others talked about their brilliant ideas for all the other sources that the nonprofit could get funding from, instead of them. (They didn’t volunteer to contact any of those sources themselves, they just had ideas for how we could do it.)
Now you might say, “Well, maybe your mission and philosophy did have basic problems—you wouldn’t want to censor that discussion, would you?”
Hold on to that thought.
Because people were donating. We started getting donations right away, via Paypal. We even got congratulatory notes saying how the appeal had finally gotten them to start moving. A donation of $111.11 was accompanied by a message saying, “I decided to give **** a little bit more. One more hundred, one more ten, one more single, one more dime, and one more penny. All may not be for one, but this one is trying to be for all.”
But none of those donors posted their agreement to the mailing list. Not one.
So far as any of those donors knew, they were alone. And when they tuned in the next day, they discovered not thanks, but arguments for why they shouldn’t have donated. The criticisms, the justifications for not donating—only those were displayed proudly in the open.
As though the treasurer had finished his annual appeal, and everyone not making a pledge had proudly stood up to call out justifications for refusing; while those making pledges whispered them quietly, so that no one could hear.
I know someone with a rationalist cause who goes around plaintively asking, “How come the Raelian flying-saucer cult can get tens of thousands of members [probably around 40,000] interested in complete nonsense, but we can’t even get a thousand people working on this?”
The obvious wrong way to finish this thought is to say, “Let’s do what the Raelians do! Let’s add some nonsense to this meme!” For the benefit of those not immediately stopped by their ethical inhibitions, I will observe that there may be a hundred failed flying-saucer cults for every one that becomes famous. And the Dark Side may require non-obvious skills, which you, yes you, do not have: Not everyone can be a Sith Lord. In particular, if you talk about your planned lies on the public Internet, you fail. I’m no master criminal, but even I can tell certain people are not cut out to be crooks.
So it’s probably not a good idea to cultivate a sense of violated entitlement at the thought that some other group, who you think ought to be inferior to you, has more money and followers. That path leads to—pardon the expression—the Dark Side.
But it probably does make sense to start asking ourselves some pointed questions, if supposed “rationalists” can’t manage to coordinate as well as a flying-saucer cult.
How do things work on the Dark Side?
The respected leader speaks, and there comes a chorus of pure agreement: if there are any who harbor inward doubts, they keep them to themselves. So all the individual members of the audience see this atmosphere of pure agreement, and they feel more confident in the ideas presented—even if they, personally, harbored inward doubts, why, everyone else seems to agree with it.
(“Pluralistic ignorance” is the standard label for this.)
If anyone is still unpersuaded after that, they leave the group (or in some places, are executed)—and the remainder are more in agreement, and reinforce each other with less interference.
(I call that “evaporative cooling of groups”.)
The ideas themselves, not just the leader, generate unbounded enthusiasm and praise. The halo effect is that perceptions of all positive qualities correlate—e.g. telling subjects about the benefits of a food preservative made them judge it as lower-risk, even though the quantities were logically uncorrelated. This can create a positive feedback effect that makes an idea seem better and better and better, especially if criticism is perceived as traitorous or sinful.
(Which I term the “affective death spiral”.)
So these are all examples of strong Dark Side forces that can bind groups together.
And presumably we would not go so far as to dirty our hands with such...
Therefore, as a group, the Light Side will always be divided and weak. Atheists, libertarians, technophiles, nerds, science-fiction fans, scientists, or even non-fundamentalist religions, will never be capable of acting with the fanatic unity that animates radical Islam. Technological advantage can only go so far; your tools can be copied or stolen, and used against you. In the end the Light Side will always lose in any group conflict, and the future inevitably belongs to the Dark.
I think that one’s reaction to this prospect says a lot about one’s attitude towards “rationality”.
Some “Clash of Civilizations” writers seem to accept that the Enlightenment is destined to lose out in the long run to radical Islam, and sigh, and shake their heads sadly. I suppose they’re trying to signal their cynical sophistication or something.
For myself, I always thought—call me loony—that a true rationalist ought to be effective in the real world.
So I have a problem with the idea that the Dark Side, thanks to their pluralistic ignorance and affective death spirals, will always win because they are better coordinated than us.
You would think, perhaps, that real rationalists ought to be more coordinated? Surely all that unreason must have its disadvantages? That mode can’t be optimal, can it?
And if current “rationalist” groups cannot coordinate—if they can’t support group projects so well as a single synagogue draws donations from its members—well, I leave it to you to finish that syllogism.
There’s a saying I sometimes use: “It is dangerous to be half a rationalist.”
For example, I can think of ways to sabotage someone’s intelligence by selectively teaching them certain methods of rationality. Suppose you taught someone a long list of logical fallacies and cognitive biases, and trained them to spot those fallacies and biases in other people’s arguments. But you are careful to pick those fallacies and biases that are easiest to accuse others of, the most general ones that can easily be misapplied. And you do not warn them to scrutinize arguments they agree with just as hard as they scrutinize incongruent arguments for flaws. So they have acquired a great repertoire of flaws of which to accuse only arguments and arguers who they don’t like. This, I suspect, is one of the primary ways that smart people end up stupid. (And note, by the way, that I have just given you another Fully General Counterargument against smart people whose arguments you don’t like.)
Similarly, if you wanted to ensure that a group of “rationalists” never accomplished any task requiring more than one person, you could teach them only techniques of individual rationality, without mentioning anything about techniques of coordinated group rationality.
I’ll write more later (tomorrow?) on how I think rationalists might be able to coordinate better. But today I want to focus on what you might call the culture of disagreement, or even, the culture of objections, which is one of the two major forces preventing the atheist/libertarian/technophile crowd from coordinating.
Imagine that you’re at a conference, and the speaker gives a 30-minute talk. Afterward, people line up at the microphones for questions. The first questioner objects to the logarithmic scale used in the graph on slide 14; he quotes Tufte on The Visual Display of Quantitative Information. The second questioner disputes a claim made in slide 3. The third questioner suggests an alternative hypothesis that seems to explain the same data...
Perfectly normal, right? Now imagine that you’re at a conference, and the speaker gives a 30-minute talk. People line up at the microphone.
The first person says, “I agree with everything you said in your talk, and I think you’re brilliant.” Then steps aside.
The second person says, “Slide 14 was beautiful, I learned a lot from it. You’re awesome.” Steps aside.
The third person—
Well, you’ll never know what the third person at the microphone had to say, because by this time, you’ve fled screaming out of the room, propelled by a bone-deep terror as if Cthulhu had erupted from the podium, the fear of the impossibly unnatural phenomenon that has invaded your conference.
Yes, a group which can’t tolerate disagreement is not rational. But if you tolerate only disagreement—if you tolerate disagreement but not agreement—then you also are not rational. You’re only willing to hear some honest thoughts, but not others. You are a dangerous half-a-rationalist.
We are as uncomfortable together as flying-saucer cult members are uncomfortable apart. That can’t be right either. Reversed stupidity is not intelligence.
Let’s say we have two groups of soldiers. In group 1, the privates are ignorant of tactics and strategy; only the sergeants know anything about tactics and only the officers know anything about strategy. In group 2, everyone at all levels knows all about tactics and strategy.
Should we expect group 1 to defeat group 2, because group 1 will follow orders, while everyone in group 2 comes up with better ideas than whatever orders they were given?
In this case I have to question how much group 2 really understands about military theory, because it is an elementary proposition that an uncoordinated mob gets slaughtered.
Doing worse with more knowledge means you are doing something very wrong. You should always be able to at least implement the same strategy you would use if you were ignorant, and preferably do better. You definitely should not do worse. If you find yourself regretting your “rationality” then you should reconsider what is rational.
On the other hand, if you are only half-a-rationalist, you can easily do worse with more knowledge. I recall a lovely experiment which showed that politically opinionated students with more knowledge of the issues reacted less to incongruent evidence, because they had more ammunition with which to counter-argue only incongruent evidence.
We would seem to be stuck in an awful valley of partial rationality where we end up more poorly coordinated than religious fundamentalists, able to put forth less effort than flying-saucer cultists. True, what little effort we do manage to put forth may be better-targeted at helping people rather than the reverse—but that is not an acceptable excuse.
If I were setting forth to systematically train rationalists, there would be lessons on how to disagree and lessons on how to agree, lessons intended to make the trainee more comfortable with dissent, and lessons intended to make them more comfortable with conformity. One day everyone shows up dressed differently, another day they all show up in uniform. You’ve got to cover both sides, or you’re only half a rationalist.
Can you imagine training prospective rationalists to wear a uniform and march in lockstep, and practice sessions where they agree with each other and applaud everything a speaker on a podium says? It sounds like unspeakable horror, doesn’t it, like the whole thing has admitted outright to being an evil cult? But why is it not okay to practice that, while it is okay to practice disagreeing with everyone else in the crowd? Are you never going to have to agree with the majority?
Our culture puts all the emphasis on heroic disagreement and heroic defiance, and none on heroic agreement or heroic group consensus. We signal our superior intelligence and our membership in the nonconformist community by inventing clever objections to others’ arguments. Perhaps that is why the atheist/libertarian/technophile/sf-fan/Silicon-Valley/programmer/early-adopter crowd stays marginalized, losing battles with less nonconformist factions in larger society. No, we’re not losing because we’re so superior, we’re losing because our exclusively individualist traditions sabotage our ability to cooperate.
The other major component that I think sabotages group efforts in the atheist/libertarian/technophile/etcetera community, is being ashamed of strong feelings. We still have the Spock archetype of rationality stuck in our heads, rationality as dispassion. Or perhaps a related mistake, rationality as cynicism—trying to signal your superior world-weary sophistication by showing that you care less than others. Being careful to ostentatiously, publicly look down on those so naive as to show they care strongly about anything.
Wouldn’t it make you feel uncomfortable if the speaker at the podium said that he cared so strongly about, say, fighting aging, that he would willingly die for the cause?
But it is nowhere written in either probability theory or decision theory that a rationalist should not care. I’ve looked over those equations and, really, it’s not in there.
The best informal definition I’ve ever heard of rationality is “That which can be destroyed by the truth should be.” We should aspire to feel the emotions that fit the facts, not aspire to feel no emotion. If an emotion can be destroyed by truth, we should relinquish it. But if a cause is worth striving for, then let us by all means feel fully its importance.
Some things are worth dying for. Yes, really! And if we can’t get comfortable with admitting it and hearing others say it, then we’re going to have trouble caring enough—as well as coordinating enough—to put some effort into group projects. You’ve got to teach both sides of it, “That which can be destroyed by the truth should be,” and “That which the truth nourishes should thrive.”
I’ve heard it argued that the taboo against emotional language in, say, science papers, is an important part of letting the facts fight it out without distraction. That doesn’t mean the taboo should apply everywhere. I think that there are parts of life where we should learn to applaud strong emotional language, eloquence, and poetry. When there’s something that needs doing, poetic appeals help get it done, and, therefore, are themselves to be applauded.
We need to keep our efforts to expose counterproductive causes and unjustified appeals from stomping on tasks that genuinely need doing. You need both sides of it—the willingness to turn away from counterproductive causes, and the willingness to praise productive ones; the strength to be unswayed by ungrounded appeals, and the strength to be swayed by grounded ones.
I think the synagogue at their annual appeal had it right, really. They weren’t going down row by row and putting individuals on the spot, staring at them and saying, “How much will you donate, Mr. Schwartz?” People simply announced their pledges—not with grand drama and pride, just simple announcements—and that encouraged others to do the same. Those who had nothing to give, stayed silent; those who had objections, chose some later or earlier time to voice them. That’s probably about the way things should be in a sane human community—taking into account that people often have trouble getting as motivated as they wish they were, and can be helped by social encouragement to overcome this weakness of will.
But even if you disagree with that part, then let us say that both supporting and countersupporting opinions should have been publicly voiced. Supporters being faced by an apparently solid wall of objections and disagreements—even if it resulted from their own uncomfortable self-censorship—is not group rationality. It is the mere mirror image of what Dark Side groups do to keep their followers. Reversed stupidity is not intelligence.
- 16 May 2011 11:22 UTC; 1 point) 's comment on People who want to save the world by (
- 14 Sep 2012 15:58 UTC; 1 point) 's comment on [LINK] Interfluidity on “Rational Astrologies” by (
- 18 Dec 2012 1:43 UTC; 0 points) 's comment on Parallelizing Rationality: How Should Rationalists Think in Groups? by (
- 2 Sep 2013 21:50 UTC; 0 points) 's comment on Rudeness by (
- 14 May 2011 0:48 UTC; 0 points) 's comment on The elephant in the room, AMA by (
- 7 Dec 2010 13:42 UTC; 0 points) 's comment on Defecting by Accident—A Flaw Common to Analytical People by (
- 4 Mar 2014 13:37 UTC; 0 points) 's comment on Learning languages efficiently. by (
- 18 May 2015 19:54 UTC; 0 points) 's comment on Open Thread, May 18 - May 24, 2015 by (
- 25 Dec 2010 17:33 UTC; 0 points) 's comment on Newtonmas Meetup, 12/25/2010 by (
- 26 Apr 2013 3:14 UTC; 0 points) 's comment on Why safety is not safe by (
- 26 Sep 2015 22:21 UTC; 0 points) 's comment on Subjective vs. normative offensiveness by (
- 11 Mar 2011 9:59 UTC; 0 points) 's comment on Ben Goertzel on Charity by (
- 1 Jan 2016 16:03 UTC; 0 points) 's comment on What EAO has been doing, what it is planning to do, and why donating to EAO is a good idea by (
- 26 Apr 2013 13:36 UTC; 0 points) 's comment on LW Women Entries- LW Meetups by (
- 2 Sep 2010 12:21 UTC; 0 points) 's comment on Less Wrong: Open Thread, September 2010 by (
- 23 Mar 2009 12:23 UTC; 0 points) 's comment on Playing Video Games In Shuffle Mode by (
- 5 Dec 2010 2:26 UTC; 0 points) 's comment on Aieee! The stupid! it burns! by (
- 28 Jan 2014 22:34 UTC; 0 points) 's comment on Open thread, January 25- February 1 by (
- 25 Feb 2017 1:30 UTC; -1 points) 's comment on Why I left EA by (EA Forum;
- 15 Aug 2013 0:34 UTC; -1 points) 's comment on Engaging Intellectual Elites at Less Wrong by (
- 30 Mar 2011 0:47 UTC; -2 points) 's comment on I want to learn programming by (
- 10 Feb 2014 23:08 UTC; -2 points) 's comment on Publication: the “anti-science” trope is culturally polarizing and makes people distrust scientists by (
- 11 Sep 2009 18:06 UTC; -3 points) 's comment on Why Our Kind Can’t Cooperate by (
- 23 Jun 2011 14:40 UTC; -3 points) 's comment on They Changed It, Now It Sucks by (
- 3 Sep 2012 15:58 UTC; -3 points) 's comment on Open Thread, September 1-15, 2012 by (
- Confronting the Mindkiller—a series of posts exploring Political Landscape (Part 1) by 2 Jun 2012 20:17 UTC; -3 points) (
- 11 Apr 2013 3:28 UTC; -4 points) 's comment on LW Women Submissions: On Misogyny by (
- 15 May 2013 8:06 UTC; -9 points) 's comment on Avoiding the emergency room by (
In this community, agreeing with a poster such as yourself signals me as sycophantic and weak-minded; disagreement signals my independence and courage. There’s also a sense that “there are leaders and followers in this world, and obviously just getting behind the program is no task for so great a mind as mine”.
However, that’s not the only reason I might hesitate to post my agreement; I might prefer only to post when I have something to add, which would more usually be disagreement. Since I don’t only vote up things I agree with, perhaps I should start hacking on the feature that allows you to say “6 members marked their broad agreement with this point (click for list of members)”.
That would be great.
That would be a great feature, I think. Ditto on broad disagreements.
This is a good point, but I think there’s a ready solution. Agreement and disagreement, by themselves, are rather superficial; what rationalists have more respect for is arguments. When you agree with someone, it seems that you don’t have the burden of formulating an argument because, implicitly, you’re referring to the first person’s argument. But when you disagree with someone, you do have the burden of formulating a counterargument. So I think this is why rationalists tend to have more respect for disagreement than agreement: disagreement requires an argument, whereas agreement doesn’t.
But on reflection, this arrangement is fallacious. Why shouldn’t agreement also require an argument? I think it may seem to add to the strength of an argument if multiple people agree that it is sound, but I don’t think it does in reality. If multiple people develop the same argument independently, then the argument might be somewhat stronger; but clearly this isn’t the kind of agreement we’re talking about here. If I make an argument, you read my argument, and then you agree that my argument is sound, you haven’t developed the same argument independently. Worse, I’ve just biased you towards my argument.
The better alternative is, when you agree with an argument, there should be the burden of devising a different argument that argues for the same conclusion. Of course, citing evidence also counts as an “argument”. In this manner, a community of rationalists can increase the strength of a conclusion through induction; the more arguments there are for a conclusion, the stronger that conclusion is, and the better it can be relied upon.
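The claim that independently developed arguments strengthen a conclusion can be made quantitative in odds form: if each line of evidence really is independent (given the hypothesis and given its negation), its likelihood ratios multiply. A minimal sketch with made-up numbers, offered only as an illustration of the principle, not as anyone’s actual model:

```python
def posterior_odds(prior_odds, likelihood_ratios):
    """Combine independent lines of evidence in odds form.

    Each likelihood ratio is P(evidence | H) / P(evidence | not-H);
    independence of the evidence, given H and given not-H, is assumed.
    """
    odds = prior_odds
    for lr in likelihood_ratios:
        odds *= lr
    return odds

def odds_to_prob(odds):
    """Convert odds back to a probability."""
    return odds / (1 + odds)

# One argument of modest strength (likelihood ratio 3) moves a 1:1 prior to 75%.
single = odds_to_prob(posterior_odds(1.0, [3.0]))

# Three independent arguments of the same strength reach 27:1, about 96%.
triple = odds_to_prob(posterior_odds(1.0, [3.0, 3.0, 3.0]))
```

This is the sense in which agreement-plus-a-new-argument adds evidential weight, while a bare “me too” (which reuses the original argument) does not.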
In that case you’re “writing the last line first”, and I suspect it might not reduce bias. Personally, I often try to come up with arguments against positions I hold or am considering, which sometimes work and sometimes don’t. Of course, this isn’t foolproof either, but it might be less problematic.
In real life this is common, and the results are not always bad. It’s incredibly common in mathematics. For example, Fermat’s Last Theorem was a “last line” for a long time, until someone finally filled in the argument. It may also be worth mentioning that the experimental method is also “last line first”. That is, at the start you state the hypothesis that you’re about to test, and then you test the hypothesis—which test, depending on the result, may amount to an argument from evidence for the hypothesis.
Another case in point, this time from history: Darwin and natural selection. At some point in his research, natural selection occurred to him. It wasn’t, at that point, something that he had very strong evidence for, which is why he spent a lot of time gathering evidence and building argument for it. So there’s another “last line first” which turned out pretty well in the end.
No. When you state the hypothesis, it means that, depending on the evidence you are about to gather, your bottom line will be that the hypothesis is true or that the hypothesis is false (or that you can’t tell if the hypothesis is true or false). Writing the Bottom Line First would be deciding in advance to conclude that the hypothesis is true.
Depending on where the hypothesis came from, the experimental method may be Privileging the Hypothesis, which the social process of science compensates for by requiring lots of evidence.
Deciding in advance to conclude that the hypothesis is true is not a danger if the way you decide to do that is by some means that in reality won’t let you do that if the hypothesis is false. Keep in mind: you can decide to do something and still be unable to do it.
Suppose I believe that a hypothesis is true. I believe it so strongly that I believe a well-designed experiment will prove that it is true. So I decide in advance to conclude that the hypothesis is true by doing what I am positive in advance will prove the hypothesis, which is to run a well-designed experiment which will convince the doubters. So I do that, and (suppose) that the experiment supports my hypothesis. The fact that my intentions were to prove the hypothesis doesn’t invalidate the result of the experiment. The experiment is by its own good design protected from my intentions.
A well-designed experiment will yield truth whatever the intentions of the experimenter. What makes an experiment good isn’t good intentions on the part of the experimenter. That’s the whole point of the experiment: we can’t trust the experimenter, and so the experiment by design renders the experimenter powerless. (Of course, we can increase our confidence even further by replicating the experiment.)
Now let’s change both the intention and the method. Suppose you don’t know whether a hypothesis is true and decide to discover whether it is by examining the evidence. The method you choose is “preponderance of evidence”. It is quite possible for you, completely erroneously and unintentionally, to in effect cherry-pick evidence for the hypothesis you were trying to test. People make procedural mistakes like this all the time without intending to. For example, you see one bit of evidence, and make note of the fact that this particular bit of evidence makes the hypothesis appear to be true. But now, uh oh! You’re subject to confirmation bias! That means that you will automatically, without meaning to, start to pay attention to confirming evidence and ignore disconfirming evidence. And you didn’t mean to!
Absolutely, but privileging the hypothesis is a danger whether or not you have decided in advance to conclude the hypothesis. Look at Eliezer’s own description:
This detective has, importantly, not decided in advance to conclude that Snodgrass is the murderer.
I think the thing which is jumping out as strange to me is doing this after you’ve been convinced, seemingly to enhance your credence. Still, this is a good point.
The danger that Eliezer warns against is absolutely real. So what’s special about math? In the case of math, I think that there is something special, and that is that it’s really, really hard to make a bogus argument in math and pass it by somebody who’s paying attention. In the case of experimental science, the experiment is deliberately constructed to take the result out of the hands of the experimenter. At least it should be. The experimenter only controls certain variables.
So why is there ever a danger? The problem seems to arise with the mode of argument that involves “the preponderance of evidence”. That kind of argument is totally exposed to cherry-picking, allowing the cherry-picker to create whatever preponderance he wants. It is, unfortunately, maybe the most common argument that you’ll find in the world.
The two methods can be combined: When you read something you agree with, try to come up with a counterargument, if you can’t refute the counterargument, post it, if you can, then post both the counterargument and its refutation.
Sorry, I’m not exactly sure what “writing the last line first” means. I’m guessing you’re referring to the syllogism, and you take my proposal to mean arguing backwards from the conclusion to produce another argument for the same conclusion. Is that correct?
I’m referring to this notion of knowing what you want to conclude, and then fitting the argument to that specification. My intuition, at least, is that it would be more useful to focus on weaknesses of your newly adopted position—and if it’s right, you’re bound to end up with new arguments in favor of it anyway.
I agree, though, that agreement should not be taken as license to avoid engaging with a position.
I suppose I should note, given the origin of these comments, that I recommend these things only in a context of collaboration—and if we’re talking about a concrete suggestion for action or the like rather than an airy matter of logic, the rules are somewhat different.
Should arguers be encouraged, then, to not write all the arguments in favor of their claim, in order to leave more room for those who agree with them to add their own supporting arguments?
This requires either refraining from fully exploring the subject (so that you don’t think of all the arguments you can) or straight out omitting arguments you thought of. Not exactly Dark Side, but not fully Light Side either...
Y’know, you may be right. I also suspect this is something that depends to a significant extent on the type of proposition under consideration.
Does it really signal that to other readers, or is that just in your mind? If you see someone posting an agreement, do you really judge him as a weak-minded sycophant?
If they post just an “Amazing post, as usual, Eliezer” without further informative contribution, then I too get a mild sense of “sucking up” going on.
Actually, this whole blog (as well as Overcoming Bias) does have this subtle aura of “Eliezer is the rationality God that we should all worship”. I don’t blame EY for this; more probably, people are just naturally (evolutionarily?) inclined to religious behaviour, and if you hang around LW and OB, then you might project it onto the person who acts like the alpha male of the pack. In fact, it might not even need any religious undertones. It could just be “alpha-male mammalian evolutionary society” stuff.
Eliezer is a very smart person. Certainly much smarter than me. But so is Robin Hanson. (I won’t get into which one is “smarter”, as they are both at least two levels above me.) And I feel he is often under-appreciated (perhaps that’s the closest word), perhaps because he doesn’t post as often, but perhaps also because people tend to “me too” Eliezer a lot more often than they “me too” Robin (though again, this might be because EY posts much more frequently than RH).
It’s simpler than that: 1) Eliezer expresses certainty more often than Robin, and 2) he self-discloses to a greater degree. The combination of the two induces a tendency toward identification and aspiration. (The evolutionary reasons for this are left as an exercise for the reader.)
Please note that this isn’t a denigration—I do exactly the same things in my own writing, and I also identify with and admire Eliezer. Just knowing what causes it doesn’t make the effect go away.
(To a certain extent, it’s just audience-selection—expressing your opinions and personality clearly will make people who agree/like what they hear become followers, those who disagree/dislike become trolls, and those who don’t care one way or the other just go away altogether. NOT expressing these things clearly, on the other hand, produces less emotion either way. I love the information I get from Robin’s posts, but they don’t cause me to feel the same degree of personal connection to their author.)
I do believe I under-appreciate Robin. However, what it feels like to me is that my personality is, I suspect at a genetic level, more similar to Eliezer’s than to Robin’s. In particular, my impression of Robin is that he is more talented than Eliezer at social kinds of cognition. That does not mean I think Robin is less rational. It means that when I read Eliezer’s work I think “yeah, that’s bloody obvious!”, whereas for some of Robin’s significant contributions I actually have to actively account for my own biases and work to consider his expertise and that of those he refers to.
My suspicion is that people whose minds are similar to Robin’s would be less inclined to get involved in rationalist discourse than the more instinctively individualist. This accounts somewhat for the difference in “me too”s, but if anything it makes Robin more remarkable.
“If you see someone posting an agreement, do you really judge him as a weak-minded sycophant?”
It depends greatly on what they’re agreeing with, and what they’ve said and done before.
The nice thing about karma/voting sites like this one is that they provide an efficient and socially acceptable mechanism for signaling agreement: just hit the upmod button. Nobody wants to read or listen to page after page of “me too”; forcing people to tolerate this would be bad enough to negate the advantage of making agreement visible. Voting accomplishes the same visibility without the irritating side-effects.
There’s a bit of noise, as I sometimes vote up someone I disagree with if they raise an interesting point, and I very, very rarely vote someone down just because I disagree with them.
This “bit of noise” becomes significant on sites with a small number of subscribers, as a +/-2 vote is a “big deal”.
I think that’s a feature, not a bug. What an upvote expresses is nearer to “you should listen to this guy” than to “I agree with this guy”, but I think the former is more useful information.
There should be an emotional display of how many upvotes a post got.
Numbers are, well, too numbery for that.
Either a smiley with an ever-growing smile,
or a balloon that grows bigger and bigger (for posts that really get way too upvoted, the balloon could explode into colorful bright carnival paper, or candy, or Brad Pitt, or Russian Redheads...)
OK, balloon or smiley, who is with me?
I like the idea, but they seem kind of gimmicky. (thinking of LW’s comments section, it would be hard to give another icon the kind of prominence we want, without making it too big). How about a green/red bar, like the one on YouTube?
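A ratio bar like YouTube’s is simple enough to sketch. This is a hypothetical helper, not LessWrong’s actual code; the neutral-bar choice for unvoted comments is my own assumption:

```python
def approval_fraction(upvotes, downvotes):
    """Fraction of the bar to render green; the rest is red.

    A comment with no votes yet shows a neutral half-and-half bar
    rather than dividing by zero.
    """
    total = upvotes + downvotes
    if total == 0:
        return 0.5
    return upvotes / total
```

The display then just scales a green segment to `approval_fraction(...)` of the bar’s width, giving readers the agreement/disagreement ratio at a glance without a raw number.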
I must admit, I think I do find myself going into Vulcan mode when posting on LW. I find myself censoring very simple social cues—expressions of gratitude, agreement, emotion—because I imagine them being taken for noise. I think I’m going to make an effort to snap myself out of this.
Same here. It’s very natural for me to thank people when they say or do something awesome, to encourage promising newbies, and to express my agreement when I do agree, but I got the impression that such things are generally frowned upon here, so I found myself suppressing them.
Actually, I didn’t mind that much—the power of the ideas discussed here far outweighs these social inconveniences, and I can easily live with that. But personally, I would prefer to be able to express my agreement and gratitude without spending too many calories worrying about my tribal status.
(Of course we’ll need to keep the signal/noise ratio in check, but I’ll post my ideas on that in a separate comment).
Two thoughts.
In any relationship where I have influence, I expect to get more of what I model.
For example, in a community where I have influence, I expect demonstrating explicit support to push community norms towards explicit support, and demonstrating criticism to push norms towards criticism.
This creates the admittedly frustrating situation where, if a community is too critical and insufficiently supportive, it is counterproductive for me to criticize that. That just models criticism, which gets me more criticism; the more compelling and powerful my criticism, the more criticism I’ll get in return.
If a community is too critical and insufficiently supportive, I do better to model agreement as visibly and as consistently as I can, and to avoid modeling criticism. For example, to criticize people privately and support them publicly.
In any relationship where I have influence, I expect to get more of what I reward.
If a community is too critical and insufficiently supportive, I do well to be actively on the lookout for others’ supportive contributions and to reward them (for example: by praising them, by calling other people’s attention to them, and/or by paying attention to them myself). I similarly do well to withhold those rewards from critical contributions.
Voted up. (Explicit support and rewards, ahoy!)
Heh, it seems like this post has primed me for agreement, and I upvoted a lot more comments than I usually do. And it looks like many others did this as well—look at the upvote counts! I was reading and voting with Kibitzer on, and was surprised to see the numbers.
(Have I just lowered my status by signaling that I’m susceptible to priming?)
Nah, you’ve raised it, by signaling that you’re honest. At least, that’s how it would work among true rationalists (as opposed to anti-irrationalists). ;-)
They surprised me too. (I actually felt the urge to use an unnecessary exclamation point there, the priming’s made me so enthusiastic...)
And I think that the status gained from the fact that you noticed being primed probably outweighed any lost due to us being told it happened. Though now that we’ve noticed it, we need to decide which frequency of upvoting we should be using so we can avoid the effect.
This article seems to model rational discourse as a cybernetic system made of two opposite actions that need to be balanced:
Agreement / support of shared actions
Disagreement / criticism
Agreement and disagreement are not basic elements of a statement about base reality, they’re contextual facts about the relation of your belief to others’ beliefs. Is “the sky is blue” agreement or dissent? Depends on what other people are saying. If they’re saying it’s blue, it’s agreement. If they’re saying it’s green, it’s dissent. Someone might disagree with someone by supporting an action, or agree with a criticism of what was previously a shared story. When you have a specific belief about the world, that belief is not made of disagreement or agreement with others, it’s made of constrained conditional anticipations about your observations.
This error seems likely related to using a synagogue fundraiser as the central case of a shared commitment of resources, rather than something like an assurance contract! There’s a very obvious antirational motive for synagogue fundraisers not to welcome criticism—God is made up, and a community organized around the things its members would genuinely like to do together wouldn’t need to invoke fictitious justifications. Rational coordination should be structurally superior, not just the same old methods but for a better cause.
Insofar as there’s something to be rescued from this post, it’s that establishing common knowledge of well-known facts is underrated, because it helps with coordination to turn mutual knowledge into common knowledge so everyone can rely on everyone else in the community acting on that info. But that also recommends blurting out, “The emperor’s naked!”.
There’s also the problem that sometimes people say stuff that’s off-topic and not helpful enough to be worth it—but compressing the complexity of that problem down to managing the level of agreement vs criticism is substituting an easier but unhelpful task in place of a more difficult but important one.
In hindsight, a norm against criticizing during a fundraiser, when there is always a fundraiser, leads to a community getting scammed by people telling an incoherent story about an all-powerful imaginary guy, just like they did in the synagogue example.
Many points that are both new and good. Like prase, and like a selection of other fine LW-ers with whom I hope to be agreeing soon, I think your post is awesome :)
One root of the agreement/disagreement asymmetry is perhaps that many of us aspiring rationalists are intellectual show-offs, and we want our points to show everyone how smart we are. Status feels zero-sum, as though one gains smart-points from poking holes in others’ claims and loses smart-points from affirming others’ good ideas. Maybe we should brainstorm some schemas for expressing agreement while adding intellectual content and showing our own smarts, like “I think your point on slide 14 is awesome. And I bet it can be extended to new context __”, or “I love the analogy you made on page 5; now that I read it, I see how to take my own research farther...”
Related: maybe we feel self-conscious about speaking if we don’t have anything “new” to add to the conversation, and we don’t notice “I, too, agree” as something new. One approach here would be to voice, not just agreement, but the analysis that’s going into each individual’s agreement, e.g. “I agree; that sounds just like my own experience trying to get an atheists club started”, or “I’m adopting these beliefs now, because I trust Eliezer’s judgment here, but I have little confirming evidence of my own, so don’t double-count my agreement as new evidence”. Voicing the causal structure of our agreement would:
Give us practice seeing how others navigate evidence and Aumann-type issues;
Expose us to others’ evidence;
Guard against information cascades (assuming honesty in those participating);
Let us affirm our identities as smart rationalists, while we express agreement. :)
I’ve often wrestled with this myself, and hesitated to comment for just this reason.
Me too.
Me too!
Me too.
Me too
I would encourage you to make this a front-page post if you have the time. I think these thoughts and strategies are positive, rational, and necessary group-building skills for any long-term group that pursues rationalist goals. Or maybe it should be in the community guidelines (do these exist? I imagine the Sequences as extended community guidelines) so most new members read them over.
“If I agree, why should I bother saying it? Doesn’t my silence signal agreement enough?”
That’s been my non-verbal reasoning for years now! Not just here: everywhere. People have been telling me, with various degrees of success, that I never even speak except to argue. To those who have been successful in getting through to me, I would respond with, “Maybe it sounds like I’m arguing, but you’re WRONG. I’m not arguing!”
Until I read this post, I wasn’t even aware that I was doing it. Yikes!
The fact is that there is a strong motive to disagree: either I change my opinion, or you do.
On the other hand, the motives for agreeing are much more subtle: there is an ego boost, and I can influence other people to conform. Unless I am a very influential person, these two reasons matter for the group as a whole, but not much individually.
Which leads us to a similar problem with elections, and why economists don’t vote.
Anyway, there is a nice analogy with physics: the electromagnetic force is much stronger than gravity, but at large scales gravity is much more influential. (Which is kind of obvious, and made me wonder why no one pointed this out on this post before.)
BRAVO, Eliezer! Huzzah! It’s about time!
I don’t know if you have succeeded in becoming a full rationalist, but I know I haven’t! I keep being surprised / appalled / amused at my own behavior. Intelligence is way overrated! Rationalism is my goal, but I’m built on evolved wetware that is often in control. Sometimes my conscious, chooses-to-be-rationalist mind is found to be in the kiddie seat with the toy steering wheel.
I haven’t been publicly talking about my contributions to the Singularity Institute and others fighting to save us from ourselves. Part of that originates in my father’s attitude that it is improper to brag.
I now publicly announce that I have donated at least $11,000 to the Singularity Institute and its projects over the last year. I spend ~25 hours per week on saving humanity from Homo Sapiens.
I say that to invite others to JOIN IN. Give humanity a BIG term in your utility function. Extinction is Forever. Extinction is for … us?
Thank you, Eliezer! Once again, you’ve shown me a blind spot, a bias, an area where I can now be less wrong than I was.
With respect and high regard,
Rick Schwall, Ph.D.
Saving Humanity from Homo Sapiens™ :-|
Cool!
Just curious: what do you do for 25 hours a week to save humanity from itself?
Mostly, I study. I also go to a few conferences (I’ll be at the Singularity Summit) and listen. I even occasionally speak on key issues (IMO), such as (please try thinking WITH these before attacking them. Try agreeing for at least a while.):
“There is no safety in assuring we have a power switch on a super-intelligence. That would be power at a whole new level. That’s pretty much Absolute Power and would bring out the innate corruption / corruptibility / self-interest in just about anybody.”
“We need Somebody to take the dangerous toys (arsenals) away.”
“Just what is Humanity up to that requires 6 Billion individuals?”
All of that is IN MY OPINION. <-- OK, the comments to this post showed me the error of my ways. I’m leaving this here because comments refer to it.
Edited 07/14/2010 because I’ve learned since 2009-09 that I said a lot of nonsense.
I’m not sure what this was supposed to add, especially with emphasis. Whose opinion would we think it is?
I’ve been told that my writing sounds preachy or even religious-fanatical. I do write a lot of propositions without saying “In my opinion” in front of each one. I do have a standard boilerplate that I am to put at the beginning of each missive:
First, please read this caveat: Please do not accept anything I say as True.
Ever.
I do write a lot of propositions, without saying, “In My Opinion” before each one. It can sound preachy, like I think I’ve got the Absolute Truth, Without Error. I don’t completely trust anything I have to say, and I suggest you don’t, either.
Second, I invite you to listen (read) in an unusual way. “Consider it”: think WITH this idea for a while. There will be plenty of time to refute it later. I find that, if I START with, “That’s so wrong!”, I really weaken my ability to “pan for the gold”.
If you have a reaction (e.g. “That’s WRONG!”), please gently save it aside for later. For just a while, please try on the concept, test drive it, use the idea in your life. Perhaps you’ll see something even beyond what I offered.
There will be plenty of time to criticize, attack, and destroy it AFTER you’ve “panned for the gold”. You won’t be missing an opportunity.
Third, I want you to “get” what I offered. When you “get it”, you have it. You can pick it up and use it, and you can put it down. You don’t need to believe it or understand it to do that. Anything you BELIEVE is “glued to your hand”; you can’t put it down.
-=-= END Boilerplate
In that post, I got lazy and just threw in the tag line at the end. My mistake. I apologize. I won’t do that again.
With respect and high regard,
Rick Schwall
Saving Humanity from Homo Sapiens (playing the game to win, but not claiming I am the star of the team)
This only makes it worse, because you can’t excuse a signal. (See rationalization, signals are shallow).
Also: just because you believe you are not fanatical, doesn’t mean you are not. People can be caught in affective death spirals even around correct beliefs.
Vladimir_Nesov wrote on 11 September 2009 08:34:32AM:
This only makes what worse? Does it make me sound more fanatical?
Please say more about “you can’t excuse a signal”. Did you mean I can’t reverse the first impression the signal inspired in somebody’s mind? Or something else?
OK I’ll start with a prior = 10% that I am fanatical and / or caught in an affective death spiral.
What do you recommend I do about my preachy style?
I appreciate your writings on LessWrong. I’m learning a lot.
Thank you for your time and attention.
With respect and high regard,
Rick Schwall, Ph.D.
Saving Humanity from Homo Sapiens (seizing responsibility, even if I NEVER get on the field)
I suggest trying to determine your true confidence on each statement you write, and use the appropriate language to convey the amount of uncertainty you have about its truth.
If you receive feedback that indicates that your confidence (or apparent confidence) is calibrated too high or too low, then adjust your calibration. Don’t just issue a blanket disclaimer like “All of that is IN MY OPINION.”
OK.
Actually, I’m going to restrict myself to just clarifying questions while I try to learn the assumed, shared, no-need-to-mention-it body of knowledge you fellows share.
Thanks.
I can’t help but think that those activities aren’t going to do much to save humanity. I don’t want to send you into an existential crisis or anything but maybe you should tune down your job description. “Saving Humanity from Homo Sapiens™” is maybe acceptable for Superman. It might be affably egotistical for someone who does preventive counter-terrorism re: experimental bioweapons. “Saving Humanity from Homo Sapiens one academic conference at a time” doesn’t really do it for me.
Plus wishing for all people to be under the rule of a god-like totalitarian sounds to me like the best way to destroy humanity.
Jack wrote on 09 September 2009 05:54:25PM:
I don’t wish for it. That part was inside parentheses with a question mark. I merely suspect it MAY be needed.
Please explain to me how the destruction follows from the rule of a god-like totalitarian.
Thank you for your time and attention.
With respect and high regard,
Rick Schwall, Ph.D.
Saving Humanity from Homo Sapiens (seizing responsibility, even if I NEVER get on the field)
Maybe some Homo Sapiens would survive, humanity wouldn’t. Are the human animals in 1984 “people”? After Winston Smith dies is there any humanity left?
I can envision a time when less freedom and more authority is necessary for our survival. But a god-like totalitarian pretty much comes out where extinction does in my utility function.
IIRC, Winston Smith doesn’t die; by the end, his spirit is completely broken and he’s practically a living ghost, but alive.
Oh. My mistake. When you wrote, “Plus wishing for all people to be under the rule of a god-like totalitarian sounds to me like the best way to destroy humanity.”, I read:
[Totalitarian rule… ] … [is] … the best way to destroy humanity, (as in cause and effect.)
OR maybe you meant: wishing … [is] … the best way to destroy humanity
It just never occurred to me you meant, “a god-like totalitarian pretty much comes out where extinction does in my utility function”.
Are you willing to consider that totalitarian rule by a machine might be a whole new thing, and quite unlike totalitarian rule by people?
Jack wrote on 09 September 2009 05:54:25PM :
I hear that. I wasn’t clear. I apologise.
I DON’T KNOW what I can do to turn humanity’s course. And, I decline to be one more person who uses that as an excuse to go back to the television set. Those activities are part of my search for a place where I can make a difference.
… but not acceptable from a mere man who cares, eh?
(Oh, all right, I admit, the ™ was tongue-in-cheek!)
Skip down to END BOILERPLATE if and only if you’ve read version v44m
First, please read this caveat: Please do not accept anything I say as True.
Ever.
I do write a lot of propositions, without saying, “In My Opinion” before each one. It can sound preachy, like I think I’ve got the Absolute Truth, Without Error. I don’t completely trust anything I have to say, and I suggest you don’t, either.
Second, I invite you to listen (read) in an unusual way. “Consider it”: think WITH this idea for a while. There will be plenty of time to refute it later. I find that, if I START with, “That’s so wrong!”, I really weaken my ability to “pan for the gold”.
If you have a reaction (e.g. “That’s WRONG!”), please gently save it aside for later. For just a while, please try on the concept, test drive it, use the idea in your life. Perhaps you’ll see something even beyond what I offered.
There will be plenty of time to criticize, attack, and destroy it AFTER you’ve “panned for the gold”. You won’t be missing an opportunity.
Third, I want you to “get” what I offered. When you “get it”, you have it. You can pick it up and use it, and you can put it down. You don’t need to believe it or understand it to do that. Anything you BELIEVE is “glued to your hand”; you can’t put it down.
-=-= END BOILERPLATE version 44m
I think we may have different connotations. I’m going to reluctantly use an analogy, but it’s just a temporary crutch. Please drop it as soon as you get how I’m using the word ‘saving’.
If I said, “I’m playing football,” I wouldn’t be implying that I’m a one-man team, or that I’m the star, or that the team always loses when I’m not there. Rigorously, it only means that I’m playing football.
However, it is possible to play football for the camaraderie, or the exercise, or to look good, or to avoid losing. A person can play football to win. Regardless of the position played. It’s about attitude, commitment, and responsibility SEIZED rather than reluctantly accepted.
I DECLARE that I am saving humanity from Homo Sapiens. That’s a declaration, a promise, not a description subject to True / probability / False. I’m playing to win.
Maybe I’ll never be allowed to get on the field. I remember the movie Rudy, about Dan Ruettiger. THAT is what it is to be playing football in the face of being a little guy. That points toward what it is to be Saving Humanity from Homo Sapiens in the face of no evidence and no agreement.
You could give me a low probability of ever making a difference. But before you do, ask yourself, “What will this cause?”
It occurs to me that this little sub-thread beginning with “Mostly, I study.” illustrates what Eliezer was pointing out in “Why Our Kind Can’t Cooperate”.
“Some things are worth dying for. Yes, really! And if we can’t get comfortable with admitting it and hearing others say it, then we’re going to have trouble caring enough—as well as coordinating enough—to put some effort into group projects. You’ve got to teach both sides of it, “That which can be destroyed by the truth should be,” and “That which the truth nourishes should thrive.” ”
You, too, can be Saving Humanity from Homo Sapiens. You start by saying so.
The clock is ticking.
With respect and high regard,
Rick Schwall, Ph.D.
Saving Humanity from Homo Sapiens (seizing responsibility, even if I NEVER get on the field)
Two observations:
In American culture, when you give money to a charity, you aren’t supposed to tell people. Christian doctrine frowns heavily on that, and we are all partly indoctrinated with that doctrine. That’s why no one sent their “yes” response to the list.
You just wrote a post with 22 web links, and 19 of them were to your own writings. I think that says more about why we can’t cooperate than anything else in the post.
Far from being a negative aspect of the post, the self-linking is a key element of Eliezer’s effort to build a common vocabulary for rationalists. I’ve personally found them extremely helpful for reminding myself of the context of the words, when I’ve forgotten. They’re basically footnotes.
How can we cooperate if we don’t even speak the same language?
It’s better to have those links than not to have them. It’s a bit as if Eliezer were writing a large, hypertext book that we are writing footnotes in. But the lack of links to the writings of other people shows a lack of engagement and a self-preoccupation that smart people tend to have. Too often, when we ask others for co-operation, we really mean “get behind my ideas and my agenda”.
Cooperation involves compromise. It involves participating in the critique of those ideas. It requires, as a prerequisite, believing that others are smart enough to look at the same evidence and see things that you missed. In a forum like this, actual interest in cooperation is evidenced by writing relatively short posts, and then responding at length to many of the comments; rather than by writing extremely long posts, and then making a few short responses to comments.
I link to myself because I know what I have written.
Maybe you should read something written by somebody else sometime.
This is an unhelpful comment that did not contribute to the conversation, and I interpret it as an attack. Instead of attacking, why not engage EY on why he thinks it is so important to link to what he has written rather than what other people have written?
Any time I get the urge to use a “witty” one-liner, I instead ask for the person’s reasoning, perspective, and logic that led them to their conclusion.
First let me say that I do not think that attacks are by their very nature impermissible, and if you do, how dare you put “witty” in scare quotes? That’s just flat-out unkind.
Anyway, it’s a little hard for me to defend my comments of two years ago against attack, because I no longer remember what prompted me to make them. I will do my best to reconstruct my mental state leading up to the comment I made.
I don’t think I was necessarily on PhilGoetz’s side when I read his comment. I think I agreed, and still agree, with Technologos. But when I read the Wise Master’s response to it, it didn’t sit right with me. It read like an attempt to fight back against attack with anything that came to hand, rather than an attempt to seek truth. Surely, I must have felt, if the Wise Master were thinking clearly, he would see that unfamiliarity with the works of others is not an excuse, but in fact the entire problem. I feel that I wanted to communicate this insight. I chose the form that I did probably because it was the first one that came to mind. I hang out on some pretty rough and tumble internet forums, described by one disgruntled former poster as “geek bevis[sic] and buthead[sic] humour[sic]”. Sharp, witty-without-the-scare-quotes one-liners are built into my muscle memory at this point, and I view a well-executed burn as having aesthetic value in and of itself. I dunno, there is something to be said for short, elegant responses to provoke thought, rather than long plodding walls of text.
Anyway, that’s my reasoning, perspective, and logic. I hope you found this enlightening.
“Witty” was describing my own remark, as in: the remarks I hold back may not actually be witty. I was not trying to reference your remark, though in retrospect that inference does seem easy to draw, so I apologize for communicating sloppily.
Attacks that do not forward the conversation are not useful. If the attacker does not expose the logic and data behind their attack, then the person being attacked has no logic or data to pick apart and respond to, and has no reason to believe that the attacker is earnest in seeking the truth.
Your attack against Nominull was, in fact, stronger and less ambiguous than Nominull’s.
The logic behind the point was actually quite obvious, which is not to say I would have presented it in this context. As Perplexed points out, sometimes there are benefits to taking the effort to know what other people have written. (Incidentally, I upvoted both Eliezer and Phil and left Nominull alone.)
Nominull’s comment, discourteous or not, furthered the actual conversation while yours did not (and nor did mine). So that isn’t the deciding factor here of why your kind of attack is different from Nominull’s kind. I think the difference in perception is that you are responding to provocation, which many people perceive as a whole different category—but that can depend which side you empathise with.
You use the terms “stronger” and “less ambiguous” when I made no claim of weaker or more ambiguous. If you are implying that I was untruthful in the first statement you quoted from me, that is a misinterpretation on your part.
The logic behind why Nominull values EY linking to and quoting philosophical works is not obvious to me. Nor is it obvious what Nominull’s mental model is of why EY has not been linking to and quoting philosophical works (as of the 2009 comment). Without making that mental model clear and pointing out supporting evidence, I do not see how it is useful.
I do not see anyone in this conversation denying that there are benefits to this. I cannot tell if you have a deeper point.
That does not fit how I view my response. It seems to me that the conversation could have taken a much different and more productive route right after EY’s comment, and Nominull’s comment discouraged it. I gave the alternative of engaging EY on “why he thinks it is so important to link to what he has written rather than what other people have written,” which I thought would lead to a more productive conversation. I want to encourage productive conversation if I am going to be a member of the LessWrong community.
I disagree. It is a very appropriate response to Eliezer’s flip dismissal of Goetz’s quite sincere (and to my mind, good) suggestions.
Eliezer is, of course, very well-read for a man of his age, but he is actually a bit parochial given the breadth of his ambitions and the authoritative, didactic writing style. His credibility, his communication ability, his fundraising, and even his ideas could probably benefit if he made a conscious effort to make his writing a bit more scholarly.
I understand that Eliezer is both very busy and very prolific, but I thought that his excuse (that he cited himself so much only for reasons of convenience (or laziness)) was much too dismissive of Phil’s arguments—in large part because I think his excuse is quite likely the truth.
With only a sentence and without back and forth conversation do you have the ability to pull out flippant intent from:
I do not know EY so I can not assign myself a high probability of doing so. In truth I subconsciously assigned a high probability that Nominull was in the same boat as me, in other words I jumped to conclusions. Do you assign yourself a high probability of determining EY’s intent from the above? If so please share if you can.
I can imagine EY’s statement made with helpful intent (I could have made that statement with helpful intent); responding to it as if it were made with unhelpful intent, without evidence, does not seem rational or helpful to me.
I think you are attaching too much importance to inferring the intent (flippant vs helpful) of Eliezer’s one-line response to several dozen lines of discussion, and attaching too little importance to assessing the tone. In any case, the dictionary definition of flippant:
seems to be about tone, rather than intent. Eliezer’s comment qualifies as flippant. Nominull’s response was also flippant by this definition. This matching tone strikes me as appropriate—which is exactly what I said.
At the point where Eliezer made his comment, he was being mildly criticized. His flippant comment, which I think was exactly truthful, carried the subtext that he was not particularly interested in discussing those criticisms at that time. He is totally within his rights sending that message. The criticism was mild, and formulating a serious and thoughtful response to the criticism is not something he was required to do. He could have just ignored it. He chose not to.
Sometimes clever, conversation-stopping responses don’t stop conversations. Particularly when they are a little bit rude. Eliezer got a clever and rude response back. And for almost two years, everyone was satisfied with that ending.
I think there is a high probability that lack of further comments is just due to the propensity not to post in old conversations.
I figured that if the sequences and in-post links are to be taken seriously, then the comments should be too. Old comments should not be treated as if they were preserved in carbonite; they are living arguments.
You can replace intent with tone and I would stand by that point. I could make the same remark without being disrespectful, shallow, or lacking in seriousness, and without levity.
By your description, Eliezer makes a true but rude remark and receives a rude response back, and this is “appropriate.” I do not see how a rude response to what is believed to be a rude comment is productive; it does not bring any logic or new data to the table.
This example did.
Are you replying to this?
It is long past time for chastisement, if it was ever required.
I respond to a similar comment here.
It is not about chastisement, it is about the people, like me, who come and read it later.
You seem to be remarkably willing to assert how your comments should be interpreted with respect to intent, meaning and social implications. Yet you do not seem to have paid Nominull that same courtesy.
Well, I know what my intent is, and I know what I want my social implications to be. It makes sense that I try to communicate them. I accept that Nominull hangs “out on some pretty rough and tumble internet forums” and did not have unproductive intentions. I have not claimed that Nominull had unproductive intentions.
An example of impoliteness is needed if you want to continue this conversation.
The observation about American culture (which applies to Australian culture too) is a good one.
I don’t agree that the 19 links paint such a negative picture. In fact, three external links in a single post is remarkable.
In hindsight, the problem with your fundraiser was obvious. There were two communications channels: one private channel for people who contributed, and one channel for everyone else. Very few people will post a second message after they’ve already posted one, so the existence of the private channel prevented contributors from posting on the mailing list. Removing all the contributors from the public channel left only nay-sayers and an environment that favored further nay-saying. The fix would be to merge the two channels: publish the messages received from contributors, unless they request otherwise.
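The selection effect described above can be sketched with a toy simulation (the member count and probabilities here are my own invented assumptions, not data from the actual appeal): because donors use a private channel, only critics ever appear publicly.

```python
import random

def visible_posts(members, p_donate, p_post_if_critic, rng):
    """Simulate which messages appear on the public mailing list.

    Donors use the private channel (e.g. PayPal), so their support
    never shows up publicly; only non-donors ever post objections.
    """
    posts = []
    for _ in range(members):
        if rng.random() < p_donate:
            continue  # donation is invisible to the list
        if rng.random() < p_post_if_critic:
            posts.append("objection")
    return posts

# With 10% quiet donors and a few vocal critics, every visible
# message is negative, even though a tenth of the audience supported
# the appeal.
posts = visible_posts(1000, 0.10, 0.05, random.Random(0))
print(f"{len(posts)} public posts, all objections")
```

The point of the sketch is that the public channel is a filtered sample: merging the channels (publishing donor messages) changes what the sample contains, not what the population believes.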
I agree with everything you said in your talk, and I think you’re brilliant.
I’ve noticed that I am often hesitant to publicly agree with comments and posts here on LessWrong because often agreement will be seen as spam. While upvotes do count as something, it is far easier to post a disagreement than to invent an excuse to post something that mostly agrees. This can be habit forming.
Comparing, say, Less Wrong with a Mensa online discussion group, I’ve noticed that my probability of disagreement is far lower with the self-identified rationalists than with the self- and test-identified generic smart people. The levels of Dark Side Argument are almost incomparable. I have begun disengaging from Dark debates wherever convenient, purely to form better habits of agreement.
In fact, agreement is a sort of spam—it consumes space and usually doesn’t bring new thoughts. When I imagine a typical conference where the participants are constantly running out of time, visualising the 5-minute question interval consumed by praise for the speaker helps me a lot in rationalising why the disagreement culture is necessary. Not that it would be the real reason I would flee screaming out of the room; I would probably do so even if time weren’t a problem.
When I read the debates at e.g. daylightatheism.org I am often disgusted by how much agreement there is (and it is definitely not a Dark Side blog). So I think I am strongly immersed in the disagreement culture. But, all cultural prejudices aside, I will probably always find a discussion consisting of “you are brilliant” type statements extraordinarily boring.
It doesn’t have to bring new thoughts to serve a purpose. A chorus of agreement is an emotional amplifier.
Not only that, it becomes a glue that binds people together, the more agreement the stronger the binding (and the more that get bound). At least that is the analogy that I use when I look at this; we (rationalists) have no glue, they (religions) have too much.
Agreement does not need to be contentless and therefore spam. It can fill in holes in the argument, take a different perspective (helping a different segment of the reading population), add specific details to the argument that were glossed over, and much more.
It sounds like you have a problem with lack of content more than you do with agreement. I am sure you would find contentless disagreement just as boring.
Agreements are a lot more often contentless, as a rule. When disagreeing, people feel motivated to include some reasons, and even if they don’t, the one who was disagreed with feels motivated to ask for the reasons. But in principle you are right that my objections don’t primarily aim at agreement.
I think you are focusing too much on discussions.
There are other activities where success can depend heavily on not acting alone, and it is in those types of activities (such as fundraising, seizing political power, reforming institutions, etc) that rationalist-types are disadvantaged by their lack of coordination.
I agree!
You didn’t read Eliezer’s post very carefully, did you? You need more practice in agreement and conformity. There are a limited number of “right” answers out there. It’s alright to agree on them, when they are found.
I’m going to agree with the people saying that agreement often has little to no useful information content (the irony is acknowledged). Note, for instance, that content-free “Me too!” posts have been socially contraindicated on the internet since time immemorial, and content-free disagreement is also generally verboten. This also explains the conference example, I expect. Significantly, if this is actually the root of the issue, we don’t want to fight it. Informational content is a good thing. However, we may need to find ways to counteract the negative effects.
Personally, having been somewhat aware of this phenomenon, when I’ve agreed with what someone said I sometimes try to contribute something positive; a possible elaboration on one of their points, a clarification of an off-hand example if it’s something I know well, an attempt to extend their argument to other territory, &c.
In cases like the fundraising one, where the problem is more individual misperception of group trends, we probably want something like an anonymous poll—i.e., “Eliezer needs your help to fund his new organization to encourage artistic expression from rationalists. Would you donate money to this cause?”, with a poll and a link to a donation page. I would expect you’d actually get a slightly higher percentage voting “yes” than actually donating, though I don’t know if that would be a problem. You’d still get the same 90% negative responses, but people would also see that maybe 60% said they would donate.
“A slightly higher percentage”? More like: no correlation.
I recall that McDonalds were badly burned by “would you X”. Would people buy salads? oh god yes, they’d love an opportunity to eat out and stick to their diets. Did they buy salads, once McDonalds had added them? Nope.
Similarly, I recall that in the last US election the Ron Paul Blimp campaign was able to get a lot more charitable pledges than real-world money, and pretty quickly died from underfunding.
Someone[1] must be buying those salads, as McDonalds is keeping them on the market, and given that food spoils, it doesn’t make financial sense for them to keep offering a product which doesn’t sell.
1: I’ve actually tried the McDonalds salad 3 times. The first time, it was very (and surprisingly) good. The other two times it was mediocre.
You can keep small stocks of an item, and it can have positive effects beyond direct revenues, e.g. if families with one dieting or vegetarian member don’t avoid McDonald’s because that person can eat a salad.
I think the positive effect is that they can say that they sell salads, people can convince themselves they intend to buy the salad, and so on.
I saw a study recently that said that the mere presence of a salad on the menu increases people’s consumption. I deeply doubt that fast food chains were surprised by that result.
From the nature of the study, it’s not even about convincing themselves they intend to buy a salad. By merely seriously having considered the option, they give themselves virtue points which offset the vice of more consumption.
Or rather, another positive effect. These explanations aren’t mutually exclusive.
That being said, nice insight.
Yes, excellent point that should be underlined for the readers here.
People’s metaknowledge is very poor. Their knowledge about themselves, especially so.
You make an excellent point, I was not really thinking clearly there.
However, I will note that my intent was not that it should produce an accurate prediction of donations, but to better gauge public opinion on the idea to counteract the tendency to agree silently but disagree loudly.
I’ve worked for a number of nonprofits, and in our analysis of direct mailings we found we got a better response from a mailing that included one of two things:
A single testimonial mentioning the amount that some person gave
Some sort of comment about the group average (listeners are making pledges of $150 this season)
This is one of the reasons that some types of nonprofits choose to create levels of giving; my guess is that this games these common ideas about giving levels by creating artificial norms of participation. Note: you can base your levels on actual evidence and not just round numbers! (Plus inflation, right?)
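As a rough sketch of the “base your levels on actual evidence” idea (the percentile choices and donation figures below are illustrative assumptions, not real fundraising data), one could derive suggested giving levels from past donations rather than picking round numbers:

```python
def giving_levels(donations, percentiles=(50, 75, 90, 99)):
    """Suggest giving levels at chosen percentiles of past donations.

    Uses the nearest-rank percentile method; the particular
    percentiles are an assumption for illustration, not doctrine.
    """
    data = sorted(donations)
    levels = []
    for p in percentiles:
        # nearest-rank index, clamped to the valid range
        idx = max(0, min(len(data) - 1, round(p / 100 * len(data)) - 1))
        levels.append(data[idx])
    return levels

# Hypothetical past donations in dollars
past = [10, 20, 25, 25, 50, 50, 75, 100, 150, 500]
print(giving_levels(past))  # four suggested levels drawn from the data
```

Each level then lands where real donors actually cluster, instead of at an arbitrary $25/$50/$100 ladder.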
We also generally found that people respond well to the idea of a matching donation (which is rational since your gift is now worth more).
I do believe that anonymous fundraising removes information about community participation that is very valuable to potential donors. Part of making a donation is responding to the signal that you are not the only one sending a check to a hopeless office somewhere.
Anonymous polls might be a good idea, but especially among rational types, you might want the individual testimony: you get to see some of the reasoning!
I think the synagogue in the story picked up on these ideas and used them effectively. But the nice thing about raising money through direct mailing and the internet is that you can run experiments!
The reason I specified anonymity was to reduce the likelihood of a social stigma attached to not donating. The idea of pressuring people into an otherwise voluntary gesture of support makes me very uncomfortable.
However, I may be overcautious on that aspect, and I defer to your greater experience with fundraising. Do you have any other empirical observations about response to fundraising efforts? You could consider submitting an article on the subject, either as it relates to instrumental rationality, or for the benefit of anyone else who might want to organize a rationality-related non-profit.
I think your caution is warranted; the fact that you can see the other people in the synagogue who don’t stand up could be very hurtful to the nonparticipants. Highlighting individual donors or small groups is a good way to show public support without giving away too much information about your membership’s participation as a whole.
If you are interested in more rigorous studies (we did ours in Excel), you might want to try Dean Karlan’s “Does Price Matter in Charitable Giving? Evidence from a Large-Scale Natural Field Experiment”: http://karlan.yale.edu/p/MatchingGrant.pdf
I will try to dig up some other papers online.
Amongst a group of people who know and interact with each other regularly, such as a synagogue, who has the means to donate money and who does not would be an extremely obvious piece of information to the members of that group.
There are actually two actions taken here by members, they either do not donate or they donate a certain amount. To the members of the group the amount donated is as much of an information channel as the choice to donate or not to donate. Those who donate a lot and are rich may cause offence by donating less than expected, those who donate a little when there is no expectation may gain esteem.
You are proposing a situation in which an individual donates less than expected by such a magnitude that it seriously affects people’s esteem for them. This is possible, although given social pressures unlikely. It can occur at all because the magnitude of the donation combined with the wealth of the individual and the support for the cause are all easily calculable. Magnitude of donation is known, wealth is implied by clothes, status symbols or frank discussions about income, and support for the place of worship is expected to be high.
In a group of rational people donating to support a cause, the options are donating, not donating, and voicing support or criticism. You have established reasonable grounds for why people do not arbitrarily voice support, and for why people voice criticism. But let’s look at the amount donated and imagine it were done publicly: is there a state where people can be hurt by donation or non-donation?
Even if the amount donated and a reasonable guess at the wealth of the individual are available, the amount donated can still vary by the level of support the person feels for the cause. There is no level of donation that is ‘incorrect’ just as there is no arbitrary ‘correct’ level of support. Therefore the situation is most unlikely to cause social harm to the individual donating, or those who do not donate as there is a rational reason for any level of donation.
To be honest, I suspect a lot of those folks, and I include myself here, were anti-collectivists first.
In my own mind, the emotive rule “I might follow, but I must never obey” is built over a long childhood war and an eventual hard-fought and somewhat Pyrrhic victory. I know it’s reversed stupidity, but it’s hard to let go.
What good rationalist techniques are there for changing such things?
Ask “what’s bad about obeying?” Imagine a specific concrete instance of obeying, and then carefully observe your automatic, unconscious response. What bad thing do you expect is going to happen?
Most likely, you will get a response that says something about who you are as a person: your social image, like, “then I’ll be weak”. You can then ask how you learned that obeying makes someone weak… which may be an experience like your peers teasing you (or someone else) for obeying. You can then rationally examine that experience and determine whether you still think you have valid evidence for reaching that conclusion about obedience.
Please note, however, that you cannot kill an emotional decision like this without actually examining your own evidence for the proposition, as well as against it. The mere knowledge that your rule is irrational is not sufficient to modify it. You need to access (and re-assess) the actual memor(ies) the rule is based on.
Recognizing that “I might follow, but I must never obey” is an emotional rule is already a good first step, much better than trying to rationalize it.
I’ve recognized that same pattern in myself—a bad feeling in response to the idea of following / obeying even when it’s an objectively good idea to do so. I imagined an “asshole with a time machine” who would follow me around, observe what I did (buy a ham sandwich for lunch, enter a book store...), go back in time a few seconds before my decision and order me to do it.
Once I realized I was much angrier at this hypothetical asshole than was reasonable, I tried getting rid of that anger. I guess I succeeded (the idea doesn’t bug me as much), but I don’t know if it means I won’t have any more psychological resistance to obeying. I am probably still pretty biased towards individualism / giving more value to my opinion just because it’s my own, but I’d like to find ways to get rid of that.
“What good rationalist techniques are there for changing such things?”
Carefully examining the potential reasons for going along with someone else. Emile’s point below is a very good one.
‘Obedience’ implies that we must go along with what someone says we should do. It’s much better to think (hopefully accurately) that we’re choosing to do something which coincidentally is also what someone has suggested. We don’t need to choose to obey in order to go along.
Carefully examining the justifications for actions is also important. If there are compelling reasons to do X, the fact that we’ve been “ordered” to do X is irrelevant, just as being ordered NOT to do X is.
Unfortunately, “doing what they say” tends to make people believe they are the top dog.
And a few too many people are quick to get this idea, reluctant to abandon it, and abuse it to no end.
So, pragmatically, sometimes it’s better to find another way to get the desired result, or at least delay action to diminish that bad association.
Really? I’ve always thought my similar rule was embedded in my DNA.
Stating that you are not obeying, and that you are taking a particular course of action because it is a good idea, seems to work for/help some people.
Realize that the anti-collectivist pull is an exploitable weakness: it leaves you vulnerable to people who are perceptive and want to harm you. Some would say that you should just avoid getting people to want to harm you; however, a consequence is that you would have to avoid standing up to people who harm the world, people you care for, and sometimes yourself.
Wait a second, now we’re using Jews trying to run a synagogue as an example of a group who cooperate and don’t always disagree with each other for the sake of disagreeing? Your synagogue must have been very different from mine. You never heard the old “Ten Jews, ten opinions—or twenty if they’re Reform” joke? Or the desert island joke?
I also agree with everyone. In particular, I agree with Cameron and Prase that it’s tough to just say “I agree”. I agree with ciphergoth that I worry that I’m sucking up to you too much. I agree with Anna Salamon that we tend to be intellectual show-offs. I agree with Julian that many of us probably started off with a contrarian streak and then became rationalists. I agree with Jacob Lyles that there’s a strong game theory element here—I lose big if rationalists don’t cooperate, I win a little if we all cooperate under Eliezer’s benevolent leadership, but to a certain way of thinking I win even more if we all cooperate under my benevolent leadership and there’s no universally convincing proof that cooperating under someone else is always the highest utility option. And I agree with practically everything in the main post.
One thing I don’t agree with: being ashamed of strong feelings isn’t a specifically rationalist problem. It’s a broader problem with upper/middle class society. Possibly more on this later.
I’ve never been dragged to any other religious institution, so I wouldn’t have any other example to use. I expect these forces are much stronger at Jesus Camp or the Raelians. But yes, even Jewish institutions still coordinate better than atheist ones.
Granting that the jokes you refer to are generally accurate, wouldn’t that make the synagogue a better example for a rationalist Cat Herd than some other religious organization where people “think” in lockstep with the Dear Leader? The synagogue would represent an example of a group of people who manage to cooperate effectively even with a high level of dissensus (neologism for the opposite of consensus). Which, as I understand it, is the goal Eliezer is aiming for in this post.
And you win the most when the group is so rational that almost anyone could serve as the benevolent leader.
The group trait required is not rationality—it is other traits that also share positive affect.
I was not asserting that rationality is all that you need to make the most efficient group, if that was what you are getting at.
I think we agree that, starting with groups A and B both with the same skills, if group A is more rational it will also be the more effective group.
My argument was that as the ability of the group to act rationally increases, the utility difference between being a member and being the leader decreases, as the group becomes better at judging the leader’s value.
I personally see public disagreements as a way to refine the intent of the person under the spotlight rather than a social display of individualism. When I disagree with someone it is not for the sake of disagreeing but rather to refine what I may think is a good idea that has a few weak points. I do this to those I respect and agree with because I hope that others will do this to me.
I think the broader question here is not whether we should encourage widespread agreement in order to create cohesion, but rather whether we can ensure that the tenets we collectively agree on are correct conclusions. That is, in my mind, the main difference between rationalists and what I would call tribalists: in general, the majority agree on tenets which have serious rational flaws, or they simply do not contest said tenets. Otherwise, if we do follow the leader and there are true flaws in that particular modus, we will never discover them.
I agree that it is hard to start a movement based on this; however, I see this as a positive attribute. Just as the (flawed) idea of representative democracy was supposed to slow government to a crawl, the rationalist mindset slows groupthink and confirmation bias to a near halt. It is a strong movement, however slow.
On ‘What Do We Mean By “Rationality”?’ when you said “If that seems like a perfectly good definition, you can stop reading here; otherwise continue.”—I took your word for it and stopped reading. But apparently comments aren’t enabled there.
You have significantly altered my views on morality (Views which I put a GREAT deal of mental and emotional effort into.) I suspect I am not alone in this.
I think there’s a fine line between tolerating the appearance of a fanboy culture, and becoming a fanboy culture. The next rationalist pop star might not be up to the challenge.
And for that matter, how many times would you want to risk being subjected to agreement without succumbing? It’s not wireheading, but people do get addicted.
Agreement and disagreement look more like skills that we can develop (and can improve at both of) than ends of a continuum (where moving toward one means moving away from the other).
I mean, we can reduce the apparent and actual extent to which we’re an Eliezer fan-club or echo chamber, and improve our armor against the emotional and social pressures that “we all think the Great Leader is perfect” tends to form. And we can also, simultaneously, improve our ability to endorse good ideas even when someone else already said that idea, and to actually coordinate to get stuff done in groups.
I think Eli has succeeded in attracting enough very clever people to the community that this is not a massive danger. If Robin, you, Carl S, Yvain, Nick T, Nick Hay, Vladimir N, etc all disagreed with him for the same reason, and he didn’t retract, he would look silly.
“[A] survey of 186 societies found, belief in a moralising God is indeed correlated with measures of group cohesion and size.”—God as Cosmic CCTV, Dan Jones
I’m not sure if this was at work in your fundraiser, but I know I tend to see exhortations from others that I give to charitable causes/nonprofits as attempts at guilt tripping. (I react the same way when I’m instructed to vote, or brush my teeth twice a day, or anything else that sounds less like new information and more like a self-righteous command.) For this reason, I try to keep quiet when I’m tempted to encourage others to give to my pet charity/donate blood/whatever, for fear that I’ll inspire the opposite reaction and hurt my goal. I don’t always succeed, but that’s an explanation other than a culture of disagreement for why some people might not have contributed to the discussion from a pro-giving position.
Good points.
This may be why very smart folks often find themselves unable to commit to an actual view on disputed topics, despite being better informed than most of those who do take sides. When attending to informed debates, we hear a chorus of disagreement, but very little overt agreement. And we are wired to conduct a head count of proponents and opponents before deciding whether an idea is credible. Someone who can see the flaws in the popular arguments, and who sees lots of unpopular expert ideas but few ideas that informed people agree on, may give up looking for the right answer.
The problem is that smart people don’t give much credit to informed expressions of agreement when parceling out status. The heroic falsifier, or the proposer of the great new idea, gets all the glory.
There is no guarantee of a benevolent world, Eliezer. There is no guarantee that what is true is also beneficial. There is no guarantee that what is beneficial for an individual is also beneficial for a group.
You conflate many things here. You conflate what is true with what is right and what is beneficial. You assume that these sets are identical, or at least largely overlapping. However, unless a galactic overlord designed the universe to please homo sapien rationalists, I don’t see any compelling rational reason to believe this to be the case.
Irrational belief systems often thrive because they overcome the prisoner’s dilemmas that individually rational action creates on a group level. Rational people cannot mimic this. The prisoner’s dilemma and the tragedy of the commons are not new ideas. Telling people to act in the group interest because God said so is effective. It is easy to see how informing people of the costs of action, because truth is noble and people ought not to be lied to, can be counter-effective.
Perhaps we should stop striving for the maximum rational society, and start pursuing the maximum rational society which is stable in the long term. That is, maybe we ought to set our goal to minimizing irrationality, recognizing that we will never eliminate it.
If we cannot purposely introduce a small bit of beneficial irrationality into our group, then fine: memetic evolution will weed us out and there is nothing we can do about it. People will march by the millions to the will of saints and emperors while rational causes wither on the vine. Not much will change.
Robin made an excellent post along similar lines, which captures half of what I want to say:
http://lesswrong.com/lw/j/the_costs_of_rationality/
I’ll be writing up the rest of my thoughts soon.
Sorry, I can’t find the motivation to jump on the non-critical bandwagon today. I had the idea about a week ago that there is no guarantee that truth = justice = prudence, and that is going to be the hobby-horse I ride until I get a good statement of my position out, or read one by someone else.
I one-box on Newcomb’s Problem, cooperate in the Prisoner’s Dilemma against a similar decision system, and even if neither of these were the case: life is iterated and it is not hard to think of enforcement mechanisms, and human utility functions have terms in them for other humans. You conflate rationality with selfishness, assume rationalists cannot build group coordination mechanisms, and toss in a bit of group selection to boot. These and the referenced links complete my disagreement.
Thanks for the links, your corpus of writing can be hard to keep up with. I don’t mean this as a criticism, I just mean to say that you are prolific, which makes it hard on a reader, because you must strike a balance between reiterating old points and exploring new ideas. I appreciate the attention.
Also, did you ever reply to the Robin post I linked to above? Robin is a more capable defender of an idea than I am, so I would be intrigued to follow the dialog.
If you are rational enough, perceptive enough, and EY’s writing is consistent enough, at some point you will not have to read everything EY writes to have a pretty good idea of what his views on a matter will be. I would bet a good sum of money that EY would prefer his readers gain this ability than read all of his writings.
“However, unless a galactic overlord designed the universe to please homo sapien rationalists, I don’t see any compelling rational reason to believe this to be the case.”
Except that we are free to adopt any version of rationality that wins. Rationality should be responsive to a given universe design, not the other way around.
“Irrational belief systems often thrive because they overcome the prisoner dilemmas that individual rational action creates on a group level. Rational people cannot mimic this.”
Really? Most of the “individual rationality → suboptimal outcomes” results assume that actors have no influence over the structure of the games they are playing. This doesn’t reflect reality particularly well. We may not have infinite flexibility here, but changing the structure of the game is often quite feasible, and quite effective.
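As a toy sketch of this point about restructuring games (the payoff numbers and the size of the fine below are my own illustrative assumptions, not anything from the thread): adding an enforcement penalty for defection can flip the dominant strategy of a one-shot Prisoner’s Dilemma from defection to cooperation.

```python
# Toy sketch: changing a game's structure changes the rational move.
# Payoffs are for the row player; the numbers are illustrative assumptions.

def best_response(payoffs):
    """Return the dominant move: the move that does at least as well as the
    alternative against every possible opponent move (None if neither
    move dominates)."""
    moves = ("cooperate", "defect")
    for mine in moves:
        other = moves[1 - moves.index(mine)]
        if all(payoffs[(mine, theirs)] >= payoffs[(other, theirs)]
               for theirs in moves):
            return mine
    return None

# Classic one-shot Prisoner's Dilemma payoffs.
pd = {
    ("cooperate", "cooperate"): 3, ("cooperate", "defect"): 0,
    ("defect", "cooperate"): 5, ("defect", "defect"): 1,
}

# Restructured game: a social norm (enforcement mechanism) fines defection by 3.
fined = {k: (v - 3 if k[0] == "defect" else v) for k, v in pd.items()}

print(best_response(pd))     # defection dominates in the unmodified game
print(best_response(fined))  # cooperation dominates once defection is fined
```

The point is not the specific numbers but that the “individual rationality leads to suboptimal outcomes” result is a property of a particular payoff structure, and the structure itself is often changeable.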
For example, we could establish a social norm that compulsive public disagreement is a shameful personal habit, and that you can’t be even remotely considered “formidable” if you haven’t gotten rid of the urge to seek status by pulling down others.
I disagree.
I don’t think your argument applies to jacoblytes’ argument. Jacoblytes claims that there is no reason for “rational” to equal “(morally/ethically) right”, unless an intelligent designer designed the universe in line with our values.
So it’s not about winning versus losing. It’s that unless the rules of the game are set up just in a certain way, then winning may entail causing suffering to others (e.g. to our rivals).
My writing in these comments has not been perfectly clear, but Nebu you have nailed one point that I was trying to make: “there is no guarantee that morally good actions are beneficial”.
The Christian morality is interesting, here. Christians admit up front that following their religion may lead to persecution and suffering. Their God was tortured and killed, after all. They don’t claim that what is good will be pleasant, as the rationalists do. To that degree, the Christians seem more honest and open-minded. Perhaps this is just a function of Christianity being an old religion and having the time to work out the philosophical kinks.
Of course, they make up for it by offering infinite bliss in the next life, which is cheating. But Christians do have a more honest view of this world in some ways.
Maybe we conflate true, good, and prudent because our “religion” is a hard sell otherwise. If we admitted that true and morally right things may be harmful, our pitch would become “Believe the truth, do what is good, and you may become miserable. There is no guarantee that our philosophy will help you in this life, and there is no next life”. That’s a hard sell. So we rationalists cheat by not examining this possibility.
There is some truth to the Christian criticism that Atheists are closed-minded and biased, too.
In that case, believing in truth is often non-rational.
Many people on this site have bemoaned the confusing dual meanings of “rational” (the economic utility maximizing definition and the epistemological believing in truth definition). Allow me to add my name to that list.
I believe I consistently used the “believing in truth” definition of rational in the parent post.
I agree that the multiple definitions are confusing, but I’m not sure that you consistently employ the “believing in truth” version in your post above.* It’s not “believing in truth” that gets people into prisoners’ dilemmas; it’s trying to win.
*And if you did, I suspect you’d be responding to a point that Eliezer wasn’t making, given that he’s been pretty clear on his favored definition being the “winning” one. But I could easily be the one confused on that. ;)
“In that case, believing in truth is often non-rational.”
Fair enough. Though I wonder whether, in most of the instances where that seems to be true, it’s true for second-best reasons. (That is, if we were “better” in other (potentially modifiable) ways, the truth wouldn’t be so harmful.)
“Except that we are free to adopt any version of rationality that wins.”
There’s only one kind of rationality.
I agree, but that one kind is able to determine an optimal response in any universe, except one where no observable event can ever be reliably statistically linked to any other, which seems like it could be a small subset, and not one we’re likely to encounter except
Certainly, there are any number of world-states or day-to-day situations where a full rigorous/sceptical/rational and therefore lengthy investigation would be a sub-optimal response. Instinct works quickly, and if it works well enough, then it’s the best response. But obviously, instinct cannot self-analyze and determine whether and in what cases it works “well enough,” and therefore what factors contribute to it so working, etc. etc.
Passing the problem of a gun jamming the Rationality-Function might return the response, “If the gun doesn’t fire, 90% of the time, pulling the lever action will solve the problem. The other 10% of the time, the gun will blow up in your hand, leading to death. However, determining to reasonable certainty which type of problem you’re experiencing, in the middle of a firefight, will lead to death 90% of the time. Therefore, train your Instinct-Function to pull the lever action 100% of the time, and rely on it rather than me when seconds count.”
Does this sound like what you mean by a “beneficial irrationality”?
Also: I propose that what seems truly beneficial, seems both true and beneficial, and what seems beneficial to the highest degree, seems right. To me, these assertions appear uncontroversial, but you seem to disagree. What about them bothers you, and when will we get to see your article?
No. That’s not really what I meant at all. Take nationalism or religion, for example. I think both are based on some false beliefs. However, a belief in one or the other may make a person more willing to sacrifice his well-being for the good of his tribe. This may improve the average chances of survival and reproduction of an individual in the tribe. So members of irrational groups out-compete the rational ones.
In the post above Eliezer is basically lamenting that when people behave rationally, they refuse to act against their self-interest, and damn it, it’s hurting the rational tribe. That’s informative, and sort of my point.
There is some evidence that we have brain structures specialized for religious experience. One would think that these structures could only have evolved if they offered some reproductive benefit to animals becoming self-aware in the land of tooth and claw.
In the harsh world that prevailed up until just the last few centuries, religion provided people comfort. Happy people are less susceptible to disease, more ambitious, and generally more successful. Atheism has always been as true as it is today. However, I wouldn’t recommend it to a 13th century peasant.
This is not true a priori. That is my point. My challenge to you, Eliezer, and the other denizens of this site is simply: “prove it”.
And I offer this challenge especially to Eliezer. Eliezer, I am calling you out. Justify your optimism in the prudence of truth.
Disprove the parable of Eve and the fruit of the tree of knowledge.
I don’t know ’bout no Eve and fruits, but I do know something about the “god-shaped hole”. It doesn’t actually require religion to fill, although it is commonly associated with religion and religious irrationalities. Essentially, religion is just one way to activate something known as a “core state” in NLP.
Core states are emotional states of peace, oneness, love (in the universal-compassion sense), “being”, or just the sense that “everything is okay”. You could think of them as pure “reward” or “satisfaction” states.
The absence of these states is a compulsive motivator. If someone displays a compulsive social behavior (like needing to correct others’ mistakes, always blurting out unpleasant truths, being a compulsive nonconformist, etc.) it is (in my experience) almost always a direct result of being deprived of one of the core states as a child, and forming a coping response that seems to get them more of the core state, or something related to it.
Showing them how to access the core state directly, however, removes the compulsion altogether. Effectively, wireheading directly to the core state internally drops the reward/compulsion link to the specific behavior, restoring choice in that area.
Most likely, this is because it’s the unconditional presence of core states that’s the evolutionary advantage you refer to. My guess would be that non-human animals experience these core states as a natural way of being, and that both our increased ability to anticipate negative futures, and our more-complex social requirements and conditions for interpersonal acceptance actually reduce the natural incidence of reaching core states.
Or, to put it more briefly: core states are supposed to be wireheaded, but in humans, a variety of mechanisms conspire to break the wireheading… and religion is a crutch that reinstates it externally, by exploiting the compulsion mechanism.
Appropriately trained rationalists, on the other hand, can simply reinstate the wireheading internally, and get the benefits without “believing in” anything. (In fact, application of the process tends to surface and extinguish left-over religious ideas from childhood!)
Explaining the actual technique would require considerably more space than I have here, however; the briefest training I’ve done on the subject was over an hour in length, although the technique itself is simple enough to be done in a few minutes. A little googling will find you plenty on the subject, although it’s extremely difficult to learn from the short checklist versions of the technique you’re likely to find on the ’net.
The original book on the subject, Core Transformation, is somewhat better, but it also mixes in a lot of irrelevant stuff based on the outdated “parts” metaphor in NLP—“parts” are just a way of keeping people detached from their responses, and that’s really orthogonal to the primary purpose of the technique, which is really sort of a “stack trace” of active unconscious/emotional goals to uncover the system’s root goal (and thereby access the core state of “pure utility” underneath).
Anyone who knows how to access their core states has the ability to call up mystical states of peace, bliss, and what-not, at any moment they actually need or want them. An external idea isn’t necessary to provide comfort—the necessary state already exists inside of you, or religion couldn’t possibly activate it.
Reply here.
“Eliezer is basically lamenting that when people behave rationally, they refuse to act against their self-interest, and damn it, it’s hurting the rational tribe. That’s informative, and sort of my point.”
So if that’s Eliezer’s point, and it’s also your point, what is it that you actually disagree about?
I take Eliezer to be saying that sometimes rational individuals fail to co-operate, but that things needn’t be so. In response, you seem to be asking him to prove that rational individuals must co-operate—when he already appears to have accepted that this isn’t true.
Isn’t the relevant issue whether it is possible for rational individuals to co-operate? Provided we don’t make silly mistakes like equating rationality with self-interest, I don’t see why not—but maybe this whole thread is evidence to the contrary. ;)
My point isn’t exactly clear, for a few reasons. First, I was using this post opportunistically to explore a topic that has been on my mind for a while. Secondly, Eliezer makes statements that sometimes seem to support the “truth = moral good = prudent” assumption, and sometimes not.
He’s provided me with links to some of his past writing, I’ve talked enough, it is time to read and reflect (after I finish a paper for finals).
True, but that “one kind of rationality” might not be what you think it is. Conchis’s point holds if you use “rationality” = “everything should always be taken into account, if possible” or something similar.
A “rational” solution to a problem should always take into account those “but in the real word it doesn’t work like that...”. Those are part of the problem, too.
For example, a political leader acting “rationally” will take into account the opinion of the population (even if they are “wrong” and/or give too much importance to X) if it can affect his results in the next election. The importance of this depends on his goal (a position of power? the well-being of the population?) and on the alternative if he is not elected (will my opponent’s decisions do more harm?).
I completely agree with this post. It’s heartwarmingly and mindnumbingly agreeable; I would like to praise it and applaud it forever and ever. On a more serious note, it personally feels like you’re not contributing anything to the conversation if you’re just agreeing. For example, if I read a hundred posts here, I don’t feel compelled to add a comment saying just “I agree.” to each of them, because it feels like it doesn’t add to the substance of the issue. So I’m totally doing what the post predicts.
I have really read a hundred or so posts and I think the majority of them are brilliant, and to be honest I don’t think there have been any posts by Eliezer in particular that I have read which I would’ve considered really bad. I think they’re great. I’m not even stretching it very far when I’m saying that they’ve changed my look on life.
Personally I truly hope that whoever comes up with the first functional AIs has concern for the future of humanity, takes the time and trouble to ponder moral issues, and is responsible about it in general. In fact I believe the world would be a little better place if a larger number of our leaders and political decision makers demonstrated similar interests, for example if they could sit down every now and then and contemplate the meaning of altruism or caring for one another, or stop by and read a post on this website.
So this seems like the perfect post to just agree with and add the following suggestion to the conversation: If it feels like you don’t want to just agree to something, even if you do really agree, try and find a way to do that while also making a contribution, additional detail or insight. :)
Awesome posts!
What exactly is the problem with this? The more knowledge I have, the smaller a weighting I place on any new piece of data.
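This intuition matches simple Bayesian updating: the more evidence already accumulated, the less one new data point moves the estimate. A minimal sketch (the coin-flip setting and the uniform Beta(1, 1) prior are my own illustrative assumptions, not anything from the comment):

```python
# Sketch of diminishing sensitivity to new data under Bayesian updating.

def posterior_mean(heads, tails):
    # Posterior mean of a coin's bias under a uniform Beta(1, 1) prior.
    return (heads + 1) / (heads + tails + 2)

for n in (2, 20, 200):
    before = posterior_mean(n, n)       # estimate after n heads, n tails
    after = posterior_mean(n + 1, n)    # one more head arrives
    print(n, round(after - before, 4))  # the shift shrinks as n grows
```

The per-datum weight falls roughly like 1/n, which is the behavior described above: rational, not a bias, as long as the prior evidence was itself sound.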
Seems so: https://aeon.co/essays/why-humans-find-it-so-hard-to-let-go-of-false-beliefs
I am probably so rational because I have ASD; people with ASD don’t include emotions in their reasoning: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4532317/#:~:text=In%20typical%20individuals%2C%20alexithymia%20was,fear%2C%20disgust%2C%20and%20anger. And I am great at logic! I have aphantasia, meaning almost no mental imagery: even if I understand some piece of logic perfectly, I couldn’t write you an exercise for it, and I can hardly give examples, since I have next to zero imagination. That may also be why I am so rational: https://iai.tv/articles/why-humans-are-the-most-irrational-animals-auid-1239&utm_source=reddit&_auid=2020 But I am somehow very creative, perhaps because of ADHD? And I have overexcitabilities and am a very emotional and sensitive person! Emotions are tied to creativity! I get excited easily, but also bored easily!
Also, I have a bad memory, so I forget things and have to constantly reinvent and revise them from zero. But I am very logical, critical, and rational! This is so interesting: I saw some guy who is almost exactly the way I am, except for the ASD and ADHD! So interesting!
I also didn’t know anything until I was 21 and just played games; then I read something like a million articles in a year about free will. I always try to take everything from zero, and I have ADHD, so I see it from all perspectives. I also revise my views; even when I am wrong, I learn so much from that! Being wrong is as important for learning as being right: you see that something doesn’t work and get information from that. Whoever tries only to be right never really learns why something is bad and why something is good!
We don’t know anything for sure except “I think, therefore I am.” And maybe not even that: what if I am dead and alive at the same time, as in QM? (Though isn’t that only an analogy? I have no idea whether it can be extended to human death.) Unless I saw all permutations of everything, how could I know? Even the most brilliant people who ever lived succumbed to logical fallacies or said total nonsense; as an anecdote, consider what Musk said about Covid… Even the most brilliant people have so much to learn. Unless you have read every book that exists and know everything from every perspective, you may still know nothing… Even the most brilliant people know 0.00000000...1%. Stephen Hawking said he couldn’t even keep up with new studies in his own area…
I like this statement: “that those of us too dazed by the job of living to exert an extra mental effort”. Since bias is defined as a cognitive shortcut, you can really never think too much… I experience the most profound existential boredom, and I live a quiet life doing nothing all day but analyzing and reflecting on everything. The problem is that you need to slow down and enjoy something; otherwise, I have found, it is harmful to intelligence! And life today is too quick. Nevertheless, I don’t have words for people who get their news only from Facebook...
To be honest, even I (probably one of the most rational people in the world; it would take long to explain) have noticed that I sometimes let bias creep into tangential judgments (especially when I am low on energy; I wonder whether it could affect me in the future, if an idea gets anchored). But I don’t base my reasoning on them. Each is just an idea that needs to be investigated, and honestly I am obsessive about every permutation… I even obsess over being criticized by other people in hypotheticals, not even kidding...
Here is my system for how I rank knowledge claims:
- logical conclusion based on axioms
- axiom
- logical conclusions based on empirical evidence
- empirically verified observations/facts (mostly inductive!): 1. with large consensus in the scientific community, like evolution
- theory (1. tested by an experiment but not fully accepted in scientific circles, it depends; 2. number of citations and journal impact factor; 3. not yet tested by an experiment)
- hypothesis
- assumption
- intuition
- idea
- “I don’t know shit, so I don’t have an opinion yet” (random thoughts). I don’t understand why some people have strong opinions about things they know nothing about. Simply don’t talk if you don’t know; ask, or study it...
- assumption of materialism etc.
I usually start from 0 and I leave things open and revise my opinions all the time...
So one has to be aware of the structure on which he bases his arguments, and of the precision with which things can be known (which is admittedly very difficult). Even scientists can be naturally uncritical, because their work is based on assumptions and verified inductively most of the time! https://medium.com/starts-with-a-bang/is-the-inflationary-universe-a-scientific-theory-not-anymore-905615723b0f
Or consider scientists’ inability to understand philosophy, although I suspect they simply don’t know anything about it, given their confident opinions. People who are very intelligent (who know a lot) may think they know even about things outside of their area! https://qz.com/627989/why-are-so-many-smart-people-such-idiots-about-philosophy/
Smart people constantly question everything, because they realize how much they don’t know! So you should always question yourself, also heavy criticism is excellent. You learn so much from being criticized, as self-reflection/analysis is difficult—even for experts! Unfortunately that is one thing which is banned, criticize anything bam you are banned, can’t even open bank account, or have job...
Like in academia: e.g. a student claims that there are differences between men and women, and a commission investigates it. What the hell? Even though she wasn’t expelled, the very fact that they were investigating academic free speech… Or conservative professors have to have a security detail to lecture, and this is not even new: https://www.wizbangblog.com/2009/04/18/conservative-speakers-need-body-guards-when-speaking-on-college-campuses/
Lefties are arguably even more aggressive and more likely to commit a crime… It is because the rich want to erase the difference between genders, since they have invested a lot of money in transgender clinics; a transformation can cost as much as $150,000 for one person… https://readingjunkie.com/2021/07/09/big-pharma-deploys-brown-shirts-to-protect-the-trans-industrial-complex/
I also recommend https://fabiusmaximus.com/, an excellent site where economists and fairly well-known people write (certainly no no-names) and cite many scientific studies for their claims!
We already live under absolute totalitarianism and corporatocracy, and when China becomes a superpower it will be even more grim! Did you check the latest news? Their economy grew about 8%… Also, corporations want to create their own municipal governments; it was only postponed because of COVID, but in December ’21 they will consider it in congress!
Even the governor said it has to be studied first, so it is not realistic for a couple of years yet, thank god! But corporations will want to push this through! Which would be an unadulterated post-apocalyptic world in its truest form… https://www.marketwatch.com/story/in-nevada-desert-blockchains-llc-aims-to-be-its-own-municipal-government-01613252864
Big corporations like Coca-Cola have a say in drafting legislation, etc.
And there is also elitism in scientific circles, which doesn’t help. https://bigthink.com/culture-religion/inequality-mathematics
Do you know what is funny? I stated on scientificforums.net that science is largely a social endeavor and that what counts as true is determined mostly by what is accepted at the top of scientific circles. But I got instantly flamed by the elitists there! Science is largely a social endeavor, because scientists are people like anyone else!
De Ropp describes the Science Game as the pursuit of “knowledge,” and then outlines many of the ways in which this game, too, is often corrupted, muddied and tainted (by players with whom De Ropp sounds intimately familiar). Says De Ropp, “Much of it is mere jugglery, a tiresome ringing of changes on a few basic themes by investigators who are little more than technicians with higher degrees . . . Anything truly original tends to be excluded by that formidable array of committees that stands between the scientist and the money he needs for research. He must either tailor his research plans to fit the preconceived ideas of the committee or find himself without funds. Moreover, in the Science Game as in the Art Game there is much insincerity and a frenzied quest for status that sparks endless puerile arguments over priority of publication. The game is played not so much for knowledge as to bolster the scientist’s ego.”
E.g. if you don’t have a reputation, you won’t get published in prominent journals, no matter what your arguments are, because again, in the end people decide based on emotions… https://bigthink.com/experts-corner/decisions-are-emotional-not-logical-the-neuroscience-behind-decision-making
That is not to say that all science is like that, or that this is what science ought to be. But in reality it largely is! And without funding you can’t do anything, and science is largely funded by the private sector; a couple of years back only about 1.41% came from taxes...
Sorry, I probably talk too much, because I see everything in everything, ad infinitum… I used to get like 20 ideas from every other thing, and from each of those another 20, before the depression and the chronic pain...
You’re awesome, Eli. I love the mix of rationality and emotion here. Emotion is a powerful tool for motivating people. We of the Light Side are rightfully uncomfortable with its power to manipulate, but that doesn’t mean we have to abandon it completely.
I recently suggested a rationality “cult” where the group affirmation and belonging exercise is to circle up and have each person in turn say something they disagree with about the tenets of the group. Then everyone cheers and applauds, giving positive feedback. But now I see that this is going too far towards disagreement—better would be for each person to state one area of agreement and one of disagreement with the cult’s principles, or today’s sermon or exercises, and then be applauded.
I think there’s an interesting moral to this anecdote, but I’m not sure it’s the one you expressed.
My conclusion is: rationalists who desire to discard the burdensome yoke of their cultural traditions, linked inextricably as they are to religion, will have to relearn an entirely new set of cultural traditions from scratch. For example, they will need to learn a new mechanism design that allows them to cooperate in donating money to a cause that is accepted as being worthwhile (I think the “ask for money and then wait for people to call out contributions” scheme is damned brilliant).
Here’s an even better one, under the right circumstances:
“Would everyone please stand up for a moment? Thank you. Now, please remain standing if you believe that our organization is doing important things for the good of the world. Terrific, terrific. Okay, please continue to stand if you’re going to make a pledge of at least $X. Fantastic! Now, please continue to stand if you’re going to make a pledge of at least $X*2...”
Of course, it won’t work very well on a room full of non-conformists… you might have trouble getting them to stand in the first place, especially if they know what’s coming.
That only works once, if that much. People don’t like feeling forced and manipulated.
“Right circumstances” includes support for your cause and rapport with your audience, such that most of them don’t feel manipulated. The one time I saw that method used, the speaker already had the audience in the palm of his hand, such that they felt they’d already gotten their money’s worth just from having listened to him. The stand-up/opt-out trick was just to push an already-high expected conversion rate higher.
(An example of how good a rapport he had: early in the presentation, he asked that people please promise to not even attempt to give him any money that day… and several people laughed and shouted “No!”)
Of course, I suppose if you’re that good, the trick is moot. On the other hand, the public approach your synagogue used is equally manipulative… it just builds the conformity pressure more slowly, instead of all at once.
As the old joke says: What do you mean ‘we’, white man?
The real reason ostensibly smart people can’t seem to cooperate is that most of them have no experience with reaching actual conclusions. We train people to make whatever position they espouse look good, not to choose positions well.
What makes a position well-chosen, or more likely to assist in reaching actual conclusions?
The logical structure of the best argument supporting it, the quality of the evidence in that argument, and the extensiveness of that evidence.
Instead of those things, most of us pay attention to rhetoric and status.
Take a look at high school speech and debate organizations, and the things they stress. What development of skills and techniques do their debates encourage?
A good point, and a serious problem. When I was in high school debate (Lincoln-Douglas), I hated the degree to which the competition was really about jargon and citation of overwhelming but irrelevant “evidence.” I think the tipping point was when somebody claimed that teaching religion in public schools would lead to an environmental catastrophe (and even more, it was purely an argument from authority).
At one point, I ran a case that relied on no empirical evidence whatsoever (however abhorrent that may sound here): it was a quasi-Aristotelian argument that if you accept the value in the first premise—I believe it was “knowledge”—then the remainder followed. The whole case was perhaps three minutes long, half the allowed time, and formatted to make the series of premises and conclusions very obvious.
Best I could tell, there was only one weak link in the argument that was easily debatable. I correctly guessed that the people I was debating were more used to listing “evidence” than arguing logic, and most people had absolutely no idea how to handle even clearly stated premises and conclusions.
I was arguing against the position I actually hold, which is why there was still a flaw in the argument, but it won the majority of the debates nonetheless. Sad, more than anything.
This “best argument” idea disregards the danger of one argument against an army: http://lesswrong.com/lw/ik/one_argument_against_an_army/
Perhaps a way to have comments of agreement that can also work as signalling your own smarts would be to say that you agree, and that the best part/most persuasive part/most useful part is X while providing reasons why.
Isn’t the secret power of rationality that it can stand up to review? Religious cults are able to demand extreme loyalty because their people are not presented with alternatives and are not able to question the view they are handed. One of our strengths seems to be discernment and argumentation, which naturally leads to fractious in-fighting. What would we call “withholding criticism for the Greater Good”?
The difference is simply in the critic’s motivation: are they trying to improve the situation, or just trying to avoid the expected outcome of agreement? E.g., are you criticizing charities because you want them to do better, or because you don’t want to shell out the money AND don’t want to admit it? (I’m unashamedly in the “I don’t want to send money to Africa and I don’t care if I have a logical reason for it” camp, and so have no need to make up a bunch of reasons it’s bad.)
If the critic were really interested in improvement, they’d be suggesting improvements or better yet, DOING something about improvement.
“But if you tolerate only disagreement—if you tolerate disagreement but not agreement—then you also are not rational. You’re only willing to hear some honest thoughts, but not others. You are a dangerous half-a-rationalist.”
Excellent point. I agree completely, and have had similar thoughts about the problem with the “skeptic” community myself. upvote
If someone understands the phrase “empirical cluster in personspace,” they probably are who you’re talking about. =)
That was what the first draft said, but I considered it for a few moments and realized that as eloquent statements go, it suffered the unfortunate flaw of not actually being true.
This is very interesting; I have usually refrained from replying because I could not think of anything to say that wasn’t trivial. I will take care to voice agreement in the future where applicable.
Couldn’t you just ask contributors for the right to make their donations public?
The Christian and other ethics often demand that the left hand not know what the right hand is doing. However, you can certainly indicate the sum of donations so far without violating anyone’s privacy.
The commitment of those who do donate may be more inspiring than the excuses of those who do not.
An automated reply system could make a post with the donated amount and unique anonymous user name. That way people reading the counter arguments see people donating between some posts.
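A minimal sketch of that automated-reply idea, assuming a hypothetical `announce_donation` helper (the post format and the sequential donor-numbering scheme are invented for illustration; this is not any mailing-list software’s real API):

```python
import itertools

# Hand out sequential anonymous handles so no donor's name is revealed.
_donor_ids = itertools.count(1)

def announce_donation(amount_usd: float) -> str:
    """Format a public post for one donation: anonymous handle plus amount."""
    handle = f"donor-{next(_donor_ids):04d}"
    return f"{handle} just pledged ${amount_usd:,.2f}. Thank you!"

print(announce_donation(111.11))  # donor-0001 just pledged $111.11. Thank you!
```

Interleaving posts like these between the counter-argument threads would let readers see the donations happening, without publishing anyone’s identity.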
Then clearly your fund-raising drive would have benefited from a mechanism for publicizing and externalizing support.
Charitable organizations commonly use a variety of such methods. The example you gave is just one. If correctly designed the mechanisms do not cause support to be swamped by criticism, and they can operate without suppressing any free thought or speech.
E.g. publishing (with their agreement) the names of donors, the amounts, and endorsements; using that information to solicit from other donors; getting endorsements from respected people; appointing wealthy donors to use their own donations as an example when leading solicitation drives among other wealthy donors etc.
The situation does not seem as dire as you suggest.
And you’d better bet that synagogue fund-raising drives get all the gripes that you received, and more!
Way to go Eliezer, you have my full support! And another great posting, btw!
To some extent, this was discussed in “The Starfish and the Spider”, which is about “leaderless groups”. The book praises the power of decentralized, individualistic cultures (that you describe as “Light Side”). However, it admits that they’re slower and less-well coordinated than hierarchical organizations (like the military, or some corporations).
You’ve outlined some of the benefits (recruitment, coordinated action) of encouraging public agreement and identifying with the group. You’ve also outlined some of the dangers (pluralistic ignorance, etc.).
Possibly the appropriate answer is to create multiple groups, so that each can be a check against the others turning into cults. Possibly even a fractal of groups and subgroups.
I have been thinking about this subject for a while, because I saw the same type of culture of disagreement prevent a group I was a member of from doing anything worthwhile. The problem is very interesting to me because I come from the opposite side of the spectrum, being heavily collectivist. I take pleasure in conforming to a group opinion and being a follower, but I have also nurtured a growing rationalist position for the last few years. So despite my love of being a follower, I often find myself aspiring to a leadership position in order to weld my favored groups into a cohesive whole rather than an un-unified mob. The only solution I have been able to come up with so far is forming a core of beliefs and values which the group can accept without criticism, even if some of the members disagree with some of the parts. This is of course very hard to do.
“Those who had nothing to give, stayed silent; those who had objections, chose some later or earlier time to voice them. That’s probably about the way things should be in a sane human community”
Personally I think that you were speaking to the wrong crowd when trying to fundraise. Or perhaps I should say too wide a crowd. It is like trying to fundraise for tokamak fusion on a mailing list where people are interested in fusion in general. People who don’t believe that tokamaks will ever be stable/usable are duty-bound to try to convince the other people of that, so they won’t waste their money (and it also means less money in the pot for their own projects).
Geek cooperative projects can work, but generally only if there is a mathematical or empirical way to get everyone on the right page, or you have to filter the group you are trying to work with by philosophical position.
With regards to signaling agreement, I think part of the problem is that agreements tend to give little information. If everyone on a certain mailing list said I agree and here is how much money I am donating, I would consider it spam, too much bandwidth for not enough new information… Polls would probably be better, or the organiser of the fund raiser could give running updates (which I believe you did, IIRC).
I have to agree completely.
I don’t have to agree completely. But I choose to.
I also choose to link the donation’s page for the SIAI here.
http://singinst.org/donate
Yes, this felt great… my emotions seem to be in tune with my high-level goals.
Me too!
There’s an easy and obvious coordination mechanism for rationalists, which is just to say they’re building X from science fiction book Y, and then people will back them to the hilt, as long as their reputation and track record for building things without hurting people is solid. Celebrated Book Y is trusted to explain the upsides and downsides of thing X, and people are trusted to have read the book and have the Right Opinions about all the tradeoffs and choices that come with thing X.
So really, it all comes down to the thing that actually powers the synagogue’s annual appeal—the Torah. The Torah has been doing its job for as long as there have been Jews to read it (or recite it from memory before it was written down), so everyone in the community can agree that the Torah and the Talmud and whatever are reasonable, so coordination becomes trivial. The rabbi standing up in front of the congregation has read all the Torah and Talmud there is, everyone knows and agrees on this fact, so the rabbi is trusted to have the best interests of the community at heart. Since the rabbi has the best interests of the community at heart, the expenses that the community has incurred are obviously real and obviously pressing. Since the rabbi hasn’t been going around doing horrible things (he hasn’t, right?), everyone knows that the money will actually be spent on the thing the rabbi says it will be spent on, not, I don’t know, building nuclear weapons to bomb the competing synagogue across town.
I think rationalists have a deeply good sense of responsibility for their actions and opinions. That’s the best thing about them. But I think they also don’t have enough respect for the actions and opinions of other people (particularly Other People With Different Opinions Who Are Not As Smart As Me). That’s the worst thing about them. As worst things go, it’s a pretty minor character flaw; nobody’s eating babies alive, they’re just kind of smug and condescending in a way that is counterproductive.
I think the thing going on with your pledge drive was that people are afraid to be publicly wrong about something that ends up mattering a great deal, and they don’t trust anyone else’s opinion about whether they’re right or wrong. To break through that force field, we need to start trusting our own wisdom literature, which is science fiction and fantasy, to help us solve the unsolvable challenges that Whatever-Is-Out-There keeps putting in front of us. Sure, mistakes will be made, bad books will be written, people will disagree about what the books mean. All that happens with the Torah and the Talmud too. It just doesn’t seem realistic that any one person could act in accordance with most or all the ethical rules laid out in all the science fiction and fantasy books that exist and still end up building something truly evil by coordinating effectively with other people who are also steeped in the science fiction and fantasy traditions. What would that even look like?
I have a modest amount of pair programming/swarming experience, and there are some lessons I have learned from studying those techniques that seem relevant here:
General cooperation models typically opt for vagueness instead of specificity to broaden the audiences that can make use of them
Complicated/technical problems such as engineering, programming, and rationality tend to require a higher level of quality and efficiency in cooperation than more common problems
Complicated/technical problems also exaggerate the overhead costs of trying to harmonize thought and communication patterns amongst the team(s) due to reduced tolerance of failures
With these in mind, I would posit that a factor worth considering is that the traditional models of collaboration simply don’t meet the quality and cost requirements in their unmodified form. It is quite easy to picture a rationalist determining that the cost of forging new collaboration models isn’t worth the opportunity costs, especially if they aren’t actively on the front lines of some issue they consider Worth It.
I agree. I don’t often say I agree for efficiency. You’ve made the point more eloquently than I could and my few sentences in support of you would probably strengthen your point socially, but it wouldn’t improve the argument in some logical sense.
I love signaling agreement when I can do it and be just as eloquent as the writing I’m agreeing with. Famous authors put a lot of work into the blurbs they write recommending their friends’ books, and that work shows. “X is a great summertime romp, full of adventure!” sure is a glowing recommendation, but it’s not that eloquent, and I can tell the author didn’t put much time into writing it. I guess they didn’t think X was worth the time to write a really nice blurb. But when a good author writes an interesting blurb for a book, it gives me very high expectations.
I think this applies to ideas as well.
There’s a lot more of this in anime, I feel. A lot of characters end up trusting someone from the bottom of their hearts, agreeing to follow their vision to the end, and you see whole groups of good guys who are wholeheartedly committed and united behind the same idea. Even main characters often show this trait toward others.
“Yes, a group which can’t tolerate disagreement is not rational. But if you tolerate only disagreement—if you tolerate disagreement but not agreement—then you also are not rational.” Well, agreement may just be the perceived default. If I sit at a talk and find nothing to say (and, mind you, that happens R. A. R. E. L. Y), it means either that I totally agree or that it is so wrong I don’t know where to begin.
Also, your attitude of “we are not here to win arguments, we are here to win,” and your explicit rejection of rhetoric (up to the seemingly ignorant question “Why do people think that mentioning the death of some poor fella buying snake oil is an argument for regulation?”, when bringing it up like that is a rhetorical argument for that side even if it is not a rational one), may be another weakness more or less common among rationalists. There are ways to sway people to your side, not necessarily including direct lies, and still rationalists tend to refuse to use them.
Wow. I don’t identify as a cynic or spock, but of the many articles I have read on Less Wrong since I discovered it yesterday, this one is perhaps the most perspective changing.
It makes me happy that the traits you list as what rationalists are usually thought of as being (disagreeable, unemotional, cynical, loners) are unfamiliar. The rationalists I have grown up with over the past few years of reading this site are both optimistic and caring, along with many other qualities.
Eliezer, I applaud your post. Bravo. I agree.
I’m new to this site and I was compelled to sign up immediately.
There’s not much to add here, but that I hope people appreciate the significance of not shutting off all emotions, much like you argue in this post.
Those who suspect me of advocating my unconventional moral position to signal my edgy innovativeness or my nonconformity should consider that I have held the position since 1992, but only since 2007 have I posted about it or discussed it with anyone but a handful of friends.
I believe rhollerith. I met him the other week and talked in some detail; he strikes me as someone who’s actually trying. Also, he shared the intellectual roots of his moral position, and the roots make sense as part of a life-story that involves being strongly influenced by John David Garcia’s apparently similar moral system some time ago.
Hollerith doesn’t mean he was applying his moral position to AI design since ’92, he means that since ’92, he’s been following out a possible theory of value that doesn’t assign intrinsic value to human life, to human happiness, or to similar subjective states. I’m not sure why people are stating their disbelief.
Good point, Anna: John David Garcia did not work in AI or apply his system of values to the AI problem, but his system of values yields fairly unambiguous recommendations when applied to the AI problem—much more unambiguous than human-centered ways of valuing things.
Off-topic until May, all.
Unfortunately, they can’t consider that you have held the position since 1992 -- all they can consider is that you claim to have done so. You could get your handful of friends to testify, I suppose...
Cyan points out, correctly, that all the reader can consider is that I claim to have held a certain position since 1992. But that is useful information for evaluating my claim that I am not just signaling because a person is less likely to have deceived himself about having held a position than about his motivations for a sequence of speech acts! And I can add a second piece of useful information in the form of the following archived email. Of course I could be lying when I say that I found the following message on my hard drive, but participants in this conversation willing to lie outright are (much) less frequent than participants who have somehow managed to deceive themselves about whether they really held a certain position since 1992, who in turn are less frequent than participants who have somehow managed to deceive themselves about their real motivation for advocating a certain position.
I don’t disagree with the above post—I just wanted to make a pedantic distinction between claims and facts in evidence. (Also, my choice of the pronoun “they” rather than “we” was deliberate.)
I don’t believe you.
Don’t believe my advocacy of the moral position is not really just signaling or don’t believe I’ve held the moral position since 1992?
I don’t know how long you’ve held the position, or much care—I don’t think it’s relevant. But it is signaling, I think, for 2 reasons:
Your public concern with saying it’s not signaling is just a way of signaling;
Claiming a certain timespan of belief is just an old locker room way of saying “I got here first.” Which surely is signaling.
This is the sort of thing that causes unnecessary splintering in groups. I have a very visceral reaction to this sort of signaling (which I would label preening, actually). Perhaps I should examine that.
It is likely the case that rhollerith’s moral position contains at least some element of signalling. His expression thereof probably does too. In fact, there are few aspects of social behavior that could be credibly claimed to be devoid of signalling. That said, these points do not impress me in the slightest.
Yes, public concern surely involves signalling. That doesn’t mean that which is concerned about isn’t also true. Revealing truth is usually an effective form of signalling.
It is completely unreasonable to dismiss claims because they are similar to something that was signalling in the locker room. Even the “I got here first” signalling in said locker room quite often accompanies the signaller, in fact, getting there first.
I suspect that you have not become acquainted with my moral position! If you knew my moral position, you would be more likely to say I am ruining the party by crapping in the punchbowl than to say I am preening. (Preen. verb. Congratulate oneself for an accomplishment).
People are also unwilling to express agreement because they know, and fear, group consensus and the pressure to fit in. Those usually lead to groupspeak and groupthink.
One of the primary messages of the local Powers That Be is that other people’s evaluations should be a factor in your own, that other people’s conclusions should be considered as evidence when you try to conclude. And that’s incompatible with effective rationality, as well as with the techniques needed to prevent a self-reinforcing mob consensus.
It is not only the culture of disagreement at work. When I see “+1”, I wonder what mental process produced it: does the commenter need some attention but have nothing to say? And so, when I want to post “+1”, I don’t, lest someone think the same about me. Usually I try to add some complement to the original post, or a little correction to it along with clear approval of the rest. Something minor, and at the same time, not just “+1”.
There is a way to solve this problem, but it is dangerous. A rationalist can watch a discussion closely, attending not only to the clever thoughts but to the overall effect the discussion has on other watchers, and intervene every time the discussion has the wrong effect. But by doing this, the rationalist turns a rational discussion into a political one.
The only way is to remember the purpose for which the communication takes place. Not every communication is a discussion. And this is the most rational way: a rationalist should make every move knowing the purpose of that move. When we speak about cooperating rationalists, we should also remember that there are common goals and individual goals, and a rationalist should weigh both and each time pick whichever is most important at the moment.
And in the context of donations: what reason does a rationalist have to publish his reasons for not donating? Guilt and an attempt to justify himself? Or maybe an attempt to draw attention: “now look, guys, how clever my thoughts are”? All the reasons I can imagine are individual goals that this “rationalist” considers more important than the common goals of the community. So either this “rationalist” is an enemy of the community, or he is just stupid (the same thing, generally).
I wonder if one person can have a big effect on this sort of thing.
For example, I’ve known charity organizers to publish the number of donors and the total money donated every few days. Even without identifying donors, that does a lot to make people feel less alone.
An alternate explanation: I’ve noticed a trend where rationalists seem more likely to criticize ideas in general. Perhaps a key experience that needs to happen before some people choose to undergo the rigors of becoming a rationalist is a “waking up” after some trauma that makes them err on the side of being paranoid. I have observed that most people without a “wake up” trauma prefer to simply retain optimism bias and tend to conserve thinking resources for other uses. Someone who thinks as much as you do probably does not feel a need to conserve thinking resources, and probably finds this concept ridiculous, but for most people, stamina for how much thinking they can do in a day is a factor—sad, but true. So, a trauma might be needed to make skepticism appeal to people. It may be that rational thought is often implemented as a defense mechanism and this leads them to create strong habits of doing rational thought in ways that tear ideas down without doing a comparable amount of practice in confirming ideas.
In my opinion, I think the solution to this would be to assist them in reaching a point of satiation when it comes to being great at tearing ideas down. If it’s a self-defense mechanism, no amount of brilliant rational appeals will make them give it up. Even if one starts by explaining the risks of tearing ideas down too much, that’s only confusing to the self-defense system, people won’t know what to do with the cognitive dissonance that causes, so they’re likely to reject it. If they feel secure because of a high level of ability with tearing ideas down, they’ll probably be more open to seeing the limitations to that and doing more practice with methods of confirming ideas.
Maybe—but they seem to work together well enough—if you pay them.
Whereas theists will pay tithes to be ordered around.
They war with other theists as well. Cooperation benefits from a shared mission.
Rather than ourselves making the drastic cultural changes that Eli talks about, perhaps it would be more efficient to piggyback on to another movement which is further down that path of culture change, so long as that movement isn’t irrational. See this URL:
http://www.thankgodforevolution.com/node/1711
Check out the rest of the web site if you have time, or better yet, buy and read the book the web site is promoting. As you can see from the URL above, cooperation is an important value in the group.
I have been observing the spiritual practices promoted by this web site for just a few weeks, and already it’s been giving me tremendous personal benefit. My relationship with my wife and kids is better, I have more enthusiasm for life when I get up in the morning, I no longer find doing chores so onerous, it’s much easier for me to refrain from my vices, and I just generally feel more satisfied with the way things are. That’s quite a bit for just a few weeks, and I sense the benefits are going to continue to grow with time so long as I adhere to the spiritual practices.
Even though I support Eli’s non-profit (that can’t be named), I have a very strong urge to give ten times as much money to the group that makes such an immediate and real difference in my life.
The really cool thing, though, is that the group is completely compatible with what Eli is trying to do, and should be able to help the cause rather than hinder it, unless we dismiss the group out of hand because their culture is more like a religion than a group of rationalists.
If you think the material on the web site URL I posted above is in any way irrational, please let me know about it. I’d like to hear what you’re thinking.
This isn’t a comment; it’s an attempted post in which you say in more detail what’s going on over there and which “practices” you’re talking about. It then gets voted up or voted down. In any case, don’t try to do this sort of thing in one comment.
...though I see you don’t have enough karma yet to post; but that’s exactly what we’ve got the system for, eh?
Hrm, overall makes sense. But now, HOW do you suggest, for something here, an online forum, actually doing that sort of thing in the general case without it translating to a whole bunch of people going, effectively, “me too”?
I do remember when, for a certain unnamed organization, you started the “donate today and tomorrow” drive (or whatever you called it, something to that effect). I did post to a certain mailing list both the thoughts that led me to donate and my reactions to that sort of appeal, etc. etc.
In the pursuit of truth it is rational to argue and, at first glance, irrational to agree. The culling of truth proceeds by “leaving be” the material that is correct and modifying (arguing with) the part that is not. (While slightly tangential, it is good to recall that the scientific method can only argue with a hypothesis, never confirm it.)
At a conference where there is a dialogue it is a waste of time to agree, as a lack of argument is already implicit agreement. After the conference, however, the culling of truth further progresses by assimilating and disseminating the correct material. So while it may not be rational to go to the mike and say, “I agree, you are brilliant”, it is a form of true agreement to tell other people that they were brilliant.
Nevertheless, we’re human beings, and by that I mean we’re not entirely rational in the sense of a deterministic computational machine. We care about our interaction with our community, and in this sense it is rational to give encouragement.
I’m a beginner who thinks meta-discussions are fun.
Eliezer is asking about whether we should tolerate tolerance. Let’s suppose—for the sake of argument—that we do not tolerate tolerance. If X is intolerable, then the tolerance of X is intolerable.
So if Y tolerates X, then Y is intolerable. And so on.
Thus, if we accept that we cannot tolerate tolerance, then we also cannot tolerate tolerance of tolerance, and likewise we cannot tolerate tolerance of tolerance of tolerance.
I would think of tolerance as a relationship between X and Y in which Y acquires the intolerability of X.
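The recursion above amounts to computing a closure over the tolerance relation: start with the things that are intolerable in themselves, then repeatedly mark anyone who tolerates something already marked. A minimal sketch, assuming the rule that tolerating an intolerable thing makes you intolerable (the names and the `tolerates` relation here are hypothetical illustrations, not anything from the original discussion):

```python
def intolerable_closure(base, tolerates):
    """Compute the fixed point of the "no tolerance for tolerance" rule.

    base:      set of things intolerable in themselves (the X's).
    tolerates: dict mapping each entity to the set of things it tolerates.
    Returns everything that ends up intolerable under the rule.
    """
    intolerable = set(base)
    changed = True
    while changed:
        changed = False
        for y, targets in tolerates.items():
            # If Y tolerates any intolerable X, Y acquires X's intolerability.
            if y not in intolerable and targets & intolerable:
                intolerable.add(y)
                changed = True
    return intolerable

# X is intolerable; Y tolerates X; Z tolerates Y.
result = intolerable_closure({"X"}, {"Y": {"X"}, "Z": {"Y"}})
# The marking propagates down the whole chain: X, Y, and Z all end up intolerable.
```

The fixed-point loop makes the commenter’s “and so on” explicit: intolerability propagates through every chain of toleration until nothing new can be marked.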
That may be, but I generally find YOUR poetic appeals to make me throw up in my mouth. I read my mother your bit about how amazing it was that love was born out of the cruelty of natural selection, and even she thought it was sappy.
I, on the other hand, nearly started sobbing, so I guess it takes all kinds.
Source?
http://lesswrong.com/lw/sa/the_gift_we_give_to_tomorrow/
I don’t see how individualism can beat out collectivism as long as groups = more power. For individualism to work, each person would have to wield power equal to any group’s.
One view doesn’t need to “beat out” the other; for each societal state, there’s a corresponding equilibrium between individualistic- and group-think (or rather, group-think for varying sizes of groups) as each person weighs the costs and benefits of adherence for them. In a world of individuals, an organized and specialized group of any size “= more power.” Witness sedentary farmers displacing hunter-gatherers. On the other hand, in a world of groups, a rogue individualistic prisoner’s-dilemma-defector is king. Witness sociopaths in corporate structures, or the plots of far too many Star Trek episodes.
The balance of power can shift as individualism becomes a better choice, due to its risks lessening and its rewards increasing, whether because of culture, technology, or extensive debates on websites.