I believe “nice” makes an excellent default, and these arguments are good ones, well presented. Not-niceness is sometimes an effort to signal intelligence, I think; it’s not particularly effective at that.
It’s important, though, to recognize when niceness doesn’t pay the utilitarian bill:
Trolls. In any environment, people interested only in getting a reaction occasionally wander in. These people should be banned and ignored. Shunning is not a particularly nice thing, but even polite feedback is feedback. Do not feed these.
The ineducable. Suppose a person asserts that in the Monty Hall problem the odds are 50-50 whether you switch or not. One or two efforts to educate nicely are good. Additional efforts are wasted and unproductive.
Evil. Deliberately dishonest people are far rarer on internet fora than is generally alleged, but they exist. Shaming people who are genuinely bad actors is fine with me, thanks.
I’ve probably missed several. Assistance welcomed.
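For what it’s worth, the Monty Hall claim in case 2 is easy to check by simulation rather than argument. A quick Python sketch (the function name is mine) bears out that switching wins about two-thirds of the time:

```python
import random

def monty_hall_trial(switch: bool) -> bool:
    """One round of Monty Hall; returns True if the player wins the car."""
    doors = [0, 1, 2]
    car = random.choice(doors)
    pick = random.choice(doors)
    # The host opens a door that hides a goat and isn't the player's pick.
    opened = random.choice([d for d in doors if d != pick and d != car])
    if switch:
        # Switching means taking the one remaining unopened door.
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == car

trials = 100_000
stay = sum(monty_hall_trial(False) for _ in range(trials)) / trials
switch = sum(monty_hall_trial(True) for _ in range(trials)) / trials
print(f"stay: {stay:.3f}, switch: {switch:.3f}")  # roughly 0.333 vs 0.667
```

Of course, a person who is genuinely ineducable on this point will not be moved by a simulation either, which is rather the point of case 2.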
Overall, I’d say LW is a particularly civil corner of the internet, and I’ve spent time at some uncivil places. The other side of pro-niceness posting is that assuming unkindness or bad motives is not a good default for the reader; assume the other person’s directness isn’t meant as not-niceness.
I’d also say that I think niceness is more important for persuasion on the net than it is in person; people who know you personally might assume more internal niceness even if you’re sometimes a condescending, sarcastic know-it-all. (Really.) That sort of instinctive goodwill towards one’s fellow man is tougher to generate without a handshake.
Cases 1 and 2 are both examples where the correct response is not to be not-nice, but to say nothing at all.
This also applies in case 3, except where you have evidence of deliberate dishonesty that you think other honest participants may not already be aware of and that needs to be brought to their attention.
In general, if you think someone isn’t worth being nice to, don’t address them at all. It’s OK to talk about trolls, but never talk to them.
In other words, if you can’t say something nice, don’t say anything :-)
Trolls. In any environment, people interested only in getting a reaction occasionally wander in. These people should be banned and ignored. Shunning is not a particularly nice thing, but even polite feedback is feedback. Do not feed these.
Agreed—but the process of determining trollhood should probably be nice.
The ineducable. Suppose a person asserts that in the Monty Hall problem the odds are 50-50 whether you switch or not. One or two efforts to educate nicely are good. Additional efforts are wasted and unproductive.
There are nice ways to give up on teaching people. Some of these go by names that sound horrendous to our collective project here (“agreeing to disagree”); some may be more palatable (“changing the subject”). Only if the person is hotly intent on mis-educating you (or others who you feel are themselves educable) does this warrant discarding niceness, and that’s probably just a special case of trolling anyway.
Some of these go by names that sound horrendous to our collective project here (“agreeing to disagree”); some may be more palatable (“changing the subject”).
I like “agreeing to postpone agreement”.
I think you need to add a fourth option: People with a blatant conflict of interest—most often a political affiliation. Even Wikipedia (which thrives on niceness and NPOV-seeking, if not truth-seeking) assumes bad faith when people try to edit their own articles, with its WP:COI policy. The legal system assumes bad faith about prosecutors and lawyers, which is why its standard of evidence is so extreme. And while the scientific method does not assume bad faith about scientists, it still protects against their naive errors of rationality, which is just as important.
Of course, we already know that prediction markets excel at integrating bad-faith actors into a rational, truth-seeking institution; finding a way to do the same thing in a deliberative forum comparable to Less Wrong would be an extremely useful development. My hunch is that it would be useful to steal a page from the playbook of politics and support clearly defined factions with different points of view and perhaps different policy proposals or decisions or what have you. But all of this is largely speculation.
tl;dr: I think Alicorn’s post is definitely cogent when it comes to LessWrong as we know it. But there’s a huge design space to be explored for more resilient institutions.
Hrm. Well, if politics itself is any example to judge by, that may make for a resilient institution—but the mess of allegiances and biases created by splitting people into well-defined factions probably means the institution would be much worse off in terms of truth-finding, because it would devote too much of its energy to internecine squabbling.
I suppose you need to strike a balance between unproductive antagonism and ending up as a group of like-minded folks just patting each other on the back. Thankfully, LW seems to have a strong dose of “Let’s get to the bottom of this”-type norms, and the appropriately rigorous/persnickety personalities, to stop it from getting too back-patty.
Still, I think we’d need some measure to prevent becoming permanently entrenched in factions. Maybe set an artificial time limit on clearly defined factions: every two weeks, everyone is told to give up factional loyalties and consider the evidence given; then, after a couple of days, the factions re-form along new boundaries.
That sounds pretty confusing. You might as well just not have officially sanctioned factions in the first place, right? People who agree on a given issue will naturally band together on it, but they won’t be so afflicted with the bias, or the pressure to make their whole range of opinions cohere with the group’s, that comes of being on a well-defined Side. There are already de facto ‘factions’ on any issue we might discuss, and everyone already feels continually obliged to examine the rationality of their positions, so it kind of seems like we’re already there!
I took bogus’s point to be that we can avoid some of the harms of bad-faith arguments if we make motivations explicit with clearly defined factions. That would be a reason to prefer official factions to de facto factions.
But my proposal might be too convoluted a solution for a problem that I haven’t really noticed here. And I’m not sure how much officially sanctioned factions would actually prevent bad-faith arguments.
But my proposal might be too convoluted a solution for a problem that I haven’t really noticed here.
You haven’t noticed this problem here because political debates are expressly discouraged at LessWrong. But we can easily imagine LW-like sites with the mission of making policy decision-making more rational and transparent: there is a fairly large literature on open politics, open source politics (a pun on two different usages of “open source”!), open source governance, e-democracy etc.
It’s the same problem Robin Hanson wants to address with his decision markets, though his solution is to avoid all the issues with deliberation by just deferring to the output of a betting market.
This isn’t Hanson’s position at all. Decision markets don’t solve the problem “how do we make a good decision”—they just improve incentives by deferring it to the investors. The investors still have the problem of what decision would be best, and deliberation mechanisms could still play an important role.
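To make the mechanics being discussed concrete: in a decision market, traders price an outcome conditional on each candidate decision, and the mechanism adopts whichever decision the market prices highest; trades on the branch that is not adopted are typically called off. A toy Python sketch of that selection step, with purely illustrative policy names and prices:

```python
def choose_decision(conditional_prices: dict[str, float]) -> str:
    """Adopt the decision whose conditional market price is highest.

    conditional_prices maps each candidate decision to the market price of
    "the outcome metric comes out well, given this decision is adopted",
    i.e. a crowd estimate of E[good outcome | decision].
    """
    return max(conditional_prices, key=conditional_prices.get)

# Illustrative numbers only; in a real decision market these prices would
# come from trading in conditional contracts, and trades on whichever
# branch is not adopted would typically be voided.
prices = {
    "adopt policy A": 0.62,
    "adopt policy B": 0.55,
}
print(choose_decision(prices))  # prints "adopt policy A"
```

Note that this sketch says nothing about how the traders form their estimates, which is exactly the point made above: deliberation mechanisms could still play a role on the traders’ side.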