Two things have been bugging me about LessWrong and its connection to other rationality diaspora/tangential places.
1) Criticism on LW is upvoted a lot, leading to major visibility. This happens even when the criticism is quite vitriolic, as in Duncan’s Dragon Army Barracks post. Currently there are only upvotes for comments, with no multiple reactions like on Facebook, Vanilla Forums, or other places. So there’s no clear way to say something like “you bring up good points, but your tone is probably going to make other people feel attacked, and that’s not good”. You either give an upvote or you don’t.
I think this leads to potentially toxic comments that wouldn’t survive elsewhere (Facebook, Hacker News, Reddit, etc.) being far more prominently visible. (A separate but related issue: I think the burden of not pissing people off lies with the commenter. Giving unnecessarily sharply worded criticism and then complaining that the other person isn’t engaging well with you is bad practice.)
2) There seems to be a subset of people tangentially related to LW that really likes criticizing LW (?) From my exposure to several blogs / message boards, it seems fashionable/wise/something, in some sense, to call LW types childish/autistic/stupid (?) I’m curious why this is the case. It’s true that some people in the community lack social skills, and this often shows in posts that try to overanalyze social patterns and behavior. But why keep bringing this up? LW also has some pretty cool people who have written useful posts on beating procrastination, health, etc. But those positives don’t seem to get as much attention?
Critics are also a sign that the site is becoming more recognized and has started spreading around… You cannot control what other people choose to criticize, mainly because people get a status kick out of taking down others. When downvotes are resurrected, we’ll have some means of judging nasty or undue criticisms.
Also, it will be nice to have some tools to detect sockpuppets. Because if a nasty comment gets 20 upvotes, that doesn’t necessarily mean that 20 people upvoted it.
Well, how would you prevent someone registering multiple accounts manually? Going by IP could unfairly stop multiple people using the same computer (e.g. me and my wife) or even multiple people behind the same proxy server (e.g. the same job, or the same university).
I think the correct approach is to admit that you simply cannot prevent someone from creating hundreds of accounts, and to design the system in a way that doesn’t allow an army of a hundred zombies to do significant damage. One option would be to require something costly before allowing someone to upvote or downvote: typically, karma high enough that you can’t gain it in a week by simply posting three clever quotes to the Rationality Quotes thread. Preferably, require the high karma and, after that, personal approval by a moderator.
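The gating rule above could look something like this. A minimal sketch, not an actual LW mechanism; the karma cutoff, account-age requirement, and moderator-approval flag are all hypothetical numbers and names:

```python
from dataclasses import dataclass

# Hypothetical thresholds; the real values would be a moderation-policy decision.
KARMA_THRESHOLD = 100      # more than a week of clever quotes should plausibly earn
MIN_ACCOUNT_AGE_DAYS = 30  # slows down freshly created zombie accounts

@dataclass
class User:
    karma: int
    account_age_days: int
    approved_by_moderator: bool = False

def may_vote(user: User) -> bool:
    """Voting is costly to unlock: high karma, an aged account,
    and (preferably) explicit moderator approval."""
    return (user.karma >= KARMA_THRESHOLD
            and user.account_age_days >= MIN_ACCOUNT_AGE_DAYS
            and user.approved_by_moderator)

# A fresh zombie account fails the check even with farmed karma:
zombie = User(karma=150, account_age_days=2)
veteran = User(karma=150, account_age_days=400, approved_by_moderator=True)
```

The point is not that any single check is unbeatable, but that each one adds a per-account cost, which is what matters when the attacker needs a hundred accounts.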
Maybe some of this will be implemented in LessWrong 2.0, I don’t know.
Well, how would you prevent someone registering multiple accounts manually?
That’s beside the point: any user determined enough can create enough sock-puppets to be annoying. But I remember you said that, specifically in Eugene’s case, there was a glitch that allowed him to create multiple accounts automatically. The usual standard precautions should suffice here: a captcha and unique email verification should be enough of a deterrent.
you simply cannot prevent someone from creating hundreds of accounts
You can’t, but you can make the process more difficult and slower. This is, more or less, infosec, and here it’s rarely feasible to provide guarantees of unconditional safety. Generally speaking, the goal of defence is not so much to stop the attacker outright as to change his cost-benefit calculation so that the attack becomes too expensive.
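To make “more difficult and slower” concrete, here is one common shape such a defence takes; a sketch under assumed numbers, not LW’s actual mechanism. Each successive registration from the same source waits exponentially longer, so one account stays cheap while a hundred become prohibitively slow:

```python
# Sketch: exponential throttling of account creation per source
# (e.g. per IP block). The base delay and doubling rule are illustrative.
BASE_DELAY_SECONDS = 10

def required_delay(previous_registrations: int) -> int:
    """Delay before the next registration from the same source is accepted.
    Doubles with each prior registration: 10s, 20s, 40s, ..."""
    return BASE_DELAY_SECONDS * (2 ** previous_registrations)

# One account costs 10 seconds; the 20th costs about 60 days of waiting.
# This reshapes the attacker's cost-benefit calculation without hard-blocking
# legitimate shared-IP users (a spouse or coworker waits, rather than being banned).
cost_of_first = required_delay(0)   # 10 seconds
cost_of_20th = required_delay(19)   # 5,242,880 seconds, roughly 60 days
```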
a way that doesn’t allow an army of hundred zombies to do significant damage
The issue is detection: once you know they are zombies, their actions are not hard to undo.
The issue is detection: once you know they are zombies, their actions are not hard to undo.
Generally true, but with the Reddit code and the Reddit database schema, everything is hard (both detecting the zombies and undoing their actions). That’s one of the reasons to move to LessWrong 2.0.
(This may be difficult to believe until you actually try to download the Reddit/LW codebase and make it run on your home machine.)
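One simple way the detection step can work, sketched under the assumption that vote records are queryable (which, per the comment above, they may not easily be on the Reddit-derived schema): flag accounts whose vote histories overlap almost perfectly, then hand the cluster to a moderator to reverse their votes. The function names and the 0.9 threshold are made up for illustration:

```python
from itertools import combinations

def vote_overlap(votes_a: set, votes_b: set) -> float:
    """Jaccard similarity of two accounts' (comment_id, direction) vote sets."""
    if not votes_a or not votes_b:
        return 0.0
    return len(votes_a & votes_b) / len(votes_a | votes_b)

def flag_zombie_clusters(vote_log: dict, threshold: float = 0.9) -> set:
    """vote_log maps account name -> set of (comment_id, direction) pairs.
    Accounts that vote in near lockstep get flagged for moderator review."""
    flagged = set()
    for a, b in combinations(vote_log, 2):
        if vote_overlap(vote_log[a], vote_log[b]) >= threshold:
            flagged.update({a, b})
    return flagged

# Toy vote log: two accounts voting in lockstep, one voting independently.
log = {
    "zombie1": {(1, +1), (2, +1), (3, -1)},
    "zombie2": {(1, +1), (2, +1), (3, -1)},
    "normal":  {(1, +1), (4, -1), (5, +1)},
}
```

Real sockpuppet detection would combine several weak signals (timing, IP ranges, writing style) rather than rely on one, but the undo step is the same: once a cluster is confirmed, its votes are a known set that can be subtracted.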
Seems to me that when you find a vitriolic comment, there are essentially three options (other than ignoring it):
upvote it;
downvote it;
write a polite steelmanned version as a separate top-level comment, and downvote the original one.
The problem is, the third option is too much work. And the second option feels like: “omg, we can’t take any criticism; we have become a cult, just like some people have always accused us!”. So people choose the first option.
Maybe a good approach would be for the moderators to write a message like: “I am going to delete this nasty comment in 3 hours; if anyone believes it contains valuable information, please repost it as a separate top-level comment.”
There seems to be a subset of people tangentially related to LW that really likes criticizing LW
Some of them also like to play the “damned if you do, damned if you don’t” game (e.g. the Basilisk). Delete or don’t delete, you are a bad guy either way; you only get to choose what kind of bad guy you are: the horrible one who keeps nasty comments on his website, or the horrible one who censors information exposing the website’s dark secrets.
From my exposure to several blogs / message boards, it seems fashionable/wise/something, in some sense, to call LW types childish/autistic/stupid (?) I’m curious why this is the case.
Trolling or status games, I guess. For people who don’t have rationality as a value, it is fun (and more pageviews for their website) to poke the nerds and watch how they react. For people who have rationality as a value, it is a status move to declare that they are more rational than the stupid folks at LW.
At some point, trying to interpret everything charitably and to answer politely and reasonably will make you a laughingstock. The most important missing social skill is probably recognizing when you are simply being bullied. It is good to start with an assumption of good intent, but it is stupid to refuse to update in the face of overwhelming evidence.
For example, it is obvious that the people on RationalWiki are absolutely not interested in objectively evaluating the degree of rationality on LW; they enjoy their “snarky point of view” too much, which simply means bullying the outgroup, and they have already decided that we are the outgroup. Now that we have stopped giving a fuck about them, and have more or less called them stupid in return, their next step was editing the Wikipedia article about LW. Whatever. As they say, never wrestle with a pig: you get dirty, and besides, the pig likes it. Any energy spent debating them would be better spent e.g. writing new valuable content for LW.
Also, it will be nice to have some tools to detect sockpuppets. Because if a nasty comment gets 20 upvotes, that doesn’t necessarily mean that 20 people upvoted it.
Yes, there’s also that… has the glitch allowing a sock-puppet army been discovered and fixed?
And the second option feels like: “omg, we can’t take any criticism”.

You mean “the second option is disabled”, which would leave upvote or ignore.
True, but I guess some people were doing this even before the downvotes were disabled. Or sometimes we had a wave of downvotes first, then someone saying “hey, this contains some valid criticism, so I am going to upvote it, because we shouldn’t just hide criticism”, then a wave of contrarian upvotes, then a meta-debate… eh.