You propose a dangerous thing.
Once there was an article deleted on LW. Since that happened, it has repeatedly been used as an example of how censored, intolerant, and cultish LW is. Can you imagine the reaction to banning a user account (if that is what you suggest)? Cthulhu fhtagn! If this happens, what will come next: a captcha on the LW wiki?
Instead, we should spend hundreds or thousands of man-hours engaging with trolls? At least Roko had a positive goal.
From your link:

This is the Internet: Anyone can walk in. And anyone can walk out. And so an online community must stay fun to stay alive. Waiting until the last resort of absolute, blatant, undeniable egregiousness—waiting as long as a police officer would wait to open fire—indulging your conscience and the virtues you learned in walled fortresses, waiting until you can be certain you are in the right, and fear no questioning looks—is waiting far too late.
Note to self: use metadata in comments when necessary, such as “irony” etc.
Perhaps there should be some automatic account-disabling mechanism based on karma. If someone’s total karma (not just the karma from the last 30 days) falls below some negative level (for example −100), their account would be automatically disabled. Without direct intervention by a moderator, to make it less personal, but also quicker. Without deleting anything, to allow an easy fix in case of karma assassinations.
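A minimal sketch of what such a rule might look like (all names below are invented for illustration; nothing here is taken from the actual LW codebase):

```python
# Hypothetical sketch of the proposed karma-threshold auto-disable rule.
# "User", "total_karma", "is_disabled", and "save" are invented names,
# not references to real LessWrong code.

KARMA_FLOOR = -100  # all-time karma below this disables the account automatically

def maybe_disable(user):
    """Disable posting for an account whose all-time karma is below the floor.

    Nothing is deleted, so if the low score turns out to be the result of a
    karma assassination, a moderator can simply re-enable the account.
    """
    if user.total_karma < KARMA_FLOOR and not user.is_disabled:
        user.is_disabled = True  # blocks new posts and comments; existing content stays
        user.save()
```

Because the check is purely mechanical, no moderator has to single anyone out, which fits the "less personal, but quicker" goal.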
What was ironic about it?
Perhaps it’s not the right word. Anyway, website moderation is full of “damned if you do, damned if you don’t” situations. Having bad content on your website puts you in a bad light. Removing bad content from your website also puts you in a bad light.
People will automatically associate everything on your website with you. Because it’s on your website, d’oh! This is especially dangerous with opinions which have a surface similarity to your expressed opinions. Most people will only remember: “I read this on LessWrong”.
That was the PR danger of Roko. If his “pro-Singularity Pascal’s mugging” comments had not been removed, many people would have interpreted them as something that people at SIAI believe. Because (1) SIAI is pro-Singularity, and (2) they need money, and (3) it’s on their website, d’oh! A hyperlink to such a discussion is all anyone would ever need to prove that LW is a dangerous organization.
On the other hand, if you ever remove anything from your website, it is proof that you are an evil Nazi who can’t tolerate free speech. What, are you unable to withstand someone disagreeing with you? (That’s how most trolls describe their own actions.) And deleting comments with surface similarities to yours is even more suspicious. What, you can’t tolerate even the smallest dissent?
The best solution, from a PR point of view, is probably to remove all offending comments without explanation, or to replace them with a generic explanation such as “this comment violated the LW Terms of Service”, with a hyperlink to a long and boring document containing a rule equivalent to ‘...and also moderators can delete any comment or article if they decide to.’ Also, if such deletions are rather common, not exceptional, the individual instances will draw less attention. (In other words, the best way to avoid censorship accusations is to have real censorship. Homo hypocritus, ahoy.)
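A minimal sketch of how that generic-notice approach could work (again, purely illustrative names and a placeholder URL, not an actual LW feature):

```python
# Illustrative sketch of replacing an offending comment with a generic notice
# instead of deleting it outright. "Comment", its fields, and TOS_URL are placeholders.

TOS_URL = "https://example.com/terms-of-service"  # link to the long, boring rules document
REMOVAL_NOTICE = "[This comment violated the Terms of Service: " + TOS_URL + "]"

def redact_comment(comment):
    """Swap the visible body for a uniform notice, keeping the original on record.

    Uniform wording makes individual removals unremarkable, and keeping the
    original text around makes the action reversible if it was a mistake.
    """
    comment.hidden_original_body = comment.body  # visible to moderators only
    comment.body = REMOVAL_NOTICE
    comment.save()
```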
The Roko Incident was one of the most exceptional events of article removal I’ve ever witnessed, for every possible reason: the high-status people involved, the reasons for removal, the tone of the conversation, the theoretical dangers of the knowledge, and the mass self-deletion event that followed. There are many reasons it gets talked about rather than the dozens of other posts which are deleted by the time I get around to clicking them in my RSS feed.
Nobody would miss private_messaging.
For my own part, if LW admins want to actively moderate discussion (e.g., delete substandard comments/posts), that’s cool with me, and I would endorse that far more than not actively moderating discussion but every once in a while deleting comments or banning users who are not obviously worse than the comments and users that go unaddressed.
Of course, once site admins demonstrate the willingness to ban submissions considered inappropriate, reasonable people are justified in concluding that unbanned submissions are considered appropriate. In other words, active moderation quickly becomes an obligation.
Note that you’re excluding a middle that is perhaps worth considering. That is, the choice is not necessarily between “dealing with” a user account on an admin level (which generally amounts to forcing the user to change their ID and not much more), and spending hundreds or thousands of man-hours in counterproductive exchange.
A third option worth considering is not engaging in counterproductive exchanges, and focusing our attention elsewhere. (AKA, as you say, “don’t feed the trolls”.)
Can you imagine the reaction to banning a user account (if that is what you suggest)? Cthulhu fhtagn!

Wait, what? Forums ban trolls all the time. It becomes necessary when you get big enough and popular enough to attract significant troll populations. It’s hardly extreme and cultish, or even unusual.