Seems to me you misunderstand this aspect of trolling: someone systematically working to create an ugh field around some topic, person, or blog. Pavlovian conditioning through online communication.
Imagine a situation where every time you speak about a topic X, someone kicks you in the foot. Not too painfully, but unpleasantly enough. Imagine that there is no way to avoid this (except never speaking about X again). Do you expect that in the long term it would influence your emotions about X, and your ability to think about X clearly? If so, why would you want to give anyone this kind of power over you?
This is an art some people are very successful at. I don’t know exactly why they do it; maybe it is deliberate on their part, or maybe they have some bad emotions related to the topic or person and can’t resist sharing those emotions with a larger audience.
In the past I left a website I had participated on for a few years, just because one crazy person got angry at me over a specific disagreement (I criticized their favorite politician once), and then for the following months, whenever I posted a comment, on whatever topic, that person made sure to reply to me negatively. Each specific instance, viewed individually, could be interpreted as an honest disagreement. The problem was the pattern. After a few months, I was perfectly conditioned: I merely thought about writing a comment, and immediately I pictured myself reading another negative response from that person, other people reacting to that response, and… I stopped writing comments, because it felt bad.
I am not the only person who left that website because of that specific person. I tried to have a meta conversation about this kind of behavior, but the administrators made their values obvious: censorship is evil and completely unacceptable (unless swear words or personal threats are involved). Recently they acquired another website, whose previous owner agreed to stay on as a moderator. I happen to know the moderator personally, and a few days ago he told me he is considering quitting the job he used to love, because in a similar way most of his valuable contributors were driven away by a single dedicated person, whom the site owners refuse to censor.
If you have a sufficiently persistent person and an inflexible moderation policy, one person really is enough to destroy a website.
> If you have a sufficiently persistent person and an inflexible moderation policy, one person really is enough to destroy a website.
I agree that destructive people can do a lot of damage, and that removing them is a good idea. I also agree that destructiveness doesn’t even require maliciousness.
The strategy I’d like to see is “cultivate dissent.” If someone is being critical in an unproductive way, then show them the productive way to be critical, and if they fail to shape up, then remove them from the community, through a ban or deletion/hiding of comments. Documenting the steps along the way, and linking to previous warnings, makes it clear to observers that dissent is carefully managed, not suppressed.
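A graduated process like this could be tracked with a very small escalation record. The sketch below is purely illustrative: the ladder of steps, the class name, and the note strings are all invented here, not any particular forum’s moderation API. The point it demonstrates is that documenting each step, in order and without skipping rungs, is what makes the process auditable for observers.

```python
from dataclasses import dataclass, field

# Hypothetical escalation ladder: coach first, then warn, then remove.
LADDER = ["coach", "formal_warning", "ban"]

@dataclass
class ModerationRecord:
    user: str
    steps_taken: list = field(default_factory=list)  # audit trail of (step, note)

    def next_step(self) -> str:
        """Return the next rung of the ladder, never skipping ahead."""
        return LADDER[min(len(self.steps_taken), len(LADDER) - 1)]

    def apply(self, note: str) -> str:
        step = self.next_step()
        # Recording each step with its rationale lets moderators link
        # back to previous warnings when the next incident occurs.
        self.steps_taken.append((step, note))
        return step

record = ModerationRecord("example_user")
record.apply("shown how to phrase criticism productively")  # -> "coach"
record.apply("repeated unproductive sniping")               # -> "formal_warning"
record.apply("no change after warning")                     # -> "ban"
```

Whether the final rung is a ban or just hiding of comments is a policy choice; the structure only enforces that the lighter steps come first and remain on record.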
Tying the moderator reaction to whether the criticism is fun to receive, rather than whether it is useful to receive, is a recipe for receiving fun but useless criticisms and not receiving unfun but useful ones.
Receiving and processing unfun but useful criticisms is a core part of rationality, to the point that there are litanies about it.
The most counterproductive thing about the message deletion is that now I am insatiably curious about what the message said, am thinking far more about it, and have to spend cognitive effort worrying about whether Eliezer overstepped his bounds, in a way that (I suspect) is at least as bad as whatever the original comment was. (This remains the case whether or not the message was truly awful.)
> If someone is being critical in an unproductive way, then show them the productive way to be critical, and if they fail to shape up, then remove them from the community, through a ban or deletion/hiding of comments.
How specifically? I imagine it would be good to tell certain people: “you have already written twenty comments with almost the same content, so either write a full article about it, or shut up”.
The idea is that writing an article requires more work and better thinking, and now you are a person who must defend an idea instead of just attacking people who have different ideas. An article also focuses discussion of the topic in one place.
Even if someone thinks that, e.g., the whole LessWrong community is Eliezer’s suicidal cult, I would prefer that the person collect all their best evidence in one place, so people can focus on one topic and discuss it thoroughly, instead of posting dozens of sarcastic remarks in various, often unrelated places.
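Spotting the “twenty near-identical comments” pattern mechanically is a rough heuristic at best, but a minimal sketch might compare each new comment against a user’s recent ones with a standard-library similarity measure. The thresholds below are invented for illustration, not tuned values:

```python
from difflib import SequenceMatcher

def is_repetitive(new_comment: str, recent_comments: list[str],
                  similarity_threshold: float = 0.8,
                  repeat_limit: int = 20) -> bool:
    """Flag a user who keeps posting near-identical comments.

    Counts how many of the user's recent comments are highly similar to
    the new one; past `repeat_limit`, a moderator might suggest writing
    a full article instead.
    """
    similar = sum(
        1 for old in recent_comments
        if SequenceMatcher(None, new_comment.lower(), old.lower()).ratio()
           >= similarity_threshold
    )
    return similar >= repeat_limit
```

Such a flag would only surface candidates for a human moderator; string similarity alone cannot distinguish a campaign from someone legitimately restating a point.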
> I imagine it would be good to tell certain people: “you have already written twenty comments with almost the same content, so either write a full article about it, or shut up”.
I like this idea quite a bit, though I would word it more politely.
I also imagine that many posters would benefit from suggestions on how to alter their commenting style in general, as well as specific suggestions about how to apply those communication principles to this situation.
> Tying the moderator reaction to whether the criticism is fun to receive, rather than whether it is useful to receive, is a recipe for receiving fun but useless criticisms and not receiving unfun but useful ones.
Retaliatory sniping like what you described is common, both online and IRL, and is not easy to moderate against. It is present on this forum as well, to some degree, and is occasionally complained about. The problem is that it is hard to prevent, since each specific instance usually does not break the usual ground rules. A couple of places I know have an informal “no sniping” rule, but it is quite subjective, and violations are hard to prove except in extreme cases. An enforcement attempt by the mods is often costly, as it tends to evoke the ire of the egalitarian rank and file, who see only the tip of the iceberg.
Interestingly, on karma-based forums it often takes the form of downvoting, with impunity, everything (or almost everything) written by a poster you don’t like. Because of its zero cost it is hard to resist, and because of its anonymity it is hard to guard against. Fortunately, it is not as destructive as explicit sniping, since the hate-driven downvotes tend to get overwhelmed by the relevant feedback, whether positive or negative.
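The mass-downvoting pattern is actually one of the easier abuses to surface from vote logs, precisely because it is systematic: almost all of one account’s downvotes land on a single author. A toy check along those lines is sketched below; the tuple layout and both thresholds are assumptions for illustration, not any real forum’s schema:

```python
from collections import Counter

def serial_downvoters(votes, min_downvotes=10, target_share=0.9):
    """Find voters whose downvotes overwhelmingly target one author.

    `votes` is an iterable of (voter, author, value) tuples, with value -1
    for a downvote. A voter who casts many downvotes, almost all at a
    single author, fits the "downvote everything they write" pattern.
    """
    per_voter = {}
    for voter, author, value in votes:
        if value == -1:
            per_voter.setdefault(voter, Counter())[author] += 1
    flagged = {}
    for voter, targets in per_voter.items():
        total = sum(targets.values())
        author, hits = targets.most_common(1)[0]
        if total >= min_downvotes and hits / total >= target_share:
            flagged[voter] = author
    return flagged
```

Because the check needs voter identities, it is only available to site operators with access to the raw vote log, which is consistent with the observation that ordinary users cannot guard against this themselves.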