Note there’s not even a consensus on whether there should be a rule against such usage. What you find to be ‘abuse’ others may find to be valid expressions within the system of wanting someone to ‘go away’. No details, no demarcation line, but calling for ‘public shaming’? Please.
Before you say “well, it’s implicitly clear!”, consider 1) that you may be suffering from the typical mind fallacy, and 2) the precedent that there even had to be an explicit post about not recommending violence against actual people on LW (so much for ‘implicitly clear’).
Lastly, don’t counter jerks by advocating being a jerk. “Benefits” and “public shaming” … don’t get me started. What’s this, our community’s attempt at playing Inquisition? Can’t we skip those stages? You’re already participating, mentioning a possible culprit who broke a non-existent rule in the same comment that calls for public shaming.
In that vein, hey gjm, I heard you stopped beating your wife? Good for you!
there’s not even a consensus whether there should be a rule against such usage.
There could be consensus that it’s harmful without consensus that there should be a rule against it. (I have no idea whether there is.) After all, LW gets by fairly well with few explicit rules. In any case, all that’s relevant here is that ialdabaoth might reasonably hold that such behaviour is toxic and should be shamed, since the question was whether his actions had credible reasons other than a personal grudge.
Was there ever a similar poll about whether there should be a community norm against such actions? About whether such actions are generally highly toxic to the LW community?
our community’s attempt at playing Inquisition?
A brief perspective check: The Inquisition tortured people and had them executed for heresy. What has happened here is that someone said “I think so-and-so probably did something unpleasant”.
You’re already participating, naming a possible culprit
It is possible that you have misunderstood what I said—which is not that I think Eugine did what ialdabaoth says he probably did (I do have an opinion as to how likely that is, but have not mentioned it anywhere in this discussion).
What I said is that if it comes to be believed that Eugine acted as ialdabaoth says he thinks he probably did, then that may lead LW participants to shy away from expressing opinions that they think Eugine will dislike. This can happen even if Eugine is wholly innocent.
mentioning a possible culprit who broke a non-existing rule in the same comment that calls for public shamings.
This seems rather tenuous to me. I did not accuse him of anything, nor did I call for him to be shamed. (Still less for anything to be done to him that would warrant the parallel with the Inquisition.)
There could be consensus that it’s harmful without consensus that there should be a rule against it.
Making rules against all harmful things is an FAI-complete problem. If someone were able to do that, they would do better to spend their time writing the rules in a programming language and creating a Friendly AI.
Let’s assume we have a rule: “it is forbidden to downvote all posts by someone; we detect such behaviour automatically by a script, and the punishment is X”. What will most likely happen?
a) The mass downvoters will switch to downvoting all comments but one.
b) A new troll will come to the website and post three idiotic comments; someone will downvote all three of them and unknowingly trigger the illegal-downvoting detection script.
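Both failure modes can be made concrete with a toy sketch of the naive rule. Everything here — the function name, the vote format, the data shapes — is invented for illustration and is not any actual LW mechanism:

```python
# Hypothetical sketch of the naive rule: flag any voter who has downvoted
# *every* comment by some author. All names and data shapes are invented.

from collections import defaultdict

def find_mass_downvoters(votes, comments_by_author):
    """votes: list of (voter, comment_id, value), value -1 for a downvote.
    comments_by_author: dict mapping author -> set of their comment ids.
    Returns (voter, author) pairs where voter downvoted ALL of author's comments."""
    downvoted = defaultdict(set)  # voter -> ids of comments they downvoted
    for voter, comment_id, value in votes:
        if value == -1:
            downvoted[voter].add(comment_id)
    flagged = []
    for voter, ids in downvoted.items():
        for author, authored in comments_by_author.items():
            if authored and authored <= ids:  # subset test: every comment hit
                flagged.append((voter, author))
    return flagged

# Failure (a): the abuser downvotes all comments but one and is never flagged.
comments = {"victim": {1, 2, 3}}
evasive_votes = [("abuser", 1, -1), ("abuser", 2, -1)]  # skips comment 3
print(find_mass_downvoters(evasive_votes, comments))  # -> []

# Failure (b): a newcomer posts three bad comments; one honest voter
# downvotes all three and triggers the rule as a false positive.
comments = {"troll": {10, 11, 12}}
honest_votes = [("honest", 10, -1), ("honest", 11, -1), ("honest", 12, -1)]
print(find_mass_downvoters(honest_votes, comments))  # -> [('honest', 'troll')]
```

The sketch makes the point directly: the rule as literally stated is both trivially evadable and prone to punishing honest voters.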
Thank you for that nice clear demonstration that there are reasons for not wanting a rule against mass-downvoting that don’t involve thinking mass-downvoting isn’t a very bad thing.
I think you exaggerate, though. Making good enough rules might not be an FAI-complete problem. E.g., the rules and/or automatic detection mechanism might leave the matter partly to moderators’ discretion (or to other users’, if all that happens on a violation is that a complete description of what you did gets posted automatically).
(The previous paragraph is not intended as an endorsement of having such rules. Just observing that it might be possible to have useful ones without needing perfect ones.)
This may be a demonstration that ultimately, if you want to constrain human beings to achieve a complex goal, you need human moderation. (Or, of course, moderation by FAI, but we don’t have one of those.)
Yes. Of course, LW has human moderators, or at least admins—but they don’t appear to do very much human moderation. (Which is fair enough—it’s a time-intensive business.)