I can’t see the reason being anything other than a personal grudge.
Really?
I can see at least two other (closely linked) reasons for ialdabaoth’s providing the name of the conjectured culprit. (1) Two people specifically asked him to do it. (2) Abuse of the LW karma system is damaging to the whole community and everyone benefits if such abuse results in public shaming.
There are, of course, reasons in the other direction (the danger you mention, of such accusations becoming commonplace and themselves being used as a tool of abuse and manipulation; and the danger that people will be more reluctant to disagree with Eugine because they don’t want him to do to them what he is alleged to be doing to ialdabaoth). So it’s not obvious that ialdabaoth did well to reveal the name. But there seem to be obvious reasons other than “a personal grudge”.
Note there’s not even a consensus on whether there should be a rule against such usage. What you find to be ‘abuse’, others may find to be valid expressions within the system of wanting someone to ‘go away’. No details, no demarcation line, but calling for ‘public shaming’? Please.
Before you say “well, it’s implicitly clear!”, consider 1) that you may be suffering from the typical mind fallacy, and 2) the precedent that it took an explicit post to establish that recommending violence against actual people is off-limits on LW (so much for ‘implicitly clear’).
Lastly, don’t counter jerks by advocating being a jerk. “Benefits” and “public shaming” … don’t get me started. What’s this, our community’s attempt at playing Inquisition? Can’t we skip those stages? You’re already participating, mentioning a possible culprit who broke a non-existent rule in the same comment that calls for public shamings.
In that vein, hey gjm, I heard you stopped beating your wife? Good for you!
there’s not even a consensus whether there should be a rule against such usage.
There could be consensus that it’s harmful without consensus that there should be a rule against it. (I have no idea whether there is.) After all, LW gets by fairly well with few explicit rules. In any case, all that’s relevant here is that ialdabaoth might reasonably hold that such behaviour is toxic and should be shamed, since the question was whether his actions have credible reasons other than a personal grudge.
Was there ever a similar poll about whether there should be a community norm against such actions? About whether such actions are generally highly toxic to the LW community?
our community’s attempt at playing Inquisition?
A brief perspective check: The Inquisition tortured people and had them executed for heresy. What has happened here is that someone said “I think so-and-so probably did something unpleasant”.
You’re already participating, naming a possible culprit
It is possible that you have misunderstood what I said—which is not that I think Eugine did what ialdabaoth says he probably did (I do have an opinion as to how likely that is, but have not mentioned it anywhere in this discussion).
What I said is that if it comes to be believed that Eugine acted as ialdabaoth says he thinks he probably did, then that may lead LW participants to shy away from expressing opinions that they think Eugine will dislike. This can happen even if Eugine is wholly innocent.
mentioning a possible culprit who broke a non-existent rule in the same comment that calls for public shamings.
This seems rather tenuous to me. I did not accuse him of anything, nor did I call for him to be shamed. (Still less for anything to be done to him that would warrant the parallel with the Inquisition.)
There could be consensus that it’s harmful without consensus that there should be a rule against it.
Making rules against all harmful things is an FAI-complete problem. If someone were able to do that, they would be better off spending their time writing the rules in a programming language and creating a Friendly AI.
Let’s assume we have a rule “it is forbidden to downvote all posts by someone, we detect such behavior automatically by a script, and the punishment is X”. What will most likely happen?
a) The mass downvoters will switch to downvoting all comments but one.
b) A new troll will come to the website and post three idiotic comments; someone will downvote all three of them and unknowingly trigger the illegal-downvoting detection script.
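Both failure modes can be made concrete with a small sketch of the naive rule described above (all names, the vote representation, and the function itself are invented for illustration; this is not an actual LW script):

```python
# Naive rule: flag any voter who has downvoted *every* post by some target.
# Votes are stored as a dict mapping (voter, post) -> vote value (-1, 0, +1).

def flags_mass_downvoter(votes, voter, target_posts):
    """Return True if `voter` downvoted every one of `target_posts`."""
    return all(votes.get((voter, post)) == -1 for post in target_posts)

# (a) Evasion: downvote all posts but one, and the script never fires.
votes = {("eve", p): -1 for p in ["p1", "p2", "p3"]}
votes[("eve", "p4")] = 0  # leave a single post alone
print(flags_mass_downvoter(votes, "eve", ["p1", "p2", "p3", "p4"]))  # False

# (b) False positive: a newcomer posts three bad comments; one honest
# reader downvotes all three and trips the detector anyway.
votes2 = {("honest", p): -1 for p in ["t1", "t2", "t3"]}
print(flags_mass_downvoter(votes2, "honest", ["t1", "t2", "t3"]))  # True
```

The sketch shows why a purely mechanical threshold is hard to get right: any bright-line rule is both easy to game and prone to punishing legitimate voting.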
Thank you for that nice clear demonstration that there are reasons for not wanting a rule against mass-downvoting that don’t involve thinking mass-downvoting isn’t a very bad thing.
I think you exaggerate, though. Making good enough rules might not be an FAI-complete problem. E.g., the rules and/or automatic detection mechanism might leave the matter partly to moderators’ discretion (or to other users’, if all that happens on a violation is that a complete description of what you did gets posted automatically).
(The previous paragraph is not intended as an endorsement of having such rules. Just observing that it might be possible to have useful ones without needing perfect ones.)
This may be a demonstration that ultimately, if you want to constrain human beings to achieve a complex goal, you need human moderation. (Or, of course, moderation by FAI, but we don’t have one of those.)
Yes. Of course, LW has human moderators, or at least admins—but they don’t appear to do very much human moderation. (Which is fair enough—it’s a time-intensive business.)
Yeah, other possibilities exist. What I meant to say is that my social heuristics strongly point to a particular interpretation of the situation based on why people usually seem to be doing these kinds of things.
A socially competent person should have some kind of an idea what accusing people publicly means. What follows, I think, is that he did it to hurt Eugine, or that he’s not socially competent.
A socially competent person should have some kind of an idea what accusing people publicly means.
Yup.
That in my mind means that he did it to hurt Eugine, or that he’s not socially competent.
Doesn’t follow. It means that he did it knowing it would hurt Eugine or else is not socially competent. But a thing can have predictable consequences that are not reasons for your doing it. A medically competent person knows that major surgery causes pain and inconvenience and risk, but that doesn’t mean that someone medically competent undergoing or recommending major surgery must be doing it to bring about the pain and inconvenience and risk. They’re doing it for some other benefit, and putting up with those unfortunately unavoidable side effects.
(I don’t know ialdabaoth. It is possible that s/he did intend to hurt Eugine. But I don’t see any good evidence for that, nor any grounds for assuming it.)
But a thing can have predictable consequences that are not reasons for your doing it.
Well, you got me there. I think the expected positive value of the action is so low that invoking it to justify the highly probable negative value seems rather weird. Surgeons don’t usually cut people open just because it might have some benefit.