Requests are not obligations, and what follows is that other people are not responsible for your actions.

Of course they aren’t. But if I’m not a moral expert, and I’m not an expert at knowing who is a moral expert, then whose counsel should I trust?
What you’re seeing here is the culmination of a LOT of moral processing. Eugine’s plausible outcomes, my plausible outcomes, the community’s plausible outcomes… this is far, far more data than I know how to accurately process, and all the heuristics I can fall back on have known serious flaws, but no known good compensatory algorithms.
All that’s left is moral experimentation, which I find terrifying. But is action selfish weakness, or is failure to act moral cowardice? And how do I find out, unless I commit to a course of action and then analyze its consequences? (Assuming I’m even competent to do so, which is itself far from certain.)
ETA: Does anyone have any good recommendations, beyond the Sequences/etc., where someone without financial means could go to learn better ethical heuristics?

Captain Awkward?
Spend more time irl with people to see what actually works and what doesn’t. People do most of the experimentation for you.
What you’re seeing here is the culmination of a LOT of moral processing. Eugine’s plausible outcomes, my plausible outcomes, the community’s plausible outcomes...
As I see it, the problem isn’t the complexity of moral processing, but that you fail to recognize the important parts. Your failure here is fairly simple. Let’s take the community’s plausible outcomes, because yours and Eugine’s are a drop in the ocean.

Do you wish this kind of mud slinging to become the community norm? “Oh, I’m 25% sure that ialdabaoth is block-downvoting me, and 50% sure it’s bayeslisk.” Seriously, I don’t know how trustworthy you are, so your probabilities provide me almost zero information. You, however, provided a name, and I can’t see the reason being anything other than a personal grudge. I’m sure people who share it are happy to join you.
If you want karma to be more accurate, this is not the way to go. Trying to introduce a less abusable system might be.
I can’t see the reason being anything other than a personal grudge.
Really?
I can see at least two other (closely linked) reasons for ialdabaoth’s providing the name of the conjectured culprit. (1) Two people specifically asked him to do it. (2) Abuse of the LW karma system is damaging to the whole community and everyone benefits if such abuse results in public shaming.
There are, of course, reasons in the other direction (the danger you mention, of such accusations becoming commonplace and themselves being used as a tool of abuse and manipulation; and the danger that people will be more reluctant to disagree with Eugine because they don’t want him to do to them what he is alleged to be doing to ialdabaoth). So it’s not obvious that ialdabaoth did well to reveal the name. But there seem to be obvious reasons other than “a personal grudge”.
Note there’s not even a consensus whether there should be a rule against such usage. What you find to be ‘abuse’ others may find to be valid expressions within the system of wanting someone to ‘go away’. No details, no demarcation line, but calling for ‘public shaming’? Please.

Before you say “well, it’s implicitly clear!”, consider 1) that you may be suffering from the typical mind fallacy, and 2) the precedent that there even had to be an explicit post about not recommending violence against actual people on LW (so much for ‘implicitly clear’).

Lastly, don’t counter jerks by advocating being a jerk. “Benefits” and “public shaming”… don’t get me started. What’s this, our community’s attempt at playing Inquisition? Can’t we skip those stages? You’re already participating, mentioning a possible culprit who broke a non-existent rule in the same comment that calls for public shaming.
In that vein, hey gjm, I heard you stopped beating your wife? Good for you!
there’s not even a consensus whether there should be a rule against such usage.
There could be consensus that it’s harmful without consensus that there should be a rule against it. (I have no idea whether there is.) After all, LW gets by fairly well with few explicit rules. In any case, all that’s relevant here is that ialdabaoth might reasonably hold that such behaviour is toxic and should be shamed since the question was whether his actions have credible reasons other than a personal grudge.
Was there ever a similar poll about whether there should be a community norm against such actions? About whether such actions are generally highly toxic to the LW community?
our community’s attempt at playing Inquisition?
A brief perspective check: The Inquisition tortured people and had them executed for heresy. What has happened here is that someone said “I think so-and-so probably did something unpleasant”.
You’re already participating, naming a possible culprit
It is possible that you have misunderstood what I said—which is not that I think Eugine did what ialdabaoth says he probably did (I do have an opinion as to how likely that is, but have not mentioned it anywhere in this discussion).
What I said is that if it comes to be believed that Eugine acted as ialdabaoth says he thinks he probably did, then that may lead LW participants to shy away from expressing opinions that they think Eugine will dislike. This can happen even if Eugine is wholly innocent.
mentioning a possible culprit who broke a non-existent rule in the same comment that calls for public shaming.
This seems rather tenuous to me. I did not accuse him of anything, nor did I call for him to be shamed. (Still less for anything to be done to him that would warrant the parallel with the Inquisition.)
There could be consensus that it’s harmful without consensus that there should be a rule against it.
Making rules against all harmful things is an FAI-complete problem. If someone were able to do that, they would be better off writing the rules in a programming language and creating a Friendly AI.
Let’s assume we have a rule: “it is forbidden to downvote all posts by someone; we detect such behavior automatically with a script; and the punishment is X”. What will most likely happen?
a) The mass downvoters will switch to downvoting all comments but one.
b) A new troll will come to the website and post three idiotic comments; someone will downvote all three of them and unknowingly trigger the illegal-downvoting detection script.
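A minimal sketch of that literal rule, just to make the two failure modes concrete (the vote-record layout and function names here are hypothetical, invented for illustration, not anything LW actually runs):

```python
# Hypothetical sketch of the naive rule described above.
# `votes` maps (voter, author) -> set of post ids the voter has downvoted;
# `posts_by` maps author -> set of that author's post ids.

def violates_naive_rule(voter, author, votes, posts_by):
    """True iff `voter` has downvoted literally every post by `author`."""
    downvoted = votes.get((voter, author), set())
    all_posts = posts_by.get(author, set())
    return len(all_posts) > 0 and downvoted == all_posts

# Failure (a): downvote everything except one post and this never fires.
# Failure (b): a new account with three bad posts, all three fairly downvoted
# by one honest reader, satisfies the condition and flags that reader.
```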
Thank you for that nice clear demonstration that there are reasons for not wanting a rule against mass-downvoting that don’t involve thinking mass-downvoting isn’t a very bad thing.
I think you exaggerate, though. Making good enough rules might not be an FAI-complete problem. E.g., the rules and/or automatic detection mechanism might leave the matter partly to moderators’ discretion (or to other users’, if all that happens on a violation is that a complete description of what you did gets posted automatically).
(The previous paragraph is not intended as an endorsement of having such rules. Just observing that it might be possible to have useful ones without needing perfect ones.)
This may be a demonstration that ultimately, if you want to constrain human beings to achieve a complex goal, you need human moderation. (Or, of course, moderation by FAI, but we don’t have one of those.)
Yes. Of course, LW has human moderators, or at least admins—but they don’t appear to do very much human moderation. (Which is fair enough—it’s a time-intensive business.)
Yeah, other possibilities exist. What I meant to say is that my social heuristics strongly point to a particular interpretation of the situation based on why people usually seem to be doing these kinds of things.
A socially competent person should have some idea of what accusing people publicly means. What follows, I think, is that he did it to hurt Eugine, or that he’s not socially competent.
A socially competent person should have some idea of what accusing people publicly means.
Yup.
That in my mind means that he did it to hurt Eugine, or that he’s not socially competent.
Doesn’t follow. It means that he did it knowing it would hurt Eugine or else is not socially competent. But a thing can have predictable consequences that are not reasons for your doing it. A medically competent person knows that major surgery causes pain and inconvenience and risk, but that doesn’t mean that someone medically competent undergoing or recommending major surgery must be doing it to bring about the pain and inconvenience and risk. They’re doing it for some other benefit, and putting up with those unfortunately unavoidable side effects.
(I don’t know ialdabaoth. It is possible that s/he did intend to hurt Eugine. But I don’t see any good evidence for that, nor any grounds for assuming it.)
But a thing can have predictable consequences that are not reasons for your doing it.
Well, you got me here. I think the expected positive value of the action is so low that using that as justification for the highly probable negative value seems kinda weird. Surgeons don’t usually cut people up just because it might have some benefit.
Do you wish this kind of mud slinging to become the community norm?
The question is ambiguous.
Sense 1: “Do you want it to become normal for people to throw out such accusations when they have good reason to think they’re being mass-downvoted?”
Sense 2: “Do you want it to become normal for people to throw out such accusations just as a means of causing trouble for others?”
Clearly no one wants #2, but there’s something to be said for #1.
As it stands, hyporational’s challenge here seems like a fully general objection to anyone ever complaining about any alleged abuse that isn’t trivial to verify. [EDITED to add: More specifically, complaining and naming names.] I don’t think the world would be a better place if no one ever complained about any alleged abuse that isn’t trivial to verify. [EDITED to add: Or even if no one ever named the alleged abuser in such cases.] In the present instance, there’s at least good evidence (see satt’s comment) that someone is doing to ialdabaoth what he claims someone is doing.
The behaviour ialdabaoth is complaining about seems to me extremely bad for LW, and indeed a “less abusable system” would be good. So far as I can tell, no one has so far proposed one, and I bet it would be difficult to get a substantially different system in place. So proposing that as an alternative to complaining isn’t very reasonable.
Nice strawman.

Sense 1 and Sense 2 can’t be reliably distinguished from the outside.
As it stands, hyporational’s challenge here seems like a fully general objection to anyone ever complaining about any alleged abuse that isn’t trivial to verify.
It isn’t. It’s an objection against naming people without providing reliable evidence. Complain all you wish for all I care, but if you wish to handle the situation, do it by changing the system, not by taking justice into your own hands.
What am I portraying you as saying that differs from what you’re actually saying? I’m certainly not intentionally putting up strawmen. (If you mean the thing where I agree below that I should have been more explicit, then, er, I agree.)
Sense 1 and Sense 2 can’t be reliably distinguished from the outside.
Indeed they can’t, but they are still different things; it’s reasonable to have different attitudes towards #1 becoming a community norm and towards #2 becoming a community norm; and what encourages #1 and what encourages #2 might be different.
It’s an objection against naming people without providing reliable evidence.
You’re right—I should have said “complaining and providing names”. Sorry about that. I shall edit my comment to clarify.
do it by changing the system, not by taking justice into your own hands.
In what way do you think I’m taking justice into my own hands? What do you think anyone concerned can actually, realistically, do to change the system?
(In principle, one could change the system by changing how the LW karma system works in a way that eliminates the possibility of anonymous mass-downvoting. In practice, so far no one has proposed a change that would accomplish this and so far as I know no one knows of any such change that would work well. And in practice, even with such a change fully designed it would then be necessary to arrange for it to be incorporated into the LW codebase; it is reasonable to suspect that the odds of that are not good.)
This is one of the conversations I was hoping would be sparked by my complaint, and is the reason why I did not mention names until pressured. (My cost/benefit analysis of mentioning names may well have been flawed; if it was, I will gladly redact [although I’m not sure how much harm that would mitigate at this point {yay recursive parentheses!}])
I would like to see a system that flags block downvotes for review by a human administrator. I agree that an automatic punishment that triggers whenever you downvote everything would be absurd, but that’s a strawman. Something like this is completely viable:

If I downvote someone whose posts I have already downvoted more than 70% of, AND their net karma ratio is greater than 60%, automatically forward the downvoted post, and the downvoter’s name, to a human admin for investigation. (I might make the algorithm slightly more aware, and require that [downvotes - upvotes > 70% of posts].)
If the downvotee is clearly a troll, a human admin (who is already trusted with this position) will be in an excellent position to make that judgment. If the downvoter is clearly being retributive, a human admin (who is already trusted with this position) will be in an excellent position to make that judgment.
Since it’s automatic and only triggers on the downvoter’s action, a potential downvotee can’t use it as part of a ‘wounded gazelle’ gambit. Since it punts the actual decision-making to a human in whom the community has already invested admin status, ‘literal genie’/automation concerns are replaced with human expertise. The only concern left is that admins will fail to be impartial or will fail to do their job, in which case the community has far bigger problems.
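A rough sketch of that trigger in Python, just to pin the proposal down; the 70%/60% thresholds come from the comment above, while the helper names and the reading of “net karma greater than 60%” as an upvote ratio are assumptions for illustration, not an actual LW mechanism:

```python
# Hypothetical sketch of the proposed review trigger. The karma_db helpers
# are stand-ins for whatever the real karma backend could provide.

NET_DOWNVOTE_SHARE = 0.70   # (downvotes - upvotes) from this voter > 70% of the author's posts
AUTHOR_KARMA_RATIO = 0.60   # author's overall upvote ratio above 60%, i.e. probably not a troll

def should_flag_for_review(voter, author, karma_db):
    posts = karma_db.posts_by(author)
    downs = karma_db.downvotes_from(voter, author)
    ups = karma_db.upvotes_from(voter, author)

    net_share = (len(downs) - len(ups)) / max(len(posts), 1)
    return (net_share > NET_DOWNVOTE_SHARE
            and karma_db.upvote_ratio(author) > AUTHOR_KARMA_RATIO)

# On True: forward the downvoted post and the downvoter's name to a human
# admin for investigation; no automatic punishment is applied.
```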
The only concern left is that admins will fail to be impartial or will fail to do their job, in which case the community has far bigger problems.
Also that as the set of tasks described as “their job” increases, it becomes less likely that trusted uncompensated human admins will be interested in the job.
...and that, yes. I shall meditate upon this further.

This seems like an excellent solution.
Indeed they can’t, but they are still different things; it’s reasonable to have different attitudes towards #1 becoming a community norm and towards #2 becoming a community norm; and what encourages #1 and what encourages #2 might be different.
Encouraging #1 unavoidably encourages #2 too, because it provides plausible deniability. I have no idea how bad it could get.
You’re right—I should have said “complaining and providing names”. Sorry about that. I shall edit my comment to clarify.
This is what I meant by the strawman, thanks for catching it.
What do you think anyone concerned can actually, realistically, do to change the system?
Take all the people who complain about abuse, and brainstorm what a good system would be like. Make a post about the proposed solutions and have LW people vote. Find a programmer who can do it for free, or pay for one. Ask permission from an admin.

I proposed a solution in an open thread, but can’t find it. The idea was that most mass downvoting happens within a short time period in an angry mood, so limiting the number of votes one can give to a particular person within a time period could be a solution. People seemed to like it, based on the upvotes. It might not work for this particular situation, but it would at least make mass downvoting a nuisance for the culprit.

I’m not sure what the reddit platform allows for.
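For what it’s worth, a minimal sketch of such a limit, assuming a hypothetical per-pair vote log; the one-day window and the cap of five votes are placeholder numbers, not part of the proposal:

```python
# Hypothetical rate limiter: cap how many votes one user can cast on a single
# target author within a rolling time window. The numbers are placeholders.
import time

WINDOW_SECONDS = 24 * 60 * 60    # one day
MAX_VOTES_PER_TARGET = 5

def vote_allowed(voter, author, vote_log, now=None):
    """vote_log maps (voter, author) -> list of timestamps of earlier votes."""
    now = time.time() if now is None else now
    recent = [t for t in vote_log.get((voter, author), []) if now - t < WINDOW_SECONDS]
    return len(recent) < MAX_VOTES_PER_TARGET
```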
This is where I foresee the difficulty, if the change being proposed isn’t very small and simple and unequivocally an improvement.
limiting the number of votes one can give to a particular person within a time period could be a solution.
Yes, that seems like it would help—though, as you say, maybe not in this case which seems to be either a longstanding grudge or an attempt to intimidate people away from expressing certain sorts of views on LW. Your proposal does have the advantage of being small and simple.
Sense 1 and Sense 2 can’t be reliably distinguished from the outside.
I disagree. There may be specific cases where they are difficult to distinguish, but I think in general it is not so hard to reliably distinguish them. In this particular case, based on the model I’ve formed of ialdabaoth from reading a number of his comments, based on the specific arguments he has offered, and based on what others are saying, I’d assign sense 1 a considerably higher probability than sense 2, and I’m quite confident in this distinction. I would be very surprised if it turned out ialdabaoth was falsely accusing Eugine simply to cause him trouble.
Complain all you wish for all I care, but if you wish to handle the situation, do it by changing the system, not by taking justice into your own hands.
Introducing a norm of naming names is a mechanism for changing the system. It might be a change to the system that does more harm than good, but that is an empirical question, and one on which I suspect you are wrong. Labeling it as “taking justice into your own hands”, and contrasting it with “changing the system”, just seems like well-poisoning, a rhetorical maneuver to sidestep discussion of whether “complaining and naming names” is in fact a more effective way of changing the system than thinking up and trying to implement some software solution. Here I mean “effective” not just in terms of the probability of a strategy working, but the probability of the strategy being fully implemented in the first place.
I would be very surprised if it turned out ialdabaoth was falsely accusing Eugine simply to cause him trouble.
I don’t doubt he has some evidence for Eugine being the culprit. That doesn’t mean he didn’t name him to cause him trouble; in fact, it’s probably why he did so. I suppose Senses 1 and 2 don’t cover all the possibilities, then.
Introducing a norm of naming names is a mechanism for changing the system.
Would you call street justice a system? Do you think the press should publish the names of all people accused of a crime? Do you like the idea of being wrongly named? This is not well-poisoning, but an attempt to establish whether this works anywhere else. You’re expecting quite a lot from lesswrongians here.

Eugine’s karma ratio for the past month has dropped from 75% to 52% after he was named. What do you think of that?
You took action, after careful thought failed to provide an obviously safe pathway. That already puts you above most people, regardless of the validity of the action (I happen to agree with it, but it was obviously going to be contentious). So congrats and an upvote for that.
Regarding ethics, I wouldn’t even recommend the Sequences. Perhaps try one of the many philosophical resources out there on the web. Ethics is applied morality, and morality comes from within. The way to cultivate ethics is to apply your inner morality over and over again to various hypothetical situations, which is what most moral philosophical argumentation is about.
The hand-wringing in most of the parent comments about the ethics of ialdabaoth naming names is kind of amusing, given that ialdabaoth basically called Eugine_Nier out months ago with far less circumspection.

Yep, that was not one of my finer moments.
Well, he didn’t start a top-level discussion post about it back then, so there’s that. He also got downvoted because of those accusations back then, as I think he should be now.

I find it more interesting that Eugine didn’t deny the statement at all.

If you participate in a mud-slinging contest, even as the winner you’re still likely to end up full of mud.
Sure. Maybe I’m engaging in a typical mind fallacy, but if a comment like that came to me completely out of the blue I think my response would have at least been a “Bwwah? What?” sort of thing, not silence.
I love wedrifid’s response to ialdabaoth, and am considering implementing it myself.
It’s not a bad response. While I assert that wedrifid’s (and hyporational’s) assumptions about why I’m doing this are incorrect, you all have no reason to trust that assertion. From your perspective, this could easily be a simple grudge or whining or social ploy, and it makes good sense to respond to it the way you are.
That said, I’ll continue to take whatever karma hit you impose, because my own karma is less important than bringing attention to this sort of thing. I bring attention to my own case instead of other people’s because I’m closest to my own, but I have frequently thought “I can’t be the only one experiencing this”, and that has motivated me to complain rather than simply going away.
Part of the problem is that I have three different classes of situations in which I will post about karma.
Class 1 is when I notice that I am confused. My post will typically convey something like “why was this voted down?”. I fear that wedrifid has mistaken those posts for an attempt at shaming, but my actual intent was to say, “I thought karma was supposed to be used like {this}, but I see it being used like {that}. Please help me correct my understanding of karma’s purpose.”

Class 2 is when I have a reasonably strong suspicion that karma is being abused. My post will typically convey something like “is this really how we want to behave as a community?”. I can understand why another person’s view might blend these together with Class 1, but they actually are completely different. When wedrifid posted his admonition/threat, I took that opportunity to re-evaluate how I was communicating in Class 1 and Class 2. Hopefully I’m doing a little better.

Class 3 is when I am tired, and lonely, and perhaps a little irrational, and feel somewhat persecuted. My post will typically convey something like “why are you doing this to meeeeee?”. I can see why another person’s view might blend these together with Class 1 and Class 2, but unfortunately when I’m in that kind of mood, my rational faculties are not operating at peak performance. Whenever I do this, I actually APPRECIATE people like you and wedrifid downvoting that post to oblivion, because it provides useful social feedback not to do that shit. As an imperfectly rational being, I must rely on the social feedback of other imperfectly rational beings to improve my rationality.

This comment is really too long of a response to my comment, and I have no intention of reading it.