If Eugine Nier didn’t exist, we would have to invent him.
What are we calling retributive downvoting, incidentally? That seems a bit of a fuzzy term, and we should probably have a solid definition as we move into being able to respond to it.
What are we calling retributive downvoting, incidentally?
The targeted harassment of one user by another user to punish disagreement; letting disagreements on one topic spill over into disagreements on all topics.
That is, if someone has five terrible comments on politics and five mediocre comments on horticulture, downvoting all five politics comments could be acceptable but downvoting all ten is troubling, especially if it’s done all at once. (In general, don’t hate-read.)
Another way to think about this is that we want to preserve large swings in karma as signals of community approval or disapproval, rather than individuals using long histories to magnify approval or disapproval. It’s also problematic to vote up everything someone else has written because you really like one of their recent comments, and serial vote detection algorithms also target that behavior.
We typically see this as sockpuppets rather than serial upvoters, because someone who wants to abuse the karma system wants someone else’s karma (total, or over the last thirty days) to be low and a particular comment’s karma to be high, and having a second account upvote everything they’ve ever written isn’t as useful for the latter.
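(For concreteness, here is a minimal sketch of the kind of share-of-votes heuristic a serial-vote detector might run; the data shape, window, and thresholds are illustrative assumptions, not a description of any actual algorithm.)

```python
from collections import Counter
from datetime import datetime, timedelta

def flag_serial_voters(votes, window=timedelta(days=30),
                       share=0.5, min_votes=10):
    """Flag (voter, author) pairs where one voter cast an outsized share
    of an author's recent votes. `votes` is an iterable of
    (voter_id, target_author_id, timestamp) tuples -- a hypothetical shape."""
    now = datetime.now()
    recent = [(v, a) for v, a, t in votes if now - t <= window]
    per_pair = Counter(recent)                 # votes per (voter, author)
    per_author = Counter(a for _, a in recent) # total recent votes per author
    return [(v, a, n, per_author[a])
            for (v, a), n in per_pair.items()
            if n >= min_votes and n / per_author[a] >= share]
```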
Taking another tack—human beings are prone to failure. Maybe the system should accommodate some degree of failure as well, instead of punishing it.
I think one obvious measure would be a cap on the maximum percentage of a given user’s upvotes/downvotes that any single other user is allowed to be responsible for, particularly over a given timeframe. Ideally, just prevent users from upvoting/downvoting that user’s posts or comments any further once they hit the cap. This would help deal with the major failure mode of people hating one another. (A rough sketch of how this and the next suggestion might be enforced follows below.)
Another might be, as suggested somewhere else, preventing users from downvoting responses to their own posts/comments (and maybe prevent them from upvoting responses to those responses). That should cut off a major source of grudges. (It’s absurdly obvious when people do this, and they do this knowing it is obvious. It’s a way of saying to somebody “I’m hurting you, and I want you to know that it’s me doing it.”)
A third would be—hide or disable user-level karma scores entirely. Just do away with them. It’d be painful to lose that badge of honor for longstanding users, but maybe the emphasis should be on the quality of the content rather than the quality (or at least the tenure) of the author anyway.
Sockpuppets aren’t the only failure mode. A system which encourages grudge-making is its own failure.
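As promised above, a minimal sketch of how the first two suggestions might be enforced at vote time; every record shape and threshold below is a hypothetical stand-in, not an existing mechanism.

```python
from datetime import datetime, timedelta

PAIR_CAP = 0.25             # assumed: max share of an author's recent votes
WINDOW = timedelta(days=7)  # assumed rolling window
MIN_SAMPLE = 8              # don't apply the cap until there are enough votes

def can_vote(voter_id, comment, vote_log, now=None):
    """Return False when a vote should be blocked. `comment` and the
    entries of `vote_log` are hypothetical records with the fields used."""
    now = now or datetime.now()

    # Second suggestion: no voting on direct replies to your own material.
    if comment.parent_author_id == voter_id:
        return False

    # First suggestion: cap any one voter's share of an author's recent votes.
    recent = [v for v in vote_log
              if v.target_author_id == comment.author_id
              and now - v.timestamp <= WINDOW]
    mine = sum(1 for v in recent if v.voter_id == voter_id)
    if len(recent) >= MIN_SAMPLE and mine / len(recent) >= PAIR_CAP:
        return False
    return True
```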
I agree with you that grudge-making should be discouraged by the system.
Another might be, as suggested somewhere else, preventing users from downvoting responses to their own posts/comments
Hmm. I think downvoting a response to one’s material is usually a poor idea, but I don’t yet think that case is common enough to warrant preventing it outright.
I am curious now about the interaction between downvoting a comment and replying to it. If Alice posts something and Bob responds to it, a bad situation from the grudge-making point of view is Alice both downvoting Bob’s comment and responding to it. If it was bad enough to downvote, the theory goes, that means it is too bad to respond to.
So one could force Alice to choose between downvoting and replying to the children of posts she makes, in the hopes of replacing a chain of −1 snipes with either a single −1 or a chain of discussion at 0.
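As a sketch (with hypothetical record shapes), the either/or rule could be checked at downvote time:

```python
def may_downvote(user_id, comment, replies):
    """Block a downvote if the comment is a child of the user's own post
    and the user has already replied to it (the either/or rule above)."""
    is_child_of_own = (comment.parent_author_id == user_id)
    has_replied = any(r.author_id == user_id and r.parent_id == comment.id
                      for r in replies)
    return not (is_child_of_own and has_replied)
```

The mirror-image check would refuse a reply once the downvote has already been cast.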
I am curious now about the interaction between downvoting a comment and replying to it.
I have a personal policy of either replying to a comment or downvoting it, not both. The rationale is that downvoting is a message and if I’m bothering to reply, I can provide a better message and the vote is not needed. I am not terribly interested in karma, especially karma of other people. Occasionally I make exceptions to this policy, though.
I make rare exceptions. About the only time I do both is when I notice my opponent is doing it. (Not because I care that they’re doing it to me, or about karma, but because I regard it as a moral imperative to defect against defectors; if they care about karma enough to try it against me, I’m going to retaliate, on the grounds that it will probably hurt them as much as they hoped it would hurt me.)
I think it’s sufficient to just prevent voting on children of your own posts/comments. The community should provide what voting feedback is necessary, and any voting you engage in on responses to your material probably isn’t going to be high-quality rational voting anyways.
Blocking downvoting responses I could be convinced of, but blocking upvoting responses seems like a much harder sell.
My argument is symmetry, but the form that argument would take would be… extremely weak, once translated into words.
Roughly, however: you risk defining new norms by treating downvotes as uniquely bad compared to upvotes. We already have an issue where neutral karma is regarded by many as close to failure. This would accentuate that problem, and make upvotes worth less.
Begging your pardon, but I know the behavior you’re referring to; what concerns me with the increased ability to detect this behavior is the lack of a concrete definition for what the behavior is. That’s a recipe for disaster.
A concrete definition does enable “rule-lawyering”, but then the fuzziness sits at the boundary of the rules, which is an acceptable place for it, and narrow enough that human judgment at its worst won’t deviate too far from fair. E.g., to invent a rule: prohibit downvoting more than ten of another user’s comments in an hour. Then create one trigger that goes off at 8 or 9 (at which point the user gets flagged, and sufficient flags prompt a moderator to take a look), to catch those who rule-lawyer, and another that goes off at 10 and immediately punishes the infractor (maybe with a 100-karma penalty), while still letting people know what behavior is acceptable and unacceptable.
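A sketch of that two-tier trigger, using the thresholds and the 100-karma penalty from the example rule above (all of which are invented for illustration):

```python
from datetime import datetime, timedelta

SOFT_LIMIT, HARD_LIMIT = 8, 10   # the example rule's thresholds
WINDOW = timedelta(hours=1)
PENALTY = 100                    # the example karma penalty

def check_downvote_burst(downvote_times, now=None):
    """`downvote_times`: timestamps of one user's downvotes on a single
    target. Returns the action the example rule would take."""
    now = now or datetime.now()
    recent = sum(1 for t in downvote_times if now - t <= WINDOW)
    if recent >= HARD_LIMIT:
        return ("penalize", PENALTY)  # clear infraction: automatic sanction
    if recent >= SOFT_LIMIT:
        return ("flag", None)         # near the line: queue for a moderator
    return ("ok", None)
```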
To give a specific real-world case, I had a user who said they were downvoting every comment I wrote in a particular post, and encouraged other users to do the same, on the basis that they didn’t like what I had done there, and didn’t want to see anything like it ever again. (I do not want something to be done about that, to be clear, I’m using it as an example.) Would we say that’s against the rules, or no? To be clear, nobody went through my history or otherwise downvoted anything that wasn’t in that post—but this is the kind of situation you need explicit rules for.
Rules should also have explicit punishments. I think karma penalties are probably fair in most cases, and more extreme measures only as necessary.
Speaking as someone that’s done some Petty Internet Tyrant work in his time, rules-lawyering is a far worse problem than you’re giving it credit for. Even a large, experienced mod staff—which we don’t have—rarely has the time and leeway to define much of the attack surface, much less write rules to cover it; real-life legal systems only manage the same feat with the help of centuries of precedent and millions of man-hours of work, even in relatively small and well-defined domains.
The best first step is to think hard about what you’re incentivizing and make sure your users want what you want them to. If that doesn’t get you where you’re going, explicit rules and technical fixes can save you some time in common cases, but when it comes to gray areas the only practical approach is to cover everything with some variously subdivided version of “don’t be a dick” and then visibly enforce it. I have literally never seen anything else work.
Not to insult your work as a tyrant, but you were managing the wrong problem if you were spending your time trying to write ever-more specific rules. Rough rules are good; “Don’t be a dick” is perhaps too rough.
You don’t try to eliminate fuzzy edges; legal edge cases are fractal in nature, and you’ll never finish drawing lines. You draw approximately where the lines are, without worrying about getting it exactly right, and just (metaphorically) shoot the people who jump up and down next to the line going “Not crossing, not crossing!”. (Rule #1: There shall be no rule-lawyering.) They’re not worth your time. For the people random-walking back and forth, exercise the same judgment as you would for “Don’t be a dick”, and enforce it just as visibly.
(It’s the visible enforcement there that matters.)
The rough lines aren’t there so rule lawyers know exactly what point they can push things to: They’re so the administrators can punish clear infractions without being accused of politicizing, because if the administrators need to step in, odds are there are sides forming if not formed, and a politicized punishment will only solidify those lines and fragment the community. (Eugine Nier is a great example of this.)
Standing just on this side of a line you’ve drawn is only a problem if you have a mod staff that’s way too cautious or too legalistic, which—judging from the Eugine debacle—may indeed be a problem that LW has. For most sites, though, that’s about the least challenging problem you’ll face short of a clear violation.
The cases you need to watch out for are the ones that are clearly abusive but have nothing to do with any of the rules you worked out beforehand. There are always going to be a lot of those, and the more numerous and stricter your rules, the more of them you’ll see (there’s the incentives thing again).
I’m aware there are ways of causing trouble that do not involve violating any rules.
I can do it without even violating the “Don’t be a dick” rule, personally. I once caused a blog to explode by being politely insistent the blog author was wrong, and being perfectly logical and consistently helpful about it. I think observers were left dumbfounded by the whole thing. I still occasionally find references to the aftereffects of the event on relevant corners of the internet. I was asked to leave, is the short of it. And then the problem got infinitely worse—because nobody could say what exactly I had done.
A substantial percentage of the blog’s readers left and never came back. The blog author’s significant other came in at some point in the mess, and I suspect their relationship ended as a result. I would guess the author in question probably had a nervous breakdown; it wouldn’t be the first, if so.
You’re right that rules don’t help at all against certain classes of people. The solution is not to do away with rules, however, but to remember that they’re not a complete solution.
I’m not saying we should do away with rules. I’m saying that there needs to be leeway to handle cases outside of the (specific) rules, with more teeth behind it than “don’t do it again”.
Rules are helpful. A ruleset outlines what you’re concerned with, and a good one nudges users toward behaving in prosocial ways. But the thing to remember is that rules, in a blog or forum context, are there to keep honest people honest. They’ll never be able to deal with serious malice on their own, not without spending far more effort on writing and adjudicating them than you’ll ever be able to spend, and in the worst cases they can even be used against you.
Begging your pardon, but I know the behavior you’re referring to; what concerns me with the increased ability to detect this behavior is the lack of a concrete definition for what the behavior is. That’s a recipe for disaster.
My impression is that the primary benefit of a concrete definition is easy communication: if my concrete definition aligns with your concrete definition, then we can both be sure that we know it, that the other person knows it, and that this is common knowledge between us. So the worry here is the case where a third person comes in and we need to explain the ‘no vote manipulation’ rule to them.
I am not as impressed with algorithmic detection systems, because of the ease of evading them with algorithms, especially if the mechanics of any such system will be available on GitHub.
Would we say that’s against the rules, or no?
I remember that case, and I would put that in the “downvoting five terrible politics comments” category, since it wasn’t disagreement on that topic spilling over to other topics.
Rules should also have explicit punishments. I think karma penalties are probably fair in most cases, and more extreme measures only as necessary.
My current plan is to introduce karma weights, where we can easily adjust how much an account’s votes matter, and zero out the votes of any account that engages in vote manipulation. If someone makes good comments but votes irresponsibly, there’s no need to penalize their comments or their overall account standing when we can just remove the power they’re not wielding well. (This also makes it fairly easy to fix any moderator mistakes, since disenfranchised accounts will still have their votes recorded, just not counted.)
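A minimal sketch of the karma-weight scheme as described (the storage and function names here are placeholders, not the planned implementation):

```python
vote_weights = {}  # account_id -> weight; absent means the default of 1.0

def comment_score(votes):
    """`votes`: iterable of (voter_id, direction), direction in {+1, -1}.
    Every vote stays recorded; its weight decides how much it counts."""
    return sum(direction * vote_weights.get(voter, 1.0)
               for voter, direction in votes)

def disenfranchise(account_id):
    vote_weights[account_id] = 0.0  # zero the votes without deleting them

def reinstate(account_id):
    vote_weights.pop(account_id, None)  # undoing a moderator mistake is cheap
```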
I am not as impressed with algorithmic detection systems, because of the ease of evading them with algorithms, especially if the mechanics of any such system will be available on GitHub.
All security ultimately relies on some kind of obscurity, this is true. But the first pass should deal with -dumb- evil. Smart evil is its own set of problems.
I remember that case, and I would put that in the “downvoting five terrible politics comments” category, since it wasn’t disagreement on that topic spilling over to other topics.
You would. Somebody else would put it somewhere else. You don’t have a common definition. Literally no matter what moderation decision is made in a high-enough profile case like that—somebody is going to be left unsatisfied that it was politics that decided the case instead of rules.
The cheapest technical fix would probably be to prohibit voting on a comment after some time has passed, as some subreddits do. This would prevent karma gain from “interest” on old comments, but that probably wouldn’t be too big a deal. More importantly, though, it would only stop big one-time karma moves; it wouldn’t prevent ongoing retributive downvoting, which Eugine did engage in (sometimes? I was never targeted).
If we’re looking for first steps, though, this is a place to start.
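A sketch of that subreddit-style archive rule; the six-month window is an assumed value, since the comment doesn’t name one.

```python
from datetime import datetime, timedelta

ARCHIVE_AFTER = timedelta(days=180)  # assumed cutoff

def voting_open(comment_created_at, now=None):
    """Votes on a comment close once it passes the archive window."""
    now = now or datetime.now()
    return now - comment_created_at <= ARCHIVE_AFTER
```

The variant suggested below would apply this check to downvotes only.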
This would prevent karma gain from “interest” on old comments
If you want to reward having a long history of comments, you could prohibit only downvoting of old comments.
it wouldn’t prevent ongoing retributive downvoting
I doubt you could algorithmically distinguish between downvoting a horticulture post because of disagreements about horticulture and downvoting a horticulture post because of disagreements about some other topic.
But I suspect voting rate limiters should keep the problem in check.
What are we calling retributive downvoting, incidentally?
What bad guys do.
There is an occasionally quoted heuristic: “Vote up what you’d like to see more of; vote down what you’d like to see less of”. When good guys do that it’s called karma system working as intended. When bad guys do that it’s called abuse of the karma system.
This is simply untrue.
What gets called “retributive voting” is when you vote something down not because of its own (de)merits but because of its author. That’s bad for LW no matter who does it. Someone who does it much is (I suggest) ipso facto not a good guy any more.
I have never seen anyone defending such behaviour as “karma system working as intended”, so I’m not seeing the hypocrisy you complain of. Can you point to a couple of examples?
(It’s also an abuse of the karma system if you systematically vote someone’s comments up because you approve of that person. I’ve no idea whether that’s a thing that happens—aside from the case where the voter and the beneficiary are really the same person, which is an abuse of the system for other reasons—because it’s harder to notice: most people’s karma, most of the time, goes up rather than down, and the main way retributive downvoting gets spotted is when someone notices that they’ve suddenly lost a lot of karma.)
Actually, let’s take this in another direction: suppose the moderator(s) (is Nancy the only one left?) are out on vacation, and Eugine shows up again, and has already farmed enough karma to begin downvoting.
Would it be a Good Guy act, or a Bad Guy act, to downvote all of his karma-farming comments?
I’m not keen on this sort of binary classification. But: I don’t think I would do it in most versions of this scenario, though I dare say some other reasonable people would.
What’s interesting to me about your choice of scenario is that it’s one in which an “identity-based” sanction has already been applied: Eugine, specifically, is not supposed to be active here any more. It would not be so very surprising if that provided an exception to the general principle that voting should be content-based rather than identity-based.
What gets called “retributive voting” is when you vote something down not because of its own (de)merits but because of its author. That’s bad for LW no matter who does it. Someone who does it much is (I suggest) ipso facto not a good guy any more.
That’s the modern Less Wrongian perspective. Prior to Eugine’s ban, there was, in fact, some general support for the idea of getting rid of persistently bad users with user-based downvotes through the karma system. The Overton window was shifted by Eugine’s ban (and his subsequent and repeated reappearances, complete with the same behaviors).
I have never seen anyone defending such behaviour as “karma system working as intended”
You’re either newer than I thought, or didn’t pay attention. There was a -lot- of defense of this during Eugine’s ban by people worried that Less Wrong would be destroyed by bad users. (They by and large supported Eugine’s ban, as they objected to his automation of the practice, and also, I think, didn’t want to die on the hill of defending an extremely unpopular figure.)
My memory is very far from perfect, but I don’t remember there ever being much support for downvoting “bad” users into oblivion. Do you have a couple of links, perhaps? In any case, what Lumifer wrote was “When good guys do that it’s called karma system working as intended” and not “A few years ago, some people on LW were in favour of good guys doing that”, which seems to me a very different proposition indeed.
There was a -lot- of defense of this during Eugine’s ban
I’m just looking through the comments to the announcement of Eugine’s ban. There are a lot of comments. So far, the only instance I can find of someone defending mass-downvoting in some cases is … Lumifer.
… OK, there are a couple more: wedrifid suggesting it might be an appropriate treatment for trollish sockpuppets and MugaSofer not actually defending mass-downvoting but saying that some (unspecified) people think it is sometimes justified.
… And, having now finished (though I confess I skimmed some subthreads that didn’t seem likely to contain opinions on this point), that’s all I found. So we have Lumifer defending his right (in principle) to mass-downvote someone he thinks is a hopeless case; wedrifid suggesting that mass-downvoting might be an appropriate sanction for trollish sockpuppets and the like; and MugaSofer saying that some people think mass-downvoting is sometimes OK; and that’s it. That’s in a thread of hundreds of comments, a large fraction of which either explicitly say what an ugly thing mass-downvoting is or implicitly agree with the general sentiment.
That doesn’t look to me like “a -lot- of defense”. Maybe I looked in the wrong place. Again, do you have a link or two?
I cannot provide links, unfortunately, no, because most of it happened in background threads, although MugaSofer’s comment can be taken as confirmation that this was, in fact, being talked about. It was a semi-popular topic on how Less Wrong could be improved around that time, when I happened to be unusually active. (I left in disgust right before Eugine’s ban, IIRC, over the fact that my most upvoted comments were what I considered basic-level social sanity, while the stuff I wrote that I expected to be taken seriously tended to get downvoted. Later I realized that Less Wrong is just incredibly socially inept, but relatively skilled in the areas I expected to be taken seriously, so my comparative advantage went overwhelmingly toward my social skills, which were considerably better than I had thought at the time.) Eugine didn’t invent the idea of mass-downvoting; he merely implemented what was being discussed.
It seems that all we have here is your recollection of how much support the idea had (“semi-popular” or “a -lot-”; I’m not sure what the intersection of those two is) versus mine (scarcely any). I’m not sure we can make much further progress on that basis, but it really doesn’t matter because the actual question at issue was about opinions now; do you think there is currently any support to speak of on LW for constructive mass-downvoting?
Yeah, that’s what I’m afraid of.