What are we calling retributive downvoting, incidentally?
The targeted harassment of one user by another user to punish disagreement; letting disagreements on one topic spill over into disagreements on all topics.
That is, if someone has five terrible comments on politics and five mediocre comments on horticulture, downvoting all five politics comments could be acceptable but downvoting all ten is troubling, especially if it’s done all at once. (In general, don’t hate-read.)
Another way to think about this is that we want to preserve large swings in karma as signals of community approval or disapproval, rather than individuals using long histories to magnify approval or disapproval. It’s also problematic to vote up everything someone else has written because you really like one of their recent comments, and serial vote detection algorithms also target that behavior.
We typically see this as sockpuppets rather than serial upvoters: when someone wants to abuse the karma system, they want someone else's total (or last-thirty-days) karma to be low and a particular comment's karma to be high, and having a second account upvote everything they've ever written isn't as useful for the latter.
Taking another tack—human beings are prone to failure. Maybe the system should accommodate some degree of failure as well, instead of punishing it.
I think one obvious fix would be a cap on the maximum percentage of a given user's upvotes/downvotes that any single other user is allowed to be responsible for, particularly over a given timeframe. Ideally, just prevent users from upvoting/downvoting that user's posts or comments any further once they hit the cap. This would help deal with the major failure mode of people hating one another. (A sketch of what this could look like follows at the end of this comment.)
Another might be, as suggested somewhere else, preventing users from downvoting responses to their own posts/comments (and maybe prevent them from upvoting responses to those responses). That should cut off a major source of grudges. (It’s absurdly obvious when people do this, and they do this knowing it is obvious. It’s a way of saying to somebody “I’m hurting you, and I want you to know that it’s me doing it.”)
A third would be—hide or disable user-level karma scores entirely. Just do away with them. It'd be painful to do away with that badge of honor for longstanding users, but maybe the emphasis should be on the quality of the content rather than the quality (or at least the duration) of the author anyway.
Sockpuppets aren’t the only failure mode. A system which encourages grudge-making is its own failure.
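A minimal sketch of the first proposal, assuming an in-memory vote log and made-up thresholds (a real implementation would query whatever vote store the site actually uses):

```python
# Hypothetical per-pair vote cap: refuse a vote once one voter accounts
# for too large a share of the target's recently received votes.
from collections import deque
from datetime import datetime, timedelta

PAIR_CAP = 0.30               # assumed: max share of a target's recent votes
WINDOW = timedelta(days=30)   # assumed timeframe
MIN_SAMPLE = 10               # don't enforce the cap on tiny samples

votes = deque()               # (timestamp, voter_id, target_id)

def may_vote(voter_id, target_id, now=None):
    now = now or datetime.utcnow()
    recent = [v for t, v, tgt in votes
              if tgt == target_id and now - t <= WINDOW]
    if len(recent) < MIN_SAMPLE:
        return True
    share = recent.count(voter_id) / len(recent)
    return share < PAIR_CAP

def cast_vote(voter_id, target_id):
    if not may_vote(voter_id, target_id):
        raise PermissionError("per-user vote cap reached for this target")
    votes.append((datetime.utcnow(), voter_id, target_id))
```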
I agree with you that grudge-making should be discouraged by the system.
> Another might be, as suggested somewhere else, preventing users from downvoting responses to their own posts/comments
Hmm. I think downvoting a response to one’s material is typically a poor idea, but I don’t yet think that case is typical enough to prevent it outright.
I am curious now about the interaction between downvoting a comment and replying to it. If Alice posts something and Bob responds to it, a bad situation from the grudge-making point of view is Alice both downvoting Bob’s comment and responding to it. If it was bad enough to downvote, the theory goes, that means it is too bad to respond to.
So one could force Alice to choose between downvoting and replying to the children of posts she makes, in the hopes of replacing a chain of −1 snipes with either a single −1 or a chain of discussion at 0.
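A minimal sketch of that forced choice, with plain in-memory sets standing in for real state (all names invented):

```python
# Hypothetical "reply or downvote, not both" rule for children of one's
# own posts: whichever action happens first locks out the other.
replied = set()      # (author_id, comment_id): author replied to this child
downvoted = set()    # (author_id, comment_id): author downvoted this child

def reply_to_child(author_id, comment_id):
    if (author_id, comment_id) in downvoted:
        raise PermissionError("already downvoted this reply; pick one signal")
    replied.add((author_id, comment_id))

def downvote_child(author_id, comment_id):
    if (author_id, comment_id) in replied:
        raise PermissionError("already replied to this comment; pick one signal")
    downvoted.add((author_id, comment_id))
```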
> I am curious now about the interaction between downvoting a comment and replying to it.
I have a personal policy of either replying to a comment or downvoting it, not both. The rationale is that downvoting is a message and if I’m bothering to reply, I can provide a better message and the vote is not needed. I am not terribly interested in karma, especially karma of other people. Occasionally I make exceptions to this policy, though.
I make rare exceptions. About the only time I do it is when I notice my opponent is doing it. (Not because I care they’re doing it to me or about karma, but I regard it as a moral imperative to defect against defectors, and if they care about karma enough to try it against me, I’m going to retaliate on the grounds that it will probably hurt them as much as they hoped it would hurt me.)
I think it’s sufficient to just prevent voting on children of your own posts/comments. The community should provide what voting feedback is necessary, and any voting you engage in on responses to your material probably isn’t going to be high-quality rational voting anyways.
Blocking downvoting responses I could be convinced of, but blocking upvoting responses seems like a much harder sell.
My argument is symmetry, but the form that argument would take would be… extremely weak, once translated into words.
Roughly, however… you risk defining new norms by treating downvotes as uniquely bad compared to upvotes. We already have an issue where neutral karma is regarded by many as close to failure. It would accentuate that problem, and make upvotes worth less.
Begging your pardon, but I know the behavior you’re referring to; what concerns me with the increased ability to detect this behavior is the lack of a concrete definition for what the behavior is. That’s a recipe for disaster.
A concrete definition does enable rules-lawyering, but then we have a fuzzy area at the boundary of the rules, which is an acceptable place for fuzziness, and narrow enough that human judgment at its worst won't deviate too far from fair. For example (this rule doesn't actually exist), we could make a rule against downvoting more than ten of another user's comments in an hour, then create one trigger that goes off at 8 or 9 to catch the rules-lawyers (the user gets flagged, and sufficient flags prompt a moderator to take a look), and another that goes off at 10 and immediately punishes the infractor (maybe with a 100 karma penalty), while still letting people know what behavior is acceptable and unacceptable. (A sketch of the two triggers follows at the end of this comment.)
To give a specific real-world case, I had a user who said they were downvoting every comment I wrote in a particular post, and encouraged other users to do the same, on the basis that they didn’t like what I had done there, and didn’t want to see anything like it ever again. (I do not want something to be done about that, to be clear, I’m using it as an example.) Would we say that’s against the rules, or no? To be clear, nobody went through my history or otherwise downvoted anything that wasn’t in that post—but this is the kind of situation you need explicit rules for.
Rules should also have explicit punishments. I think karma penalties are probably fair in most cases, and more extreme measures only as necessary.
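A minimal sketch of that two-trigger scheme, using the thresholds floated above; the two hooks are hypothetical stand-ins for whatever flagging and penalty machinery actually exists:

```python
# Hypothetical two-threshold trigger for rapid serial downvoting.
SOFT_THRESHOLD = 8     # flag for human review just below the stated limit
HARD_THRESHOLD = 10    # the stated limit: automatic penalty
PENALTY = 100          # karma penalty floated above

def flag_for_moderator(voter, target):
    print(f"flag: {voter} is mass-downvoting {target}")   # stand-in hook

def apply_karma_penalty(voter, amount):
    print(f"penalty: {voter} loses {amount} karma")       # stand-in hook

def on_downvote(voter, target, downvotes_last_hour):
    """Called after `voter` downvotes one of `target`'s comments;
    `downvotes_last_hour` counts voter->target downvotes in the past hour."""
    if downvotes_last_hour >= HARD_THRESHOLD:
        apply_karma_penalty(voter, PENALTY)
    elif downvotes_last_hour >= SOFT_THRESHOLD:
        flag_for_moderator(voter, target)
```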
Speaking as someone who's done some Petty Internet Tyrant work in his time, rules-lawyering is a far worse problem than you're giving it credit for. Even a large, experienced mod staff—which we don't have—rarely has the time and leeway to define much of the attack surface, much less write rules to cover it; real-life legal systems only manage the same feat with the help of centuries of precedent and millions of man-hours of work, even in relatively small and well-defined domains.
The best first step is to think hard about what you’re incentivizing and make sure your users want what you want them to. If that doesn’t get you where you’re going, explicit rules and technical fixes can save you some time in common cases, but when it comes to gray areas the only practical approach is to cover everything with some variously subdivided version of “don’t be a dick” and then visibly enforce it. I have literally never seen anything else work.
Not to insult your work as a tyrant, but you were managing the wrong problem if you were spending your time trying to write ever-more-specific rules. Rough rules are good; “Don’t be a dick” is perhaps too rough.
You don’t try to eliminate fuzzy edges; legal edge cases are fractal in nature, and you’ll never finish drawing lines. You draw approximately where the lines are, without worrying about getting it exactly right, and just (metaphorically) shoot the people who jump up and down next to the line going “Not crossing, not crossing!” (Rule #1: There shall be no rules-lawyering.) They’re not worth your time. For the people random-walking back and forth, exercise the same judgment as you would for “Don’t be a dick”, and enforce it just as visibly.
(It’s the visible enforcement there that matters.)
The rough lines aren’t there so rules-lawyers know exactly what point they can push things to: they’re there so the administrators can punish clear infractions without being accused of politicizing, because if the administrators need to step in, odds are there are sides forming if not already formed, and a politicized punishment will only solidify those lines and fragment the community. (Eugine Nier is a great example of this.)
Standing just on this side of a line you’ve drawn is only a problem if you have a mod staff that’s way too cautious or too legalistic, which—judging from the Eugine debacle—may indeed be a problem that LW has. For most sites, though, that’s about the least challenging problem you’ll face short of a clear violation.
The cases you need to watch out for are the ones that are clearly abusive but have nothing to do with any of the rules you worked out beforehand. There are always going to be a lot of those, and more of them the more numerous and stricter your rules are (there’s the incentives thing again).
I’m aware there are ways of causing trouble that do not involve violating any rules.
I can do it without even violating the “Don’t be a dick” rule, personally. I once caused a blog to explode by being politely insistent the blog author was wrong, and being perfectly logical and consistently helpful about it. I think observers were left dumbfounded by the whole thing. I still occasionally find references to the aftereffects of the event on relevant corners of the internet. I was asked to leave, is the short of it. And then the problem got infinitely worse—because nobody could say what exactly I had done.
A substantial percentage of the blog’s readers left and never came back. The blog author’s significant other came in at some point in the mess, and I suspect their relationship ended as a result. I would guess the author in question probably had a nervous breakdown; it wouldn’t be the first, if so.
You’re right in that rules don’t help, at all, against certain classes of people. The solution is not to do away with rules, however, but to remember they’re not a complete solution.
I’m not saying we should do away with rules. I’m saying that there needs to be leeway to handle cases outside of the (specific) rules, with more teeth behind it than “don’t do it again”.
Rules are helpful. A ruleset outlines what you’re concerned with, and a good one nudges users toward behaving in prosocial ways. But the thing to remember is that rules, in a blog or forum context, are there to keep honest people honest. They’ll never be able to deal with serious malice on their own, not without spending far more effort on writing and adjudicating them than you’ll ever be able to spend, and in the worst cases they can even be used against you.
> Begging your pardon, but I know the behavior you’re referring to; what concerns me with the increased ability to detect this behavior is the lack of a concrete definition for what the behavior is. That’s a recipe for disaster.
My impression is that the primary benefit of a concrete definition is easy communication; if my concrete definition aligns with your concrete definition, then we can each be sure that we know it, that the other person knows it, and that both of those facts are mutually known. So the worry here is if a third person comes in and we need to explain the ‘no vote manipulation’ rule to them.
I am not as impressed with algorithmic detection systems because of the ease of evading them with algorithms, especially if the mechanics of any such system will be available on GitHub.
> Would we say that’s against the rules, or no?
I remember that case, and I would put that in the “downvoting five terrible politics comments” category, since it wasn’t disagreement on that topic spilling over to other topics.
> Rules should also have explicit punishments. I think karma penalties are probably fair in most cases, and more extreme measures only as necessary.
My current plan is to introduce karma weights, where we can easily adjust how much an account’s votes matter, and zero out the votes of any account that engages in vote manipulation. If someone makes good comments but votes irresponsibly, there’s no need to penalize their comments or their overall account standing when we can just remove the power they’re not wielding well. (This also makes it fairly easy to fix any moderator mistakes, since disenfranchised accounts will still have their votes recorded, just not counted.)
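A minimal sketch of that weighting scheme (names invented; the real storage would be the site's vote table):

```python
# Hypothetical karma weights: every vote is recorded, but an account's
# weight scales how much its votes count. Zeroing a weight
# disenfranchises the account without deleting its history, so a
# moderator mistake is reversible by restoring the weight.
vote_weight = {}                 # account_id -> weight; default 1.0

def score(votes):
    """votes: iterable of (account_id, direction), direction +1 or -1."""
    return sum(direction * vote_weight.get(account, 1.0)
               for account, direction in votes)

vote_weight["manipulator"] = 0.0   # zeroed out after vote manipulation
print(score([("alice", +1), ("manipulator", +1), ("bob", -1)]))  # -> 0.0
```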
> I am not as impressed with algorithmic detection systems because of the ease of evading them with algorithms, especially if the mechanics of any such system will be available on GitHub.
It’s true that all security ultimately relies on some kind of obscurity. But the first pass should deal with dumb evil. Smart evil is its own set of problems.
> I remember that case, and I would put that in the “downvoting five terrible politics comments” category, since it wasn’t disagreement on that topic spilling over to other topics.
You would. Somebody else would put it somewhere else. You don’t have a common definition. No matter what moderation decision is made in a high-enough-profile case like that, somebody is going to come away convinced that it was politics, not rules, that decided the case.
The cheapest technical fix would probably be to prohibit voting on a comment after some time has passed, as some subreddits do. This would prevent karma gain from “interest” on old comments, but that probably wouldn’t be too big a deal. More importantly, though, it would only stop big one-time karma moves; it wouldn’t prevent ongoing retributive downvoting, which Eugine did engage in at least sometimes (I was never targeted myself).
If we’re looking for first steps, though, this is a place to start.
> This would prevent karma gain from “interest” on old comments
If you want to reward having a long history of comments, you could prohibit only downvoting of old comments.
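A minimal sketch of the age cutoff, with a switch for the downvote-only variant (the cutoff value here is invented; subreddits typically archive at around six months):

```python
# Hypothetical voting age cutoff. With DOWNVOTES_ONLY set, old comments
# can still be upvoted, preserving "interest" on long comment histories.
from datetime import datetime, timedelta

MAX_AGE = timedelta(days=180)   # assumed cutoff
DOWNVOTES_ONLY = True           # restrict only downvotes on old comments

def may_vote_on(comment_created_at, direction, now=None):
    """direction: +1 for an upvote, -1 for a downvote."""
    now = now or datetime.utcnow()
    if now - comment_created_at <= MAX_AGE:
        return True                          # recent comment: anything goes
    return DOWNVOTES_ONLY and direction > 0  # old comment: upvotes at most
```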
> it wouldn’t prevent ongoing retributive downvoting
I doubt you could algorithmically distinguish between downvoting a horticulture post because of disagreements about horticulture and downvoting a horticulture post because of disagreements about some other topic.
But I suspect voting rate limiters should keep the problem in check.
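A minimal sketch of such a limiter, as a sliding window on voter/target pairs (limits invented for illustration):

```python
# Hypothetical per-pair rate limiter: cap how many votes one account may
# cast on a single other account's content per day, regardless of topic.
from collections import defaultdict, deque
from datetime import datetime, timedelta

WINDOW = timedelta(days=1)     # assumed window
MAX_VOTES_PER_TARGET = 5       # assumed per-pair daily cap

history = defaultdict(deque)   # (voter, target) -> vote timestamps

def allow_vote(voter, target, now=None):
    now = now or datetime.utcnow()
    q = history[(voter, target)]
    while q and now - q[0] > WINDOW:   # evict timestamps outside the window
        q.popleft()
    if len(q) >= MAX_VOTES_PER_TARGET:
        return False                   # over the cap; reject the vote
    q.append(now)
    return True
```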