Begging your pardon, but I know the behavior you’re referring to; what concerns me with the increased ability to detect this behavior is the lack of a concrete definition for what the behavior is. That’s a recipe for disaster.
A concrete definition does enable “rule-lawyering”, but it confines the fuzziness to a band at the boundary of the rules, which is an acceptable place for fuzziness, and narrow enough that human judgment at its worst won’t deviate too far from fair. E.g., to invent a nonexistent rule: we could forbid downvoting more than ten of another user’s comments in an hour, then create one trigger that fires at 8 or 9 (at which point maybe the user gets flagged, and sufficient flags prompt a moderator to take a look) to catch those who rule-lawyer, and another that fires at 10 and immediately punishes the infractor (maybe with a 100 karma penalty). That still lets people know what behavior is acceptable and unacceptable.
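A minimal sketch of the two-trigger idea in code (everything here is hypothetical: the function, the thresholds, and the penalty are just the example numbers above, not a proposed design):

```python
from collections import defaultdict, deque
from datetime import datetime, timedelta, timezone

# Hypothetical numbers, taken straight from the example above.
FLAG_THRESHOLD = 8       # hovering near the line: flag for moderator review
PENALTY_THRESHOLD = 10   # the actual rule: automatic punishment
KARMA_PENALTY = 100
WINDOW = timedelta(hours=1)

# (voter, target) -> timestamps of that voter's recent downvotes on that target
recent_downvotes = defaultdict(deque)

def record_downvote(voter, target, now=None):
    """Record one downvote and return the action it triggers, if any."""
    now = now or datetime.now(timezone.utc)
    window = recent_downvotes[(voter, target)]
    window.append(now)
    # Keep only the downvotes cast within the last hour.
    while window and now - window[0] > WINDOW:
        window.popleft()
    if len(window) >= PENALTY_THRESHOLD:
        return ("penalize", KARMA_PENALTY)    # crossed the line: immediate
    if len(window) >= FLAG_THRESHOLD:
        return ("flag_for_moderator", None)   # rule-lawyer zone: a human looks
    return (None, None)
```

The point of the two thresholds is that the hard rule stays mechanical, while the 8–9 zone where rule-lawyers live gets human eyes instead of automatic punishment.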
To give a specific real-world case, I had a user who said they were downvoting every comment I wrote in a particular post, and encouraged other users to do the same, on the basis that they didn’t like what I had done there and didn’t want to see anything like it ever again. (To be clear, I don’t want anything done about that; I’m using it as an example.) Would we say that’s against the rules, or no? Nobody went through my history or downvoted anything outside that post, but this is exactly the kind of situation you need explicit rules for.
Rules should also have explicit punishments. I think karma penalties are probably fair in most cases, and more extreme measures only as necessary.
Speaking as someone who’s done some Petty Internet Tyrant work in his time, rule-lawyering is a far worse problem than you’re giving it credit for. Even a large, experienced mod staff (which we don’t have) rarely has the time and leeway to map out much of the attack surface, much less write rules to cover it; real-life legal systems only manage that feat with the help of centuries of precedent and millions of man-hours of work, even in relatively small and well-defined domains.
The best first step is to think hard about what you’re incentivizing and make sure your users want what you want them to want. If that doesn’t get you where you’re going, explicit rules and technical fixes can save you some time in common cases, but when it comes to gray areas the only practical approach is to cover everything with some variously subdivided version of “don’t be a dick” and then visibly enforce it. I have literally never seen anything else work.
Not to insult your work as a tyrant, but you were managing the wrong problem if you were spending your time trying to write ever-more-specific rules. Rough rules are good; “Don’t be a dick” is perhaps too rough.
You don’t try to eliminate fuzzy edges; legal edge cases are fractal in nature, and you’ll never finish drawing lines. You draw approximately where the lines are, without worrying about getting it exactly right, and just (metaphorically) shoot the people who jump up and down next to the line going “Not crossing, not crossing!” (Rule #1: There shall be no rule-lawyering.) They’re not worth your time. For the people random-walking back and forth, exercise the same judgment as you would for “Don’t be a dick”, and enforce it just as visibly.
(It’s the visible enforcement there that matters.)
The rough lines aren’t there so rule-lawyers know exactly what point they can push things to; they’re there so the administrators can punish clear infractions without being accused of playing politics: if the administrators need to step in, odds are there are sides forming if not formed, and a politicized punishment will only solidify those lines and fragment the community. (Eugine Nier is a great example of this.)
Standing just on this side of a line you’ve drawn is only a problem if you have a mod staff that’s way too cautious or too legalistic, which—judging from the Eugine debacle—may indeed be a problem that LW has. For most sites, though, that’s about the least challenging problem you’ll face short of a clear violation.
The cases you need to watch out for are the ones that are clearly abusive but have nothing to do with any of the rules you worked out beforehand. There are always going to be a lot of those, and the more rules you have, and the stricter they are, the more of them you’ll see (there’s the incentives thing again).
I’m aware there are ways of causing trouble that do not involve violating any rules.
I can do it without even violating the “Don’t be a dick” rule, personally. I once caused a blog to explode by being politely insistent the blog author was wrong, and being perfectly logical and consistently helpful about it. I think observers were left dumbfounded by the whole thing. I still occasionally find references to the aftereffects of the event on relevant corners of the internet. I was asked to leave, is the short of it. And then the problem got infinitely worse—because nobody could say what exactly I had done.
A substantial percentage of the blog’s readers left and never came back. The blog author’s significant other came in at some point in the mess, and I suspect their relationship ended as a result. I would guess the author in question probably had a nervous breakdown; it wouldn’t be the first, if so.
You’re right that rules don’t help at all against certain classes of people. The solution is not to do away with rules, however, but to remember that they’re not a complete solution.
I’m not saying we should do away with rules. I’m saying that there needs to be leeway to handle cases outside of the (specific) rules, with more teeth behind it than “don’t do it again”.
Rules are helpful. A ruleset outlines what you’re concerned with, and a good one nudges users toward behaving in prosocial ways. But the thing to remember is that rules, in a blog or forum context, are there to keep honest people honest. They’ll never be able to deal with serious malice on their own, not without spending far more effort on writing and adjudicating them than you’ll ever be able to spend, and in the worst cases they can even be used against you.
Begging your pardon, but I know the behavior you’re referring to; what concerns me with the increased ability to detect this behavior is the lack of a concrete definition for what the behavior is. That’s a recipe for disaster.
My impression is that the primary benefit of a concrete definition is easy communication: if my concrete definition aligns with your concrete definition, then we can each be sure that we know it, that the other person knows it, and that both of those facts are mutually known. So the worry here is that a third person comes in and we need to explain the ‘no vote manipulation’ rule to them.
I am not as impressed with algorithmic detection systems, because of how easily they can be evaded with other algorithms, especially if the mechanics of any such system will be available on GitHub.
Would we say that’s against the rules, or no?
I remember that case, and I would put that in the “downvoting five terrible politics comments” category, since it wasn’t disagreement on that topic spilling over to other topics.
Rules should also have explicit punishments. I think karma penalties are probably fair in most cases, and more extreme measures only as necessary.
My current plan is to introduce karma weights, where we can easily adjust how much an account’s votes matter, and zero out the votes of any account that engages in vote manipulation. If someone makes good comments but votes irresponsibly, there’s no need to penalize their comments or their overall account standing when we can just remove the power they’re not wielding well. (This also makes it fairly easy to fix any moderator mistakes, since disenfranchised accounts will still have their votes recorded, just not counted.)
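Sketched as a toy model (the names, storage, and numbers here are all made up for illustration; this isn’t the actual implementation):

```python
# Toy model of per-account vote weights. Every vote is stored as cast;
# only the weighted tally changes, which makes moderation reversible.
vote_weights = {}  # account_id -> weight; absent accounts default to 1.0

def set_vote_weight(account_id, weight):
    """Moderator action: 0.0 disenfranchises an account, 1.0 restores it."""
    vote_weights[account_id] = weight

def comment_score(votes):
    """votes is a list of (account_id, direction) with direction in {+1, -1}.
    A disenfranchised account's votes stay recorded but count for nothing."""
    return sum(direction * vote_weights.get(account_id, 1.0)
               for account_id, direction in votes)

# Example: zero out a vote manipulator without touching their comments.
votes = [("alice", +1), ("bob", +1), ("mallory", -1)]
set_vote_weight("mallory", 0.0)
assert comment_score(votes) == 2.0  # mallory's vote is recorded, not counted
```

Since the raw votes are never deleted, restoring the weight to 1.0 undoes the whole intervention.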
I am not as impressed with algorithmic detection systems, because of how easily they can be evaded with other algorithms, especially if the mechanics of any such system will be available on GitHub.
All security ultimately relies on some kind of obscurity; this is true. But the first pass should deal with -dumb- evil. Smart evil is its own set of problems.
I remember that case, and I would put that in the “downvoting five terrible politics comments” category, since it wasn’t disagreement on that topic spilling over to other topics.
You would. Somebody else would put it somewhere else. You don’t have a common definition. Literally no matter what moderation decision gets made in a high-enough-profile case like that, somebody is going to be left unsatisfied that it was politics that decided the case instead of rules.