My current frame on “what’s the bad thing here?” is less focused on “people are incentivized to do weird/bad things” and more focused on some upstream problems.
I’d say the overall tradeoff with rate limits is that there are two groups I want to distinguish between:
people writing actively mediocre/bad stuff, where the amount-that-it-clogs-up-the-conversation-space outweighs…
…people writing controversial and/or hard-to-evaluate stuff, which is either in-fact-good, or where you expect that following a policy of encouraging it is good-on-net even if individual comments are wrong/unproductive.
Rate limiting is useful if the downside of group #1 is large enough to outweigh the upsides of encouraging group #2. I think it’s a pretty reasonable argument that the upside from group #2 is really really important, and that if you’re getting false positives you really need to prioritize improving the system in some way.
One option is to just accept more mediocre stuff as a tradeoff. Another option is… think more and find third-options that avoid false positives while catching true positives.
I don’t think the correct number of false-positives for group #2 is zero – I think the cost of group #1 is pretty big. But I do think “1 false positive is too many” is a reasonable position to hold, and IMO even if there’s only one it still at least warrants “okay, can we somehow get a more accurate reading here?” (Looking over your recent comment history, I do think I’d probably count you in the “the system probably shouldn’t be rate limiting you” bucket.)
Problem 1: unique-downvoter threshold isn’t good enough
I think one concrete problem is that the countermeasure against this problem...
One strong downvote (from the person you reply to) on a few replies can be enough to get you rate-limited, like me. (Maybe people shouldn’t be allowed to strong-vote on replies to their comments or posts?)
...doesn’t currently work that well. We have the “unique downvoter count” requirement to attempt to prevent the “person you’re in an argument with singlehandedly, vindictively (or even accidentally) rate-limiting you” problem. But after experimenting with it more, I think this doesn’t carve reality at the joints – people who say more things get more downvoters even if they’re net upvoted. So if you’ve written a bunch of somewhat-upvoted comments, you’ll probably have at least some downvoters, and then a single person strong-downvoting you will likely send you over the edge, because the unique-downvoter threshold has already been met.
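To make that failure mode concrete, here’s a toy sketch – the thresholds and vote values are invented for illustration, not the site’s actual numbers:

```python
# Toy model of the rule described above: rate-limit when recent net karma
# is negative AND enough distinct people have downvoted you.
# All thresholds and vote weights here are made up for illustration.

def is_rate_limited(comments, min_unique_downvoters=3):
    """comments: list of (net_karma, downvoter_ids) for recent comments."""
    net_karma = sum(karma for karma, _ in comments)
    downvoters = set()
    for _, ids in comments:
        downvoters.update(ids)
    return net_karma < 0 and len(downvoters) >= min_unique_downvoters

# A prolific, net-upvoted commenter still collects unique downvoters:
history = [(4, {"a"}), (6, {"b"}), (3, set()), (2, {"c"})]
print(is_rate_limited(history))  # False: net karma is +15

# Now one person ("z") strong-downvotes two replies at -10 each. The
# unique-downvoter guard was already satisfied by a, b, and c, so a
# single voter flips the outcome.
history += [(-10, {"z"}), (-10, {"z"})]
print(is_rate_limited(history))  # True: net karma is now -5
```

The point being: the unique-downvoter guard stops binding once you’ve commented enough, so the karma condition alone decides.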
One (maybe too-clunky) option that occurs to me here is to distinguish between “downvoting because you thought a local comment was overrated” vs. “I actually think it’d be good if this user commented less overall.” We could make it so that when you downvote someone, an additional UI element pops up for “I think this person should be rate limited”, and the minimum threshold counts only the people who specifically thought you should be rate-limited, rather than people who downvoted you for any reason.
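A toy version of that variant – the field names and the threshold of 3 are hypothetical, not a spec:

```python
# Toy version of the proposed rule: only downvotes carrying an explicit
# "rate-limit this user" flag count toward the threshold; plain karma
# downvotes are ignored for rate-limiting purposes.

def should_rate_limit(votes, min_flaggers=3):
    """votes: list of {"voter": id, "karma": int, "flag": bool}."""
    flaggers = {v["voter"] for v in votes if v["flag"]}
    return len(flaggers) >= min_flaggers

votes = [
    {"voter": "a", "karma": -2, "flag": False},  # "this comment is overrated"
    {"voter": "b", "karma": -9, "flag": True},   # "this user should slow down"
    {"voter": "c", "karma": -1, "flag": True},
]
print(should_rate_limit(votes))  # False: only two explicit flags, karma ignored
```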
Problem 2: technical, hard to evaluate stuff
Sometimes a comment is making a technical point (or some manner of “requires a lot of background knowledge to evaluate” point). You noted a comment where, from your current vantage point, you think you were making a straightforward factually correct claim, and people downvoted out of ignorance.
I think this is a legitimately tricky problem (and would be a problem with karma even if we weren’t using it for rate-limiting).
It’s a problem because we also have cranks who make technical-looking points but are in fact confused, and I think the cost of having a bunch of them around drives away people doing “real work.” I think this is sort of a cultural problem, but the difficulty lives in the territory (i.e. there’s not a simple cultural or programmatic change I can think of that would improve the status quo, but I’m interested if people have ideas).
Complaining about getting rate-limited made me no longer rate-limited, so I guess it’s a self-correcting system...???
two groups I want to distinguish between
I agree that some tradeoff here is inevitable.
think more and find third-options that avoid false positives while catching true positives
I think that’s possible.
I don’t think the recent-comment window was well-designed. If you’re going to use a window, IMO a vote-count window would be better, e.g.: look backwards until you hit 400 cumulative karma votes, with some exponential downweighting.
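A sketch of what that window could look like – I’m reading “400 cumulative karma votes” as total absolute vote weight, and the decay rate is my own guess:

```python
# Sketch of a vote-count window: walk votes newest-first, downweight older
# votes exponentially, and stop once cumulative absolute vote weight hits
# a budget. The 400 comes from the comment above; the decay rate and the
# reading of "cumulative karma votes" are assumptions.

def windowed_score(votes, budget=400, decay=0.99):
    """votes: newest-first list of signed vote weights; returns weighted sum."""
    score = 0.0
    weight = 1.0
    cumulative = 0
    for vote in votes:
        score += vote * weight
        weight *= decay            # older votes count for less
        cumulative += abs(vote)
        if cumulative >= budget:   # the window is "full" -- stop looking back
            break
    return score
```

With `decay=1.0` this reduces to a plain trailing window over the last 400 points’ worth of votes; the downweighting just makes the cutoff softer.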
I also think strong votes are weighted too heavily. Holding a button a little longer doesn’t mean somebody’s opinion should count as 6+ times as important, IMO. Maybe normal votes should be weighted at half of whatever a strong vote is worth.
when you downvote someone, an additional UI element pops up
I don’t think that’s a good idea.
It’s a problem because we also have cranks who make technical-looking points but are in fact confused, and I think the cost of having a bunch of them around drives away people
If you find a solution, maybe let some universities know about it...or some CEOs...or some politicians...
Thanks.
Why? (I’m not very attached to the idea, but what are you imagining going wrong?)
It seems annoying.
I don’t think people will use it objectively.
People won’t generally go through the history of the user in question; they won’t have the context needed to distinguish the cases you’re asking them to.