pro tips for commenting on LessWrong:
If someone makes a dumb comment on your post, do not reply, just strong-downvote and move on. One strong downvote (from the person you reply to) on a few replies can be enough to get you rate-limited, like me. (Maybe people shouldn’t be allowed to strong-vote on replies to their comments or posts?)
Making some comments that get no votes is also bad, because it can push your positive-karma stuff too far back in the recent comment window to cancel out a couple of strong downvotes.
Don’t just say factually correct things unless enough people know them already; you won’t get a request for clarification or evidence, just negative agreement, and that leads to negative karma. (Maybe agreement should be displayed without strong-votes by default? That would help a bit.) If you absolutely must, you should post some links to vaguely related stuff, because people will often assume they support whatever you said.
If you do get rate-limited, you want to hit your rate limit making comments that won’t get any votes, to push the negative-karma comments out of the recent comment window.
Whether trying to follow these rules is worth it depends on how much you care about posting on LessWrong.
This strongly suggests that the rate limit mechanism is creating some extremely bad incentives and dynamics on Less Wrong.
I currently think very few people are affected by the automatic rate-limiting, though I should double-check that. It seems reasonable for us to add some history of rate-limits to the moderation page so people can confirm this for themselves.
Still, things that affect few users can have a lot of unmeasured chilling effects, though it seems a bit harder for that to happen when the consequences are relatively minor (like rate-limiting) and relatively rare.
Hi, I think this is incorrect. I had to wait 7 days to write this comment and then almost forgot to. I wrote a comment critiquing a very long post (which was later removed) and was downvoted (by a single user, I think) after justifying why I wrote the comment with AI assistance. My understanding is that a single user with enough karma power can effectively “silence” any opinion they don’t like by downvoting a few comments in an exchange.
I think the site has changed enough over the last several months that I am considering leaving. For me personally, choosing between having a conversation with a random commenter on this site vs. an AI model is just about a wash. I even hesitate to write this comment given how over-confident your comment seemed, i.e., I won’t be able to interact with this site again for another week.
My understanding is that a single user with enough karma power can effectively “silence” any opinion they don’t like by downvoting a few comments in an exchange.
No, because we also have a requirement of a minimum number of downvoters. (I think the current implementation has important flaws and I do still need to improve it; that has been on my TODO list and hopefully will get done soon.) But even in the current implementation, a single downvote can’t rate-limit you.
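A minimal sketch of how a check like this might work (the function name, data shape, and thresholds are assumptions for illustration, not LessWrong’s actual code or values):

```python
# Hypothetical sketch of a rate-limit check with a unique-downvoter
# requirement. Names and thresholds are illustrative assumptions.

RECENT_KARMA_FLOOR = -5      # assumed: limit only if recent net karma is at most this
MIN_UNIQUE_DOWNVOTERS = 4    # assumed: and at least this many distinct downvoters

def should_rate_limit(recent_votes):
    """recent_votes: (voter_id, power) pairs over the recent-comment window,
    where power is negative for downvotes."""
    net_karma = sum(power for _, power in recent_votes)
    downvoters = {voter for voter, power in recent_votes if power < 0}
    # Both conditions must hold, so a single voter can't trigger the limit
    # alone; they can only be the last straw once other downvoters exist.
    return (net_karma <= RECENT_KARMA_FLOOR
            and len(downvoters) >= MIN_UNIQUE_DOWNVOTERS)
```

On this sketch, one strong downvote can still push the net karma under the floor whenever the downvoter count is already met, which is the failure mode described under “Problem 1” below.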
Huh, that is an update for me on how quickly rate-limiting kicks in. I don’t think it’s the case that a single user can effectively silence any opinion here (none of your previous few comments were downvoted by only a single user, as far as I can tell), but having a rate-limit that harsh just because of a single exchange seems quite bad to me. I’ll talk to Raemon and Ruby about at least adjusting the values here.
See: Milton Friedman’s thermostat.
Thanks.
My current frame on “what the bad thing is here?” is less focused on “people are incentivized to do weird/bad things” and more focused on some upstream problems.
I’d say the overall tradeoff with rate limits is that there are two groups I want to distinguish between:
people writing actively mediocre/bad stuff, where the amount-that-it-clogs-up-the-conversation-space outweighs...
...people writing controversial and/or hard to evaluate stuff, which is either in-fact-good, or, where you expect that following a policy of encouraging it is good-on-net even if individual comments are wrong/unproductive.
Rate limiting is useful if the downside of group #1 is large enough to outweigh the upsides of encouraging group #2. I think it’s a pretty reasonable argument that the upside from group #2 is really really important, and that if you’re getting false positives you really need to prioritize improving the system in some way.
One option is to just accept more mediocre stuff as a tradeoff. Another option is… think more and find third-options that avoid false positives while catching true positives.
I don’t think the correct number of false positives for group #2 is zero – I think the cost of group #1 is pretty big. But I do think “1 false positive is too many” is a reasonable position to hold, and IMO even if there’s only one it still at least warrants “okay, can we somehow get a more accurate reading here?” (Looking over your recent comment history, I do think I’d probably count you in the “the system probably shouldn’t be rate-limiting you” bucket.)
Problem 1: unique-downvoter threshold isn’t good enough
I think one concrete problem is that the countermeasure against this problem...
One strong downvote (from the person you reply to) on a few replies can be enough to get you rate-limited, like me. (Maybe people shouldn’t be allowed to strong-vote on replies to their comments or posts?)
...doesn’t currently work that well. We have the “unique downvoter count” requirement to attempt to prevent the “person you’re in an argument with singlehandedly (vindictively or even accidentally) rate-limiting you” problem. But after experimenting with it more, I think this doesn’t carve reality at the joints – people who say more things get more downvoters even if they’re net upvoted. So, if you’ve written a bunch of somewhat-upvoted comments, you’ll probably have at least some downvoters, and then a single person strong-downvoting you does likely send you over the edge, because the unique-downvoter threshold has already been met.
One (maybe too-clunky) option that occurs to me here is to just distinguish between “downvoting because you thought a local comment was overrated” and “I actually think it’d be good if this user commented less overall.” We could make it so that when you downvote someone, an additional UI element pops up for “I think this person should be rate-limited”, and the minimum threshold counts the people who specifically thought you should be rate-limited, rather than the people who downvoted you for any reason.
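Continuing the sketch above, the proposed variant might count only the downvoters who explicitly checked the flag (again, all names and thresholds are hypothetical):

```python
# Sketch of the proposed variant: only downvoters who also checked
# "I think this person should be rate-limited" count toward the threshold.
# Names and thresholds are illustrative assumptions.

RECENT_KARMA_FLOOR = -5
MIN_RATE_LIMIT_FLAGS = 4     # assumed: distinct voters who checked the flag

def should_rate_limit_v2(recent_votes):
    """recent_votes: (voter_id, power, flagged) triples, where flagged is
    True if the voter also checked the rate-limit box."""
    net_karma = sum(power for _, power, _ in recent_votes)
    flaggers = {voter for voter, power, flagged in recent_votes
                if power < 0 and flagged}
    # "This comment is overrated" downvotes still affect karma, but no
    # longer count toward the rate-limit threshold.
    return (net_karma <= RECENT_KARMA_FLOOR
            and len(flaggers) >= MIN_RATE_LIMIT_FLAGS)
```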
Problem 2: technical, hard to evaluate stuff
Sometimes a comment is making a technical point (or some manner of “requires a lot of background knowledge to evaluate” point). You noted a comment where, from your current vantage point, you think you were making a straightforward factually correct claim, and people downvoted out of ignorance.
I think this is a legitimately tricky problem (and would be a problem with karma even if we weren’t using it for rate-limiting).
It’s a problem because we also have cranks who make technical-looking points but are in fact confused, and I think the cost of having a bunch of them around drives away people doing “real work.” I think this is sort of a cultural problem, but the difficulty lives in the territory (i.e. there’s not a simple cultural or programmatic change I can think of to improve the status quo, but I’m interested if people have ideas).
Complaining about getting rate-limited made me no longer rate-limited, so I guess it’s a self-correcting system...???
two groups I want to distinguish between
I agree that some tradeoff here is inevitable.
think more and find third-options that avoid false positives while catching true positives
I think that’s possible.
I don’t think the recent comment window was well designed. If you’re going to use a window, IMO a vote-count window would be better, e.g.: look backwards until you hit 400 cumulative karma votes, with some exponential downweighting.
I also think the strong votes are weighted too heavily. Holding a button a little longer doesn’t mean somebody’s opinion should be counted as 6+ times as important, IMO. Maybe normal votes should be weighted at 1/2 of whatever a strong vote is worth.
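A sketch of what these two suggestions might look like together: a window defined by cumulative vote weight rather than a fixed comment count, exponential downweighting of older votes, and normal votes worth half a strong vote (all constants are made up for illustration):

```python
import math

# Sketch of bhauth's suggestions; constants are illustrative assumptions.
WINDOW_KARMA = 400    # look backwards until this much cumulative vote weight
DECAY = 0.01          # assumed exponential downweighting rate per vote of age
STRONG_WEIGHT = 2     # a strong vote counts double...
NORMAL_WEIGHT = 1     # ...a normal vote, rather than 6+ times as much

def windowed_karma(votes_newest_first):
    """votes_newest_first: (sign, is_strong) pairs, most recent first,
    where sign is +1 for an upvote and -1 for a downvote."""
    total, covered = 0.0, 0
    for age, (sign, is_strong) in enumerate(votes_newest_first):
        weight = STRONG_WEIGHT if is_strong else NORMAL_WEIGHT
        total += sign * weight * math.exp(-DECAY * age)  # older votes count less
        covered += weight
        if covered >= WINDOW_KARMA:  # stop once 400 karma-votes are covered
            break
    return total
```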
when you downvote someone, an additional UI element pops up
I don’t think that’s a good idea.
It’s a problem because we also have cranks who make technical-looking points but are in fact confused, and I think the cost of having a bunch of them around drives away people
If you find a solution, maybe let some universities know about it...or some CEOs...or some politicians...
Why? (I’m not very attached to the idea, but, what are you imagining going wrong?)
It seems annoying.
I don’t think people will use it objectively.
People won’t generally go through the history of the user in question; they won’t have the context needed to distinguish the cases you’re asking them to.
Are you sure this is true? This post says:
Agree/disagree voting does not translate into a user’s or post’s karma — its sole function is to communicate agreement/disagreement. It has no other direct effects on the site or content visibility (i.e. no effect on sorting algorithms).
Mods, has this been changed?
I meant: negative agreement on (fact-based) posts leads (via behavior of other voters) to negative karma.
I assumed Bhauth meant ‘people see it has a disagree score and then downvote it’. (Agree score does not directly translate into karma and doesn’t count in the auto rate-limit.)