feature proposal: when someone is rate limited, they can still write comments. their comments are auto-delayed until the next time they’d be unratelimited. they can queue up to k comments before it behaves the same as it does now. I suggest k be 1. I expect this would reduce the emotional banneyness-feeling by around 10%.
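The proposed queue behavior can be sketched in a few lines. This is a minimal illustration with hypothetical names, not LessWrong's actual API: up to k comments queue while limited, auto-post when the limit next expires, and past k the submission fails exactly as it does today.

```python
from collections import deque

class RateLimitedCommentQueue:
    """Sketch of the proposal (hypothetical names, not LessWrong's API):
    a rate-limited user may queue up to k comments; queued comments
    auto-post when the limit expires; beyond k, behavior is unchanged."""

    def __init__(self, k: int = 1):
        self.k = k
        self.queue = deque()

    def try_submit(self, comment: str, rate_limited: bool) -> str:
        if not rate_limited:
            return "posted"
        if len(self.queue) < self.k:
            self.queue.append(comment)
            return "queued"
        return "rejected"  # same behavior as the current rate limit

    def on_limit_expired(self) -> list:
        posted = list(self.queue)  # auto-post in submission order
        self.queue.clear()
        return posted
```

With k = 1 as suggested, a second comment written while limited is rejected just as it is today.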
feature proposal: when someone is ratelimited, the moderators can give a public reason and/or a private reason. if the reason is public, it invites public feedback as well as indicating to users passing by what things might get moderated. I would encourage moderators to give both positive and negative reasoning: why they appreciate the user’s input, and what they’d want to change. I expect this would reduce banneyness feeling by 3-10%, though it may increase it.
feature proposal: make the ui of the ratelimit smaller. I expect this would reduce emotional banneyness-feeling by 2-10%, as emotional valence depends somewhat on literal visual intensity, though this is only a fragment of it.
feature proposal: in the ratelimit indicator, add some of the words you wrote here, such as “this is not equivalent to a general ban from LessWrong. Your comments are still welcome. The moderators will likely be highly willing to give feedback on intercom in the bottom right.”
feature proposal: make karma/(comments+posts) visible on user profile, make total karma require hover of the karma/(comments+posts) number to view.
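The proposed stat is just a ratio; a sketch follows. The zero-contribution guard is my assumption, since the proposal doesn't say what a brand-new account should display.

```python
def karma_per_contribution(total_karma: int, comments: int, posts: int) -> float:
    """The proposed profile stat: karma divided by (comments + posts).
    Returning 0.0 for accounts with no contributions is an assumption,
    not part of the proposal."""
    contributions = comments + posts
    if contributions == 0:
        return 0.0
    return total_karma / contributions
```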
I strongly suspect that spending time building features for rate limited users is not valuable enough to be worthwhile. I suspect this mainly because:
There aren’t a lot of rate limited users who would benefit from it.
The value that the rate limited users receive is marginal.
It’s unclear whether doing things that benefit users who have been rate limited is a good thing.
I don’t see any sorts of second order effects that would make it worthwhile, such as non-rate-limited people seeing these features and being more inclined to be involved in the community because of them.
There are lots of other very valuable things the team could be working on.
sure. I wouldn’t propose bending over backwards to do anything. I suggested some things, up to the team what they do. the most obviously good one is just editing some text, second most obviously good one is just changing some css. would take 20 minutes.
Features to benefit people accused of X may benefit mostly people who have been unjustly accused. So looking at the value to the entire category “people accused of X” may be wrong. You should look at the value to the subset that it was meant to protect.
feature proposal: when someone is rate limited, they can still write comments. their comments are auto-delayed until the next time they’d be unratelimited. they can queue up to k comments before it behaves the same as it does now. I suggest k be 1. I expect this would reduce the emotional banneyness-feeling by around 10%.
If (as I suspect is the case) one of the in-practice purposes or benefits of a limit is to make it harder for an escalation spiral to continue via comments written in a heated emotional state, delaying the reading destroys that effect compared to delaying the writing. If the limited user is in a calm state and believes it’s worth it to push back, they can save their own draft elsewhere and set their own timer.
If someone is rate-limited because their posts are perceived as low quality, and they write a comment ahead of time, it’s good when they reread that comment before posting. If posting from the queue is automatic, that rereading doesn’t happen the same way it does when someone keeps their queue in their Google Doc (or however they organize their thoughts) for copy-pasting.
The moderators will likely be highly willing to give feedback on intercom in the bottom right.”
The way I interpret this, you only get this if you are a “high ROI” contributor. It’s quite possible that you get specific feedback, including the post they don’t like, if you are considered “high ROI”. I complained about bhauth rate limiting me by abusing downvotes in a discussion, and never got a reply. In fact I have never received a reply via intercom about anything I asked. Not even a “contribute to the site more and we’ll help, k bye”.
Which is fine, but habryka also admitted that
have communicated that we don’t think you are breaking even. Here are some messages we sent to you:
This was never said, I would have immediately deactivated my account had this occurred.
Old users are owed explanations, new users are (mostly) not
This was not done, and habryka admitted this wasn’t done. Also raemon gave the definition for an “established” user and it was an extremely high bar; only a small fraction of all users will meet it.
Feedback helps a bit, especially if you are young, but usually doesn’t
A little casual age discrimination, anyone over 22 isn’t worth helping.
As such, we don’t have laws.
I thought ‘rationality’ was about mechanistic reasoning, not just “do whatever we want on a whim”. As in you build a model on evidence, write it down, data science, take actions based on what the data says, not what you feel. A really great example of moderation using this is https://communitynotes.x.com/guide/en/under-the-hood/ranking-notes . If you don’t do this, or at least use a prediction market, won’t you probably be wrong?
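The linked Community Notes guide describes a matrix-factorization system; what follows is only a toy illustration of the "bridging" intuition behind it, not the real algorithm: a note ranks as helpful only when raters from otherwise-disagreeing clusters both rate it helpful.

```python
def bridged_score(ratings):
    """Toy illustration of the bridging idea behind Community Notes
    ranking (the real system uses matrix factorization; see the linked
    guide). ratings: list of (cluster_id, found_helpful) pairs.
    Scoring a note by its *worst* cluster means it only ranks high when
    raters who usually disagree both find it helpful."""
    by_cluster = {}
    for cluster, helpful in ratings:
        yes, total = by_cluster.get(cluster, (0, 0))
        by_cluster[cluster] = (yes + (1 if helpful else 0), total + 1)
    if not by_cluster:
        return 0.0
    return min(yes / total for yes, total in by_cluster.values())
```

A note rated helpful only by one side of a divide scores 0 here, however popular it is with that side; that is the mechanistic, data-driven property the comment above is pointing at.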
Uncertainty is costly, and I think it’s worth a lot of my time to help people understand to what degree investing in LessWrong makes sense for them.
Wasn’t done for me. Habryka says that they like to communicate with us in ‘subtle’ ways, give ‘hints’. Many of us have some degree of autism and need it spelled out.
I don’t really care about all this, ultimately it is just a game. I just feel incredibly disappointed. I thought rationality was about being actually correct, about being smart and beating the odds, and so on.
I thought that mods like Raemon etc all knew more than me, that I was actually doing something wrong that could be fixed. Not, well, whatever this is.
That rationality was an idea that ultimately just says next_action = argmax over considered actions of EV(action), where you swap out how you estimate EV for whatever has the strongest predictive power. That we shouldn’t be stuck with “the sequences” if they are wrong.
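The decision rule in the comment above, written out as code (function and argument names are mine, purely for illustration):

```python
def choose_next_action(actions, estimate_ev):
    """Pick the considered action with the highest estimated expected
    value. estimate_ev is deliberately a parameter: per the comment
    above, you swap it for whatever estimator currently has the
    strongest predictive power."""
    return max(actions, key=estimate_ev)
```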
This was not done, and habryka admitted this wasn’t done
I’m interested in seeing direct evidence of this from DMs. I expect direct evidence would convince me it was in fact done.
If, you know, AI doesn’t kill us first. Stopped clocks and all.
Your ongoing assumption that everyone here shares the same beliefs about this continues to be frustrating, though understandable from a less vulcan perspective. Most of your comment appears to be a reply to habryka, not me.
I am confused. The quotes I sent are quotes from DMs we sent to Gerald. Here they are again just for posterity:
You’ve been commenting fairly frequently, and my subjective impression as well as voting patterns suggest most people aren’t finding your comments sufficiently helpful.
And:
To conclude, the rate limit is your warning. Currently I feel your typical comments (even not downvoted) ones aren’t amazing, and now that we’re prioritizing raising standards due the dramatic rise in new users, we’re also getting tougher on contributions from established users that don’t feel like they’re meeting the bar either.
I think we have more but they are in DMs with just Raemon in it, but the above IMO clearly communicate “your current contributions are not breaking even”.
I didn’t draw that conclusion. Feel free to post in Raemon’s DMs.
Again, what disappoints me is that ultimately this just seems to be a private fiefdom where you make decisions on a whim. Not altogether different from the new Twitter, come to think of it.
And that’s fine, and I don’t have to post here. I just feel super disappointed because I’m not seeing anything like rationality here in moderation, just tribalism and the normal outcomes of ‘absolute power corrupts absolutely’. “A complex model that I can’t explain to anyone” is ultimately not scalable, and frankly not a very modern way to do it. It simplifies to a gut feeling, and it cannot moderate better than the moderator’s own knowledge, which is where it fails on topics the moderators don’t actually understand.
You’ve been commenting fairly frequently, and my subjective impression as well as voting patterns suggest most people aren’t finding your comments sufficiently helpful.
why moderate in this weird way, different from essentially everywhere else?
I don’t see any significant evidence that the moderation here is weird or unusual. Most forums or chats I’ve encountered do not have bright line rules. Only very large forums do, and my impression is that their quality is worse for it. I do not wish to justify this impression at this time, this will likely be near my last comment on this post.
True—your comment is more or less a duplicate of Rana Dexsin’s, which convinced me of this claim.
But it kinda looks more like a niche organization with the same ultimate fate of : https://www.lesswrong.com/posts/DtcbfwSrcewFubjxp/the-rationalists-of-the-1950s-and-before-also-called
ah. then indeed, I am in fact convinced.
This was a form response used word for word to many others. Evidence: https://www.lesswrong.com/posts/cwRCsCXei2J2CmTxG/lw-account-restricted-ok-for-me-but-not-sure-about-lesswrong
https://www.lesswrong.com/posts/HShv7oSW23RWaiFmd/rate-limiting-as-a-mod-tool
By the way, you put “locally invalid” on a different comment. Care to explain which element is invalid?
I went over:
Permitted norms on reddit, where I have 120k karma and have never been banned.
Can you justify a complex theory on weak evidence?