“The moderators will likely be highly willing to give feedback on intercom in the bottom right.”
The way I interpret this, you only get this if you are a “high ROI” contributor. It’s quite possible that you do get specific feedback, including which post they don’t like, if you are considered “high ROI”. I complained that bhauth was rate limiting me by abusing downvotes in a discussion, and never got a reply. In fact I never received a reply on intercom about anything I asked. Not even a “contribute to the site more and we’ll help, k bye”.
Which is fine, but habryka also admitted that
have communicated that we don’t think you are breaking even. Here are some messages we sent to you:
This was never said; I would have immediately deactivated my account had this occurred.
Old users are owed explanations, new users are (mostly) not
This was not done, and habryka admitted this wasn’t done. Also, Raemon gave the definition of an “established” user, and it was an extremely high bar; only a small fraction of all users will meet it.
Feedback helps a bit, especially if you are young, but usually doesn’t
A little casual age discrimination: anyone over 22 isn’t worth helping.
As such, we don’t have laws.
I thought ‘rationality’ was about mechanistic reasoning, not just “do whatever we want on a whim”. As in: you build a model from evidence, write it down, do the data science, and take actions based on what the data says, not what you feel. A really great example of moderation done this way is https://communitynotes.x.com/guide/en/under-the-hood/ranking-notes . If you don’t do this, or at least use a prediction market, won’t you probably be wrong?
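As a toy illustration of what “write the model down” could mean here (this has nothing to do with LessWrong’s actual implementation; the field names and thresholds are invented), a rate-limit decision can be an explicit, auditable function of vote data rather than a gut call:

```python
from dataclasses import dataclass

@dataclass
class UserStats:
    """Hypothetical per-user voting data over some recent window."""
    net_karma: int   # sum of votes on the user's recent comments
    n_comments: int  # number of comments in the window

def rate_limit_decision(stats: UserStats) -> tuple[bool, str]:
    """Return (should_rate_limit, reason).

    The thresholds are made up for illustration; the point is that the
    rule is written down, so the decision and its inputs can be shown
    to the affected user and argued about on the merits.
    """
    avg = stats.net_karma / max(stats.n_comments, 1)
    if stats.n_comments >= 10 and avg < -0.5:
        return True, f"average karma per comment {avg:.2f} is below -0.5 over {stats.n_comments} comments"
    return False, f"average karma per comment {avg:.2f} meets the bar"
```

Because the rule is explicit, a rate-limited user can be told exactly which numbers triggered it, and the rule itself can be tested against historical data instead of defended by feel.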
Uncertainty is costly, and I think it’s worth a lot of my time to help people understand to what degree investing in LessWrong makes sense for them.
This wasn’t done for me. Habryka says that they like to communicate with us in ‘subtle’ ways and give ‘hints’. Many of us have some degree of autism and need it spelled out.
I don’t really care about all this; ultimately it is just a game. I just feel incredibly disappointed. I thought rationality was about being actually correct, about being smart and beating the odds, and so on.
I thought that mods like Raemon etc. all knew more than me, that I was actually doing something wrong that could be fixed. Not, well, what it is.
That rationality was an idea that ultimately just says next_action = argmax over considered actions of estimated EV(action), where you swap out how you estimate EV for whatever has the strongest predictive power. That we shouldn’t be stuck with “the sequences” if they are wrong.
This was not done, and habryka admitted this wasn’t done
I’m interested in seeing direct evidence of this from DMs. I expect direct evidence would convince me it was in fact done.
If, you know, AI doesn’t kill us first. Stopped clocks and all.
Your ongoing assumption that everyone here shares the same beliefs about this continues to be frustrating, though understandable from a less Vulcan perspective. Most of your comment appears to be a reply to habryka, not me.
I am confused. The quotes I sent are quotes from DMs we sent to Gerald. Here they are again just for posterity:
You’ve been commenting fairly frequently, and my subjective impression as well as voting patterns suggest most people aren’t finding your comments sufficiently helpful.
And:
To conclude, the rate limit is your warning. Currently I feel your typical comments (even the non-downvoted ones) aren’t amazing, and now that we’re prioritizing raising standards due to the dramatic rise in new users, we’re also getting tougher on contributions from established users that don’t feel like they’re meeting the bar either.
I think we have more, but they are in DMs with just Raemon in them; the above IMO clearly communicate “your current contributions are not breaking even”.
I didn’t draw that conclusion. Feel free to post in Raemon’s DMs.
Again, what I am disappointed by is that ultimately this just seems to be a private fiefdom where you make decisions on a whim. Not altogether different from the new Twitter, come to think of it.
And that’s fine; I don’t have to post here. I just feel super disappointed because I’m not seeing anything like rationality in the moderation here, just tribalism and the normal outcome of ‘absolute power corrupts absolutely’. “A complex model that I can’t explain to anyone” is ultimately not scalable and frankly not a very modern way to do it. It just simplifies to a gut feeling, and it cannot moderate better than the moderator’s own knowledge, which is where it fails on topics the moderators don’t actually understand.
But it kinda looks more like a niche organization with the same ultimate fate as the one described in https://www.lesswrong.com/posts/DtcbfwSrcewFubjxp/the-rationalists-of-the-1950s-and-before-also-called
I think we have more but they are in DMs with just Raemon in it, but the above IMO clearly communicate “your current contributions are not breaking even”.
ah. then indeed, I am in fact convinced.
This was a form response used word for word to many others. Evidence: https://www.lesswrong.com/posts/cwRCsCXei2J2CmTxG/lw-account-restricted-ok-for-me-but-not-sure-about-lesswrong and https://www.lesswrong.com/posts/HShv7oSW23RWaiFmd/rate-limiting-as-a-mod-tool