I do have access; I just felt like waiting and replying here. By the way, if I comment 20 times on my shortform, will the rate-limit stop? This feels like an obvious exploit in the rate-limiting algorithm, but it’s still possible that I don’t know how it works.
It is to gatekeep in service of keeping lesswrong’s quality high
Then outright banning would work better than rate-limiting without feedback like this. If people contribute in good faith, they need to know exactly what other people approve of; vague feedback doesn’t help alignment very much. And while an Eternal September is dangerous, you likely don’t want a community dominated by veteran users who are hostile to new users. I’ve seen this in videogame communities, and it leads to stagnation.
It confuses me that you got 10 upvotes for the contents of your reply (I can’t find fault with the writing, formatting, or tone), but it’s easily explained by assuming that users here don’t act much differently than they do on Reddit, which would be sad.
I already read the new users guide. Perhaps I didn’t put it clearly enough with “I think people should take responsibility for their words”, but it was the new users guide which told me to post. I read the “Is LessWrong for you?” section, and it told me that LessWrong was likely for me. I read the “well-kept garden” post in the past and found myself agreeing with its message. This is why I felt misled, and why I don’t think linking these two sections makes for a good counter-argument (after all, I attempted to communicate that I had already taken them into account). I thought LW should take responsibility for what it told me, as trusting it is what got me rate-limited. That’s the core message; the rest of my reply just defends my approach of commenting.
For issues interesting enough to have this problem, there is no ground source of truth that humans can access
To avoid being misunderstood completely, I’d need a disclaimer like this at the top of every comment I make, which is clearly not feasible:
Humanity is somewhat rational now, but our shared knowledge is still filled with old errors made before we learned how to think. Many core assumptions are simply wrong. But correcting these beliefs would cascade, collapsing beliefs that people hold dear or touching on controversial subjects. The truth doesn’t stand a chance against politics, morality, and social norms. Sadly, if you want to prevent society from collapsing, you will need to grapple a bit with these three subjects. But that will very likely lead to downvotes.
A lot of things are poorly explained but nonetheless true. Other things are very well argued but nonetheless false. “Manifesting the future by visualizing it” is pseudoscience, but it has positive utility. “We must make new laws to keep everyone safe” sounds reasonable, but after 1000 iterations it should have dawned on us that the 1001st law isn’t going to save us. I think the reasonable sentence would net you positive karma on here, while the pseudoscience would get called worthless.
My logical intelligence is much higher than my verbal, and most people who are successful in social and academic areas of life are the complete opposite. Nonetheless, some of us can see patterns that other people just can’t. Human beings also have a lot in common with AI: we’re black boxes. Our instincts are discriminatory and biased, but only because those who weren’t went extinct. Those who attempt to get rid of biases should first know what they are good for (Chesterton’s fence), yet I can’t see a single movement in society advocating for change which actually understands what it’s doing. But people don’t like hearing this.

As of right now, the black box (intuition, instinct, etc.) is still smarter than the explainable truth. This will change as people are taught how to disregard the black box and even break it. But this also goes against the consensus, in a way that I assume will be considered “bad quality” (some people might upvote what they disagree with, but I don’t think that extends to many types of disagreement).
And I’m also only human. Rate-limited users are perhaps the bottom 5% of posters? But I’m above that; I’m just grappling with subjects which are beyond my level. You told me to read the rules, and that’s a lot easier. I could also get lots of upvotes if I engaged with subjects that I’m overqualified for. But as with AGI, some subjects are beyond our abilities, yet we can’t afford to ignore them, so we’re forced to make fools of ourselves trying.
Thanks for your reply!