First of all, thank you, this was exactly the type of answer I was hoping for. Also, if you still have the ability to comment freely on your short form, I’m happy to hop over there.
You’ve requested people stop sugarcoating so I’m going to be harsher than normal. I think the major disagreement lies here:
> But the entire point of punishment is teaching
I do not believe the mod team’s goal is to punish individuals. It is to gatekeep in service of keeping LessWrong’s quality high. Anyone who happens to emerge from that process making good contributions is a bonus, but not the goal.
How well is this signposted? The new user message says
Followed by a cripplingly long New User Guide.
I think that message went up last summer, but I’m not sure exactly when. You might have joined before it did (although in that case you would have been on the site when the equivalent post went up).
> Going against the consensus is *probably* enough to get one rate-limited, even if they’re correct
For issues interesting enough to have this problem, there is no ground source of truth that humans can access. There is human judgement, and a long process that will hopefully lead to better understanding eventually. Mods and readers are not contacting an oracle, hearing that a post is true, and downvoting it anyway because they dislike it. They’re reading content and deciding whether it is well formed (for regular karma) and whether they agree with it (for agreement votes, and probably also regular karma, although IIRC the correlation between the two was lower than I expected; LessWrong voters love to upvote high-quality things they disagree with).
If you have a system that is more truth-tracking, I would love to hear it, and I’m sure the team would too. But any system will have to take into account the fact that there is no magical source of truth for many important questions, so power will ultimately rest on human judgement.
On a practical level:
> My comments can be shorter or easier to understand, but not both. Most people will communicate big ideas by linking to them, linking 20 pages is much more acceptable than writing them in a comment. But these are my own ideas, there’s no links.
Easier to understand. LessWrong is more tolerant of length than most of the internet.
When I need to spend many pages on something boring and detailed, I often write a separate post for it, which I link to in the real post. I realize you’re rate limited, but rate limits don’t apply to comments on your own posts (short form is in a weird middle ground, but nothing stops you from creating your own post to write on). Or create your own blog elsewhere and link to it.
Thanks for your reply!
I do have access; I just felt like waiting and replying here. By the way, if I comment 20 times on my shortform, will the rate-limit stop? This feels like an obvious exploit in the rate-limiting algorithm, but it’s still possible that I don’t know how it works.
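Here is a minimal sketch of the loophole I’m imagining, assuming (purely for illustration; I don’t know LessWrong’s actual implementation) that the limit is triggered by karma over a window of recent comments and that comments on your own posts or shortform are never blocked. Every name, rule, and number below is made up.

```python
# Hypothetical sketch only; not LessWrong's actual code. All rules, names,
# and thresholds below are made up to illustrate the suspected loophole.
from dataclasses import dataclass, field


@dataclass
class Comment:
    karma: int


@dataclass
class User:
    recent: list = field(default_factory=list)  # newest comments first

    def recent_karma(self, window: int = 20) -> int:
        # Karma summed over the last `window` comments, whatever they were on.
        return sum(c.karma for c in self.recent[:window])

    def is_rate_limited(self, threshold: int = -5) -> bool:
        # Assumed rule: limited when recent-comment karma drops below a threshold.
        return self.recent_karma() < threshold

    def can_comment(self, on_own_post: bool) -> bool:
        # Assumed rule: the limit never applies to your own posts/shortform.
        return on_own_post or not self.is_rate_limited()


user = User(recent=[Comment(-3), Comment(-4)])
print(user.can_comment(on_own_post=False))  # False: recent karma is -7

# The suspected exploit: 20 shortform comments are always allowed, and if they
# also enter the karma window, they push the downvoted comments out of it.
user.recent = [Comment(0) for _ in range(20)] + user.recent
print(user.can_comment(on_own_post=False))  # True again
```

If shortform comments don’t feed into whatever window the real limiter uses, the trick above does nothing, which is exactly the part I don’t know.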
> It is to gatekeep in service of keeping LessWrong’s quality high
Then outright banning would work better than rate-limiting without feedback of this kind. If people contribute in good faith, they need to know just what other people approve of; vague feedback doesn’t help alignment very much. And while an Eternal September is dangerous, you likely don’t want a community dominated by veteran users who are hostile to new users either. I’ve seen this in videogame communities, and it leads to forms of stagnation.
It would confuse me if you got those 10 upvotes for the contents of your reply (I can’t find fault with the writing, formatting, or tone), but it’s easily explained by assuming that users here don’t act much differently than they do on Reddit, which would be sad.
I already read the new users guide. Perhaps I didn’t put it clearly enough with “I think people should take responsibility for their words”, but it was the new users guide which told me to post. I read the “Is LessWrong for you?” section, and it told me that LessWrong was likely for me. I read the “well-kept garden” post in the past and found myself agreeing with its message. This is why I felt misled, and why I don’t think linking these two sections makes for a good counter-argument (after all, I attempted to communicate that I had already taken them into account). I thought LW should take responsibility for what it told me, as trusting it is what got me rate-limited. That’s the core message; the rest of my reply just defends my approach to commenting.
> For issues interesting enough to have this problem, there is no ground source of truth that humans can access
In order not to be misunderstood completely, I’d need a disclaimer like this at the top of every comment I make, which is clearly not feasible:
> Humanity is somewhat rational now, but our shared knowledge is still filled with old errors which were made before we learned how to think. Many core assumptions are just wrong. But if these errors were corrected, the resulting cascade would collapse some of the beliefs that people hold dear, or touch upon controversial subjects. The truth doesn’t stand a chance against politics, morality, and social norms. Sadly, if you want to prevent society from collapsing, you will need to grapple a bit with these three subjects. But that will very likely lead to downvotes.
A lot of things are poorly explained, but nonetheless true. Other things are very well argued, but nonetheless false. “Manifesting the future by visualizing it” is pseudoscience, but it has positive utility. “We must make new laws to keep everyone safe” sounds reasonable, but after 1,000 iterations it should have dawned on us that the 1,001st law isn’t going to save us. I think the reasonable-sounding sentence would net you positive karma on here, while the pseudoscience would get called worthless.
My logical intelligence is much higher than my verbal intelligence, and most people who are successful in the social and academic areas of life are the complete opposite. Nonetheless, some of us can see patterns that other people just can’t. Human beings also have a lot in common with AI: we’re black boxes. Our instincts are discriminatory and biased, but only because the people who weren’t went extinct. Those who attempt to get rid of biases should first know what they are good for (Chesterton’s fence). But I can’t see a single movement in society advocating for change which actually understands what it’s doing. People don’t like hearing this.

As of right now, the black box (intuition, instinct, etc.) is still smarter than the explainable truth. This will change as people are taught how to disregard the black box and even break it. But this also goes against the consensus, in a way that I assume will be considered “bad quality” (some people might upvote what they disagree with, but I don’t think that goes for many types of disagreement).
And I’m also only human. Rate-limited users are perhaps the bottom 5% of posters? But I’m above that; I’m just grappling with subjects which are beyond my level. You told me to read the rules; that’s a lot easier. I could also get lots of upvotes if I engaged with subjects that I’m overqualified for. But as with AGI, some subjects are beyond our abilities, yet I don’t think we can afford to ignore them, so we’re forced to make fools of ourselves trying.