We have really given you a lot of feedback and have communicated that we don’t think you are breaking even. Here are some messages we sent to you:
April 7th 2023:
You’ve been commenting fairly frequently, and my subjective impression as well as voting patterns suggest most people aren’t finding your comments sufficiently helpful.
And from Ruby:
In the “wrong” category, some of your criticisms of the Time piece post seemed to be failing to operate probabilistically, which is a fundamental basic I expect from LW users. “May not” is not a sufficient argument. You need to talk about probabilities and why yours are different from others. “It’s irrational to worry about X because it might not happen” does not cut it. That’s just something that stuck out to me.
In my mind, the 1 contribution/day is better than a ban because it gives you a chance to improve your contributions and become unrestricted.
Regarding your near-1000 karma, this is not a great sign given you have nearly 900 comments, meaning your average comment is not getting much positive engagement. Unfortunately, karma is an imperfect measure that captures the combination of “is good” and “engages a lot”, and engaging a lot alone isn’t something we reward.
To conclude, the rate limit is your warning. Currently I feel your typical comments (even the ones that aren’t downvoted) aren’t amazing, and now that we’re prioritizing raising standards due to the dramatic rise in new users, we’re also getting tougher on contributions from established users that don’t feel like they’re meeting the bar either.
Separately, here are some quotes from our about page and new user guide:
This is a hard section to write. The new users who need to read it least are more likely to spend time worrying about the below, and those who need it most are likely to ignore it. Don’t stress too hard. If you submit it and we don’t like it, we’ll give you some feedback.
A lot of the below is written for the people who aren’t putting in much effort at all, so we can at least say “hey, we did give you a heads up in multiple places”.
There are a number of dimensions upon which content submissions may be strong or weak. Strength in one place can compensate for weakness in another, but overall the moderators assess each first post/comment from new users for the following. If the first submission is lacking, it might be rejected and you’ll get feedback on why.
Your first post or comment is more likely to be approved by moderators (and upvoted by general site users) if you:
Demonstrate understanding of LessWrong rationality fundamentals. Or at least don’t do anything that contravenes them. These are the kinds of things covered in The Sequences, such as probabilistic reasoning, proper use of beliefs, being curious about where you might be wrong, avoiding arguing over definitions, etc. See the Foundational Reading section above.
Write a clear introduction. If your first submission is lengthy, i.e. a long post, it’s more likely to get quickly approved if the site moderators can quickly understand what you’re trying to say rather than having to delve deep into your post to figure it out. Once you’re established on the site and people know that you have good things to say, you can pull off having a “literary” opening that doesn’t start with the main point.
Address existing arguments on the topic (if applicable). Many topics have been discussed at length already on LessWrong, or have an answer strongly implied by core content on the site, e.g. from the Sequences (which has rather large relevance to AI questions). Your submission is more likely to be accepted if it’s clear you’re aware of prior relevant discussion and are building upon it. It’s not a big deal if you weren’t aware; there’s just a chance the moderator team will reject your submission and point you to relevant material.
This doesn’t mean that you can’t question positions commonly held on LessWrong, just that it’s a lot more productive for everyone involved if you’re able to respond to or build upon the existing arguments, e.g. showing why they’re wrong.
Address the LessWrong audience. A recent trend is more and more people crossposting from their personal blogs, e.g. their Substack or Medium, to LessWrong. There’s nothing inherently wrong with that (we welcome good content!) but many of these posts neither strike us as particularly interesting or insightful, nor demonstrate an interest in LessWrong’s culture/norms or audience (as revealed by a very different style and not really responding to anyone on site).
It’s good (though not absolutely necessary) when a post is written for the LessWrong audience and shows that by referencing other discussions on LessWrong (links to other posts are good).
Aim for a high standard if you’re contributing on the topic of AI. As AI becomes higher and higher profile in the world, many more people are flowing to LessWrong because we have discussion of it. In order to not lose what makes our site uniquely capable of making good intellectual progress, we have particularly high standards for new users showing up to talk about AI. If we don’t think your AI-related contribution is particularly valuable and it’s not clear you’ve tried to understand the site’s culture or values, then it’s possible we’ll reject it.
And on the topic of positive goals for LessWrong and what we are trying to do here:
On LessWrong we attempt (though don’t always succeed) to apply the rationality lessons we’ve accumulated to any topic that interests us, and especially topics that seem important, like how to make the world a better place. We don’t just care about truth in the abstract, but care about having true beliefs about things we care about so that we can make better and more successful decisions.
Right now, AI seems like one of the most (or the most) important topics for humanity. It involves many tricky questions, high stakes, and uncertainty in an unprecedented situation. On LessWrong, many users are attempting to apply their best thinking to ensure that the advent of increasingly powerful AI goes well for humanity.[5]
It’s not amazingly concrete, but I do think it’s clear we are trying to do something specific here. We are here to develop an art of rationality and cause good outcomes on issues like AI and other world-scale outcomes, and we’ll moderate to achieve that.
I’d also want to add “LW Team is adjusting moderation policy” as a post that laid out some of our thinking here. One section that’s particularly relevant/standalone:
LessWrong has always had a goal of being a well-kept garden. We have higher and more opinionated standards than most of the rest of the internet. In many cases we treat some issues as more “settled” than the rest of the internet, so that instead of endlessly rehashing the same questions we can move on to solving more difficult and interesting questions.
What this translates to in terms of moderation policy is a bit murky. We’ve been stepping up moderation over the past couple months and frequently run into issues like “it seems like this comment is missing some kind of ‘LessWrong basics’, but ‘the basics’ aren’t well indexed and easy to reference.” It’s also not quite clear how to handle that from a moderation perspective.
I’m hoping to improve on “‘the basics’ are better indexed”, but meanwhile it’s just generally the case that if you participate on LessWrong, you are expected to have absorbed the set of principles in The Sequences (AKA Rationality A-Z).
In some cases you can get away without doing that while participating in local object-level conversations, and pick up norms along the way. But if you’re getting downvoted and you haven’t read them, it’s likely you’re missing a lot of concepts or norms that are considered basic background reading on LessWrong. I recommend starting with the Sequences Highlights, and I’d also note that you don’t need to read the Sequences in order; you can pick some random posts that seem fun and jump around based on your interest.
(Note: it’s of course pretty important to be able to question all your basic assumptions. But I think doing that in a productive way requires actually understanding why the current set of background assumptions are the way they are, and engaging with the object-level reasoning.)
There’s also a straightforward question of quality. LessWrong deals with complicated questions. It’s a place for making serious progress on those questions. One model I have of LessWrong is something like a university – there’s a role for undergrads who are learning lots of stuff but aren’t yet expected to be contributing to the cutting edge. There are grad students and professors who conduct novel research. But all of this is predicated on there being some barrier to entry. Not everyone gets accepted to any given university. You need some combination of intelligence, conscientiousness, etc. to get accepted in the first place.
and my subjective impression as well as voting patterns suggest most people aren’t finding your comments sufficiently helpful.
This is astonishingly vague.
You need to talk about probabilities and why yours are different from others.
I did.
the rate limit is your warning.
The punishment came in advance of the warning.
page and new user guide
I did everything here on most comments made after this was linked.
We are here to develop an art of rationality and cause good outcomes on issues like AI and other world-scale outcomes, and we’ll moderate to achieve that.
I wish you luck with this. I look forward to seeing the results of your experiment here. I don’t like advance or retroactive punishment, vague rules, or the lack of appeals, but I appreciate that you have admitted upfront to doing this. Thank you.
We have really given you a lot of feedback and have communicated that we don’t think you are breaking even.
https://www.greaterwrong.com/users/gerald-monroe?show=comments&sort=top
Worst I ever did:
https://www.greaterwrong.com/users/gerald-monroe?show=comments&sort=top&offset=1260