First, I do just acknowledge that the rollout of rate limits hasn’t been optimized to be as good an experience as possible for users (largely due to time constraints, and the quality of site comments feeling like a pretty urgent thing overall last week). A lot of our current choices have been driven by a general wave of poor-quality AI comments.
In particular, I think it was bad of us to not give you more of a sense of how/when the rate limit might be lifted. I actually had started a reply to your last PM explaining our method a bit more, and then got distracted and didn’t send it, and I’m sorry about that.
I did expect, at the time, to fairly quickly ship an update that makes all new users start with a rate limit of 1/day. We haven’t ended up shipping that yet for a variety of reasons, but I still expect to in the not-too-distant future (which would mean you wouldn’t be under harsher restrictions than a new user).
The rough algorithm we’ve been following is: if we notice that a newish user we don’t recognize has an average karma-per-comment of <1.5 over their past 20 comments, we look into their individual comments. If they don’t seem to meet our quality bar, we give the user a 1-per-day rate limit. Our quality bar is higher for AI content right now because there is such a deluge of it.
(We picked the “1.5” flagging threshold by looking at a bunch of comments from various users, checking the average, and seeing where our bar of “okay, we basically feel good about this user being net positive” tended to lie.)
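To make the heuristic concrete, here’s a minimal sketch of the flagging step in Python. The function and attribute names are hypothetical, and the real implementation (including what counts as “newish” or “recognized”) surely differs; the point is just the shape of the check.

```python
# Sketch of the flagging heuristic described above (not the actual site code;
# names and the data model are made up for illustration).

RECENT_COMMENT_WINDOW = 20
FLAGGING_THRESHOLD = 1.5  # average karma per comment that triggers a manual look

def should_flag_for_review(user, recent_comments):
    """Flag a newish, unrecognized user whose recent average karma is low.

    `user.is_established` and `comment.karma` are assumed attributes;
    flagging only queues the user for manual review, it does not rate-limit.
    """
    if user.is_established:
        return False  # established users get a lot more leeway
    window = recent_comments[:RECENT_COMMENT_WINDOW]
    if not window:
        return False
    average_karma = sum(c.karma for c in window) / len(window)
    return average_karma < FLAGGING_THRESHOLD
```

Only if the flagged user’s comments don’t meet the quality bar on manual review does a moderator then apply the 1-per-day rate limit.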
My idealized version of this would automatically remove the rate limit once the user makes enough upvoted comments/posts, and would also give some advice on how to improve. We’re working on that, and I’m sorry about the current situation.
We give a lot more leeway to users who’ve established themselves with solid posts and comments. I’d expect a typical person meeting the “extra leeway tenure bar” to have something like 3+ posts with 75+ karma (on topics that aren’t optimized for karma-farming, such as certain kinds of drama posts). I’m not quite sure what number of Good Comments would meet the bar, but I don’t think you’re above it yet.
At the time I write this, your average karma over your past 20 comments, including your self-upvotes, is 0.9 (and it was lower at the time we set the rate limit). Since this includes your self-upvotes, it means your recent comments have been net-downvoted by other users. I’m not saying average karma is a perfect measure of quality here, but I do think it correlates reasonably well. My impression when I looked over your comments at the time was that many of them seemed to be making more assertions than arguments, and being kinda low-key aggro about it.
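To spell out the arithmetic behind “net-downvoted” (a minimal illustration; treating each self-upvote as worth exactly one karma point is an assumption, and the real value can differ):

```python
# Illustration of why an average of 0.9 over 20 comments, self-upvotes included,
# implies the comments were net-downvoted by everyone else.

NUM_RECENT_COMMENTS = 20
average_including_self_votes = 0.9     # the figure quoted above
SELF_UPVOTE_VALUE = 1                  # assumed value of a self-vote

total_karma = average_including_self_votes * NUM_RECENT_COMMENTS  # 18.0 points
karma_from_self_votes = SELF_UPVOTE_VALUE * NUM_RECENT_COMMENTS   # 20 points
karma_from_other_voters = total_karma - karma_from_self_votes     # -2.0 points

print(karma_from_other_voters)  # negative, i.e. net-downvoted by others
```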
I would request that @Gerald Monroe have at most a 2-hour rate limit; their comments, while sometimes frustrating due to disagreement, represent what I see as an important and underrepresented view: the “lol I know how to build it and it’s really obvious” view. Currently I only know of a few others with a similar perspective. I want to be able to discuss with representatives of that view; censoring them all would be frustrating. Certainly not all of them actually know exactly how to build it, but I don’t think any of the ones I’m thinking of are entirely wrong in their hubris.
I think my preference here is that that kind of conversation is allowed but doesn’t come up in every object-level post. (I’m not sure how to actually operationalize this as a moderation call)
I suppose this location would be as good as any for my response to this issue (I note that I currently have a 1-post / 3-day limit on my account, which means this comment should be fairly long).
Looking back at my own top comments, I have noticed that what the LessWrong community tends to favor are things I would consider more critical in general: not things that “disagree” on an equal footing so much as critiques of a post written from the position of being more knowledgeable and more experienced than the person I am critiquing. This means that if I wanted to “farm karma”, so to speak, I could do so by limiting my comments to criticisms flavored with a wee bit of condescension.
Looking at what others consider to be your best contributions presents somewhat of an awkward question: are these the same things I would rate as my best work? If not, does that mean I was trying less hard, or taking it less seriously than I should have, when I wrote them? We have to consider that when one receives feedback on one’s work, positive or negative, one must interpret it by one’s own lights in order to incorporate that feedback in a way that would be considered adequate (assuming one does indeed wish to improve with respect to overall karma).
Note that it would be sort of funny if you sent a message to someone (in a moderator capacity) along something like the following lines:
“[User], you’ve been posting fairly frequently, and my subjective impression as well as voting patterns suggest most people aren’t finding your comments sufficiently helpful. For now I’ve given you a 1-per-day rate limit, and may re-evaluate as we think more about the new standards. As far as feedback goes, do you think you could try and be a little more critical of people, and flavor your posts with a wee bit of condescension? That would really help make LessWrong the well-kept garden it aims to be!”
In other words, optimize not for being “low-key aggro”, but rather for criticisms flavored with a wee bit of condescension. You don’t want to sound like the Omega, who can’t rule out that he truly deserves to be at the bottom of the status hierarchy, just because he received some negative feedback and perhaps some moderation warnings. It’s easier to perform “established group-member” by limiting your output to critique that aims to come across as helpful to the community as a whole, rather than to the person you’re delivering it to.
The only problem with that is that when I am trying to write a post and I want feedback that isn’t just my own judgement, my mind turns to “what would people upvote more”, and my experience thus far tells me that if I simply altered the tone of my posts/comments, as opposed to their actual content, I could shift the resulting approval substantially more than I think such a change would deserve.
It’s not that I disagree with or dislike my own comments that I believe cater to the community’s expected response; it’s that optimizing for that response seems necessary to avoid the restrictions placed on my commenting, and I think optimizing for it hard enough would take me outside my own standards and judgements, both for what I actually think and for how expressing it should be done.
As a moderator, you have the responsibility of tuning the dials and knobs that determine which metrics users end up optimizing for: in this case, the weight on “impressing the community” as opposed to “just speak your mind, using your own standards for what qualifies as good for yourself as well as the community” (or, equivalently, their relative ratio). You have to be pretty sure that increasing that weight is what you want. That weight applies to everyone, of course, so if it is tuned too high, you get a situation in which everyone is optimizing for what they think everyone else thinks is good.
For the record, I don’t think that weight should be zero, but I also think it will be non-zero somewhat naturally. So any weight increases you apply through the infrastructure, rules, and norms might look like they are being added to zero, but are actually being added to a non-zero initial quantity. You may conclude that the optimal amount is higher than whatever you deem the initial amount to be, and that some restrictions are therefore still good to have. Please consider subtracting out that base value from your estimate of how much stricter you want the site’s norms to be.
I checked your comment history. The top comments at this moment start with:
Here’s why I don’t find your argument compelling (K 15)
These norms / rules make me slightly worried that (K 13)
Here’s why I disagree with the core claims of this post (K 7)
Sounds like evidence in favor of “disagree with people, get upvoted”.
On the other hand, your comments with karma below zero:
I’ve never enjoyed, or agreed with, arguments of the form (K −1)
I think that … would imply that … Personally, I think it’s pretty easy to show that … is wrong. (K −1)
I don’t think that … means that … (K −1)
So now it seems more like you disagree a lot (nothing wrong with that), and some of those comments get upvoted, and some of them don’t. The upvoted ones do not feel more condescending than the downvoted ones.
Actually, my lowest three comments are:
It seems to be historically the case that “doomers” or “near-doomers” [...] (K −9)
AFAIK, the Secretary-General is a full-time position, e.g., [...] (K −5)
Remove the word “AI” and I think this claim is not really changed in any way. AI systems are the most general systems. [...] (K −5)
The following is simply my own assessment of why these comments were downvoted. For the first one, I assume that it was because of the use of the term “doomers” in a pejorative sense. (This is closer, I believe, to what I called “low-key aggro” in my earlier comment.)
I am not sure why the second one was taken so poorly, and I imagine that whoever downvoted it would probably claim it to be snarkier or more disrespectful somehow than it actually was. This is unfortunate, because I think this serves as evidence that comments will often be downvoted because they could be interpreted to be more hostile or low-effort than they actually are. Alternatively, it was downvoted because it was “political.”
The third one is also unfortunate. Disagree-downvoting for that comment makes sense, but not karma-downvoting. If you were to counter that it was somehow 101-material or misunderstanding basic points, I would still have to strongly disagree with that.
My second-highest comment is about my worry that site norms unfairly disfavor discussions that disagree with major points commonly accepted on LessWrong, or taken as catechism. If such norms exist, you would expect comments of that kind to also be karma-downvoted, limiting their visibility and discouraging discussion of those topics, which is what I observe.
This still supports my main point, I believe.