I think being unable to reply to comments on your own posts is very likely a mistake and we should change that. (Possibly, in the cases where we think such a restriction is warranted, we should just issue a ban instead.)
“I’m downvoted because I’m controversial” is a go-to stance for people getting downvoted (and resultantly rate-limited), though in my experience the issue is quality rather than controversy (or rather both in combination).
Overall, though, we’ve been thinking about the rate-limit system and its effects. I think there are likely bad effects even if it is successfully reducing low-quality stuff in some cases.
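For concreteness, here is a minimal hypothetical sketch of what the proposed exemption could look like. This is not LessWrong’s actual implementation; the types, field names, and the can_comment function are all invented for illustration:

```python
from dataclasses import dataclass


@dataclass
class User:
    id: int
    is_rate_limited: bool  # e.g. set automatically based on recent karma


@dataclass
class Post:
    id: int
    author_id: int


def can_comment(user: User, post: Post) -> bool:
    """Hypothetical check: rate-limited users may still reply on their own posts."""
    if not user.is_rate_limited:
        return True
    # Proposed exemption: an author can always respond to comments
    # on a post they themselves wrote.
    return post.author_id == user.id


# A rate-limited author can still reply under their own post, but not elsewhere.
author = User(id=1, is_rate_limited=True)
print(can_comment(author, Post(id=10, author_id=1)))  # True
print(can_comment(author, Post(id=11, author_id=2)))  # False
```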
though in my experience the issue is quality rather than controversy
That’s usually true, but IMO in this case the heavy downvoting was pretty tribalistic in nature. Something about the subject matter makes people think that opposing views are clearly crazy, and they don’t bother trying to understand them (admittedly, omnizoid’s posts suffered from this as well).
I think being unable to reply to comments on your own posts is very likely a mistake and we should change that. (Possibly if the conditions under which we think that was warranted, we should issue a ban.)
I was about to write a comment to the effect that there should clearly be an exception for commenting on your own posts (and, indeed, anyone who can’t even be allowed to comment on their own posts should just be banned), so… yeah, strongly agreed that this particular thing should be fixed!
EDIT: Never mind, I see this has already been fixed. Excellent!
I agree that people should probably be able to reply to comments on their own posts. However, if enabling this is a non-trivial amount of work, I suspect the LW team’s time would be better spent elsewhere.
I base this on the presumptions that 1) there aren’t too many people this policy would help (dozens? single-digits?), 2) these people wouldn’t bring much value to the community, and 3) such a policy is unlikely to be deterring people we’d otherwise want from joining and contributing to the community.
Both views seem symmetric to me:
They were downvoted because they were controversial (and I agree with them / like them).
They were downvoted because they were low-quality (and I disagree with them / dislike them).
Because I can sympathize with both views here, I think we should consider remaining agnostic as to which is actually the case.
It seems like the major crux here is whether we think that debates over claim and counter-claim (basically, other cruxes) are likely to be useful or likely to cause harm. It seems from talking to the mods here and reading a few of their comments on this topic that they tend to lean towards them being harmful on average and thus needing to be pushed down a bit.
Omnizoid’s issue is not merely one of quality; it is about quality as well as about making counter-claims to specific claims that have been dominant on LessWrong for some time.
The agnostic side of the “top-level” crux I mentioned above seems to point towards favoring agnosticism here as well. Furthermore, if we predict debates to be more fruitful than not, then one needn’t be too worried even if one is sure that one side of some other crux is truly the lower-quality side of it.
It seems like the major crux here is whether we think that debates over claim and counter-claim (basically, other cruxes) are likely to be useful or likely to cause harm. It seems from talking to the mods here and reading a few of their comments on this topic that they tend to lean towards them being harmful on average and thus needing to be pushed down a bit.
This is, as far as I can tell, totally false. There is a very different claim one could make which at least more accurately represents my opinion; see, e.g., this comment by John Wentworth (who is not a mod).
Most of your comment seems to be an appeal to modest epistemology. We can in fact do better than total agnosticism about whether some arguments are productive or not, and worth having more or less of on the margin.
Most of your comment seems to be an appeal to modest epistemology. We can in fact do better than total agnosticism about whether some arguments are productive or not, and worth having more or less of on the margin.
Note that the more you believe that your commenters can tell whether some arguments are productive or not, and worth having more or less of on the margin, the less you should worry as mods about preventing or promoting such arguments (although you still might want to put them near the top or bottom of pages for attention-management reasons).
Note that the more you believe that your commenters can tell whether some arguments are productive or not, and worth having more or less of on the margin
My actual belief is that commenters can (mostly) totally tell which arguments are productive… but… it’s hard to not end up having those unproductive arguments anyway, and the site gets worse.
Raemon’s comment below indicates mostly what I meant by:
It seems from talking to the mods here and reading a few of their comments on this topic that they tend to lean towards them being harmful on average and thus needing to be pushed down a bit.
Furthermore, I think the mods’ stance on this is based primarily on Yudkowsky’s piece here; the relevant portion of that piece is this (emphases mine):
But into this garden comes a fool, and the level of discussion drops a little—or more than a little, if the fool is very prolific in their posting. (It is worse if the fool is just articulate enough that the former inhabitants of the garden feel obliged to respond, and correct misapprehensions—for then the fool dominates conversations.)
So the garden is tainted now, and it is less fun to play in; the old inhabitants, already invested there, will stay, but they are that much less likely to attract new blood. Or if there are new members, their quality also has gone down.
Then another fool joins, and the two fools begin talking to each other, and at that point some of the old members, those with the highest standards and the best opportunities elsewhere, leave...
So, it seems to me that the relevant issues are the following. Being more tolerant of lower-quality discussion will cause:
Higher-quality members’ efforts to be directed toward less fruitful endeavors than they otherwise would be.
Higher-quality existing members to leave the community.
Higher-quality potential members who would otherwise have joined the community, not to.
My previous comment primarily refers to the first bullet point in this list, but “harmful on average” covers all three.
The issue I’m most concerned about is the belief that lower-quality members are capable of dominating the environment over higher-quality ones, all else being equal and with all members having roughly the same rights to interact with one another as they see fit.
This mirrors a conversation I was having with someone else recently about Musk’s Twitter / X. They have different beliefs than I do about what happens when you try to implement a system inspired by Musk’s ideology. But I encountered an obstacle in this conversation: I said I have always liked using it [Twitter / X], and that it also seems slightly more enjoyable to use post-acquisition. He said he did not really enjoy using it, and that it seems less enjoyable to use post-acquisition. Unfortunately, if it comes down to a matter of pure preferences like this, then I am not sure how one ought to proceed with such a debate.
However, there is an empirical check one can make when comparing environments that use voting systems or rank-based attention mechanisms: the pieces of work that feel like they took more or better effort to create should correlate with higher approval and lower disapproval. If this is not the case, then it is much harder to actually use the feedback to improve one’s own output incrementally. [1]
On LessWrong, that seems to me to be less the case than it is on Twitter / X. Karma does not seem correlated with my perception of my own work’s quality, whereas impressions and likes on Twitter / X do seem correlated. This is only one person’s observation, of course, but I nonetheless think it should be treated as useful data.
That being said, it may be that the intention of the voting system matters: Upvotes / downvotes here mean “I want to see more of / I want to see less of” respectively. They aren’t explicitly used to provide helpful feedback, and that may be why they seem uncorrelated with useful signal.
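For what it’s worth, here is a minimal sketch of the kind of check I have in mind. The effort scores and vote counts below are made up for illustration and are not drawn from either site:

```python
# Illustrative only: made-up data standing in for one's own posts.
# effort[i] is a self-assessed effort/quality score for post i;
# votes[i] is the score that post actually received (karma, likes, etc.).
effort = [2, 5, 3, 8, 6, 9, 4, 7]
votes = [1, 40, -3, 12, 55, 20, 8, 30]


def rank(xs):
    """Rank of each value (1 = smallest); assumes no ties for simplicity."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0] * len(xs)
    for r, i in enumerate(order, start=1):
        ranks[i] = r
    return ranks


def spearman(xs, ys):
    """Spearman rank correlation: +1 means felt effort and approval move together."""
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rank(xs), rank(ys)))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))


print(f"Rank correlation between felt effort and votes: {spearman(effort, votes):.2f}")
```

If that number hovers near zero for a given platform, its votes are carrying little signal one can use to improve incrementally, which is the failure mode described above.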