Raemon’s comment below indicates mostly what I meant by:
It seems from talking to the mods here and reading a few of their comments on this topic that they tend to lean towards these being harmful on average, and thus needing to be pushed down a bit.
Furthermore, I think the mods’ stance on this is based primarily on Yudkowsky’s piece here. I think the relevant portion of that piece is this (emphases mine):
But into this garden comes a fool, and the level of discussion drops a little—or more than a little, if the fool is very prolific in their posting. (It is worse if the fool is just articulate enough that the former inhabitants of the garden feel obliged to respond, and correct misapprehensions—for then the fool dominates conversations.)
So the garden is tainted now, and it is less fun to play in; the old inhabitants, already invested there, will stay, but they are that much less likely to attract new blood. Or if there are new members, their quality also has gone down.
Then another fool joins, and the two fools begin talking to each other, and at that point some of the old members, those with the highest standards and the best opportunities elsewhere, leave...
So, it seems to me that the relevant issues are the following. Being more tolerant of lower-quality discussion will cause:
Higher-quality members’ efforts to be directed toward less fruitful endeavors than they otherwise would be.
Higher-quality existing members to leave the community.
Higher-quality potential members, who would otherwise have joined the community, not to join.
My previous comment refers primarily to the first bullet point in this list. But “harmful on average” encompasses all three.
The issue I am most concerned with is the belief that lower-quality members are capable of dominating the environment over higher-quality ones, all else being equal, and with all members having roughly the same rights to interact with one another as they see fit.
This mirrors a conversation I was having with someone else recently about Musk’s Twitter / X. They have different beliefs than I do about what happens when you try to implement a system inspired by Musk’s ideology. But I encountered an obstacle in this conversation: I said I have always liked using it [Twitter / X], and that it also seems slightly more enjoyable to use post-acquisition. He said he did not really enjoy using it, and that it seems less enjoyable to use post-acquisition. Unfortunately, if it comes down to a matter of pure preferences like this, then I am not sure how one ought to proceed with such a debate.
However, there is an empirical observation one can make when comparing environments that use voting systems or rank-based attention mechanisms: units of work that feel like more or better effort went into creating them should correlate with higher approval and lower disapproval. If this is not the case, then it is much harder to actually use feedback to improve one’s own output incrementally. [1]
On LessWrong, that seems to me to be less the case than it does on Twitter / X. Karma does not seem correlated with my perceptions of my own work’s quality, whereas impressions and likes on Twitter / X do seem correlated. This is only one person’s observation, of course, but I nonetheless think it should be treated as useful data.
That being said, it may be that the intention of the voting system matters: Upvotes / downvotes here mean “I want to see more of / I want to see less of” respectively. They aren’t explicitly used to provide helpful feedback, and that may be why they seem uncorrelated with useful signal.
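The correlation check I am gesturing at above can be made concrete. Here is a minimal sketch in Python, with entirely made-up illustrative numbers: rate your own posts by perceived effort, pair each rating with the net score (karma or likes) the post received, and compute a Pearson correlation. A strongly positive coefficient would suggest the voting signal is usable for incremental improvement; a coefficient near zero would suggest it is not.

```python
# Hypothetical sketch: does self-assessed effort correlate with net approval?
# The data below is invented purely for illustration.

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Self-rated effort (1-5) for each post, and the net score it received.
effort = [1, 2, 3, 4, 5, 3, 2]
scores = [2, 1, 8, 12, 15, 6, 3]

r = pearson_r(effort, scores)
print(f"effort-approval correlation: {r:.2f}")
```

With real data one would want many more posts and some caution about confounders (topic, timing, audience size), but the basic test is just this: if the coefficient is consistently near zero, the voting system is not giving you a gradient to climb.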