Moderation is basically the only way, I think. You could try to use fancy pagerank-anchored-by-trusted-users ratings, or make votes costly to the user in some way, but I think moderation is the necessary fallback.
Goodhart’s law is real, but people still try to use metrics. Quality may speak for itself, but it can be too costly to listen to the quality of every single thing anyone says.
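For concreteness, here is a minimal sketch of what the pagerank-anchored-by-trusted-users idea above could look like: a personalized PageRank over a user-to-user upvote graph, where the random jump always returns to a small, moderator-chosen set of trusted accounts. The vote graph, the seed set, and the function name trust_rank are assumptions made up for illustration, not a description of any existing forum's system.

```python
# Hypothetical sketch of a "pagerank anchored by trusted users" rating:
# personalized PageRank over a user -> user upvote graph, where the random
# jump always lands on a moderator-chosen seed set. All data below is invented.

VOTES = {                       # voter -> users whose content they upvoted
    "alice": ["bob", "carol"],
    "bob":   ["carol"],
    "carol": ["alice"],
    "dave":  ["dave_sock"],     # a sock-puppet pair voting for each other
    "dave_sock": ["dave"],
}
SEEDS = {"alice"}               # trusted accounts that anchor the ranking

def trust_rank(votes, seeds, damping=0.85, iters=50):
    users = set(votes) | {u for targets in votes.values() for u in targets}
    rank = {u: 1.0 / len(users) for u in users}
    for _ in range(iters):
        # teleport mass goes only to the trusted seeds
        new = {u: (1.0 - damping) / len(seeds) if u in seeds else 0.0
               for u in users}
        # each voter spreads a damped share of their own rank over their upvotes
        for voter, targets in votes.items():
            if targets:
                share = damping * rank[voter] / len(targets)
                for t in targets:
                    new[t] += share
        rank = new
    return rank

ranking = sorted(trust_rank(VOTES, SEEDS).items(), key=lambda kv: -kv[1])
print(ranking)   # alice/bob/carol end up well above the sock-puppet pair
```

The anchoring is what does the work here: a ring of accounts upvoting each other gets essentially no rank unless someone reachable from the trusted seeds endorses them.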
People use name recognition in practice; it works pretty well.
I can use name recognition to scroll through a comment thread and find all the comments by the people I hold in high regard, but this is much more effort than having a karma system that automatically shows the top-voted comments first. (The karma system also doesn’t discriminate against new writers as badly as relying on name recognition does.)
Going to reply to this because I don’t think it should be overlooked. It’s a valid point: people tend to filter out information that doesn’t come from sources they trust. I think these kinds of incentive pressures are what led to the “LessWrong Diaspora” concentrating around specific blogs belonging to people with very strong reputations, such as Scott Alexander. And when people want to look at other sources of information, they usually follow the recommendations of those trusted people. This is how I operate when doing my own reading and research: I start somewhere I consider the “safest” and move outward from there, following the references given at that spot and perhaps a few more steps beyond.
When we use a karma / voting system, we are basically trying to estimate P(this contains useful information | this post has a high number of votes), but no voting system offers as much evidence as a specific reference from someone we recognize as trustworthy. The only way to increase the evidence gained from a voting system is to add complexity by increasing the amount of information carried in each vote, either by weighting the votes or by identifying the person behind each one. From there you can attach even more to a vote, like a specific comment or a more nuanced judgement. I think the end of that track is basically what we have now: blogs by a specific person linking to other blogs, or social media like Facebook where no user is anonymous and everyone has their information filtered in some way.
Essentially I’m saying we should not ignore the role that optimization pressure has played in producing the systems we already have.
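As a toy illustration of the vote-weighting step described above, here is a hedged sketch in which a post's score is the reputation-weighted sum of its votes rather than a raw count; the reputation table and the default weight for unknown voters are invented for the example.

```python
# Toy sketch of adding information to a vote by weighting it with the voter's
# standing instead of counting raw votes. Reputation numbers, the default
# weight for unknown voters, and the names are all made up for illustration.

from dataclasses import dataclass

@dataclass
class Vote:
    voter: str
    value: int                                   # +1 or -1

REPUTATION = {"alice": 3.0, "bob": 1.0, "newcomer": 0.2}

def raw_score(votes):
    return sum(v.value for v in votes)

def weighted_score(votes, reputation, default_weight=0.5):
    return sum(v.value * reputation.get(v.voter, default_weight) for v in votes)

post = [Vote("alice", +1), Vote("newcomer", -1), Vote("bob", +1)]
print(raw_score(post))                           # 1   -- every vote counts the same
print(weighted_score(post, REPUTATION))          # 3.8 -- who voted now matters
```

In this framing, the weighted sum is just a crude stand-in for P(useful | votes): a vote from a known, trusted account moves the estimate more than an anonymous one.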
Quality may speak for itself, but it can be too costly to listen to the quality of every single thing anyone says.

Which is why there should be a way to vote on users, not content: the quantity of unevaluated content shouldn’t divide the signal. This would matter if the primary mission succeeds and there is actual conversation worth protecting.
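For what it's worth, here is a minimal sketch of what voting on users rather than content could look like, assuming readers rate authors directly and posts are then ordered by the author's standing; every name and number here is hypothetical.

```python
# Hedged sketch of "vote on users, not content": readers rate authors directly,
# and posts are ordered by the author's standing, so a prolific low-quality
# poster can't dilute anyone's signal by sheer volume. All names are invented.

from collections import defaultdict

author_votes = defaultdict(list)          # author -> votes (+1/-1) on the *person*

def vote_on_user(author, value):
    author_votes[author].append(value)

def user_score(author):
    votes = author_votes[author]
    return sum(votes) / len(votes) if votes else 0.0    # per-author, not per-post

def rank_posts(posts):
    # posts: list of (author, title); order by the author's standing,
    # independent of how many posts each author has produced
    return sorted(posts, key=lambda post: user_score(post[0]), reverse=True)

vote_on_user("carol", +1)
vote_on_user("carol", +1)
vote_on_user("spammer", -1)

posts = [("spammer", "post 1"), ("spammer", "post 2"), ("carol", "one good post")]
print(rank_posts(posts))    # carol's single post outranks the spammer's many
```

The design choice is that an author's standing is an average over votes on the author, so writing more posts neither raises it nor divides it.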