Yeah, I agree that people need to weight experts highly. LW pays lip service to this, but only that: basically, as soon as people have a strong opinion, experts get discarded. It started with EY.
My impression of how to do this is to give experts an “as an expert, I...” vote. So you could see that a post has 5 upvotes and a beaker downvote, and say “hmm, the scientist thinks this is bad and other people think it’s good.”
Having multiple flavors lets you separate out different parts of the comment in a way that’s meaningfully distinct from the Slashdot-style “everyone can pick a descriptor”; you don’t want everyone to be able to say “that’s funny,” just the comedians.
This works somewhat better than simple vote weighting because it lets people say whether they’re voting as just another reader or ‘in their professional capacity’; I want Ilya’s votes on stats comments to be very highly weighted, and I want his votes on, say, rationality quotes to be weighted roughly like anyone else’s.
Of course, this sketch has many problems of its own. As written, I lumped many different forms of expertise into “scientist,” and you’re trusting the user to vote in the right contexts.
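A minimal sketch of how such flavored, context-weighted votes might be tallied, just to make the idea concrete. The Vote class, flavor names, weights, and the admin-maintained expert_domains mapping are all hypothetical, not a description of any actual LW 2.0 design:

```python
from collections import defaultdict

# Hypothetical sketch of the "as an expert, I..." vote described above.
# Flavor names, weights, and the expert_domains mapping are assumptions
# for illustration only.

EXPERT_WEIGHT = 5    # assumed weight of an expert-flavored vote in its own domain
REGULAR_WEIGHT = 1   # weight of an ordinary reader vote

class Vote:
    def __init__(self, voter, value, flavor=None):
        self.voter = voter    # user id
        self.value = value    # +1 or -1
        self.flavor = flavor  # e.g. "statistics", "comedy", or None for a plain vote

def tally(votes, post_topic, expert_domains):
    """Return (per-flavor displayed counts, weighted score).

    expert_domains maps voter -> set of domains the site admins have
    accepted them as experts in; an expert flavor only counts as such
    when it matches both the voter's accepted domains and the post's topic.
    """
    counts = defaultdict(int)
    score = 0
    for v in votes:
        if (v.flavor is not None
                and v.flavor in expert_domains.get(v.voter, set())
                and v.flavor == post_topic):
            counts[v.flavor] += v.value   # shown separately, e.g. the "beaker" tally
            score += EXPERT_WEIGHT * v.value
        else:
            counts["reader"] += v.value   # everything else counts as a plain reader vote
            score += REGULAR_WEIGHT * v.value
    return dict(counts), score

# Example: 5 ordinary upvotes plus one statistics-expert ("beaker") downvote.
votes = [Vote(f"reader{i}", +1) for i in range(5)]
votes.append(Vote("ilya", -1, flavor="statistics"))
experts = {"ilya": {"statistics"}}
print(tally(votes, post_topic="statistics", expert_domains=experts))
# -> ({'reader': 5, 'statistics': -1}, 0)
```

The separate counts are what let a reader see “the scientist thinks this is bad and other people think it’s good” at a glance, while the weighted score only privileges an expert flavor when it matches both the voter’s accepted domain and the post’s topic.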
If you have a more-legible quality signal (in the James C. Scott sense of “legibility”), and a less-legible quality signal, you will inevitably end up using the more-legible quality signal more, and the less-legible one will be ignored—even if the less-legible one is tremendously more accurate and valuable.
Your suggestion is not implausible on its face, but the devil is in the details. No doubt you know this, as you say “this sketch has many problems of its own”. But these details and problems conspire to make such a formalized version of the “expert’s vote” either substantially decoupled from what it’s supposed to represent, or not nearly as legible as the simple “people’s vote”. In the former case, what’s the point? In the latter case, the result is that the “people’s vote” will remain much more influential on visibility, ranking, inclusion in canon, contribution to a member’s influence in various ways, and everything else you might care to use such formalized rating numbers for.
The question of reputation, and of whose opinion to trust and value, is a deep and fundamental one. I don’t say it’s impossible to algorithmize, but if possible, it is surely quite difficult. And simple karma (based on unweighted votes) is, I think, a step in the wrong direction.
As far as an algorithm for reputation goes, academia seems to have something that sort of scales in the form of citations and co-authors:
http://www.overcomingbias.com/2017/08/the-problem-with-prestige.html
It’s certainly a difficult problem, however.
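One way to make “citations as a reputation algorithm” concrete is to run something PageRank-like over the citation graph. The following is a toy power-iteration sketch; the papers, edges, damping factor, and iteration count are all made up for illustration, and it is not a claim about how any real academic ranking works:

```python
# Toy power-iteration PageRank over a citation graph, to make the idea of
# a citation-based reputation score concrete. Papers, edges, the damping
# factor, and the iteration count are all invented for illustration.

def citation_pagerank(cites, damping=0.85, iters=50):
    """cites maps each paper to the list of papers it cites.

    Returns a score per paper; papers cited by highly-scored papers
    end up scoring higher themselves.
    """
    papers = set(cites) | {p for targets in cites.values() for p in targets}
    n = len(papers)
    rank = {p: 1.0 / n for p in papers}
    for _ in range(iters):
        new = {p: (1 - damping) / n for p in papers}
        for src in papers:
            targets = cites.get(src, [])
            if targets:
                share = damping * rank[src] / len(targets)
                for t in targets:
                    new[t] += share
            else:
                # papers that cite nothing spread their weight evenly
                for p in papers:
                    new[p] += damping * rank[src] / n
        rank = new
    return rank

# Example: C is cited by both A and B, so it ends up ranked highest.
print(citation_pagerank({"A": ["C"], "B": ["C"], "C": []}))
```

The example also hints at the gaming problem raised below: since score flows along citation edges, padding the graph with extra citations is a direct way to inflate a paper’s rank.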
Vaniver, I sympathize with the desire to automate figuring out who the experts are via point systems, but consider that even in academia (with a built-in citation pagerank), people still rely on names. That’s evidence that pagerank systems aren’t great on their own. People game the hell out of citations.
Probably should weigh my opinion of rationality stuff quite low; I am neither a practitioner nor a historian of rationality. I have gotten gradually more pessimistic about the whole project.
To be clear, in this scheme, whether or not someone has access to the expert votes would be set by hand.
What is going to be the definition of “an expert” in LW 2.0?
From context, it’s clearly (conditional on the feature being there at all) “someone accepted by the administrators of the site as an expert”. How they make that determination would be up to them; I would hope that (again, conditional on the thing happening at all) they would err on the side of caution and accept people as experts only in cases where few reasonable people would disagree.
“All animals are equal… ” X-)
The issue is credibility.
Is there anyone who makes it their business to guard against this?
Academics make it their business, and they rely on name recognition and social networks.