Thanks, that’s very interesting. I was especially interested in this:
> We can gauge each Superuser’s voting accuracy based on their performance on honeypots (proposed updates with known answers which are deliberately inserted into the updates queue). Measuring performance and using these probabilities correctly is the key to how we assign points to a Superuser’s vote.
So they measure voting accuracy using questions to which they already know the true answer.
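As a rough illustration of how honeypot performance could be turned into an accuracy estimate (my own sketch, not their actual method; the Beta prior and the function name are assumptions):

```python
# Minimal sketch: estimate a voter's accuracy from honeypot questions,
# i.e. proposed updates whose correct answer is already known.
# A Beta(1, 1) prior is assumed so that a voter with only a handful of
# honeypot answers is not judged too harshly either way.

def estimated_accuracy(correct: int, total: int,
                       prior_successes: float = 1.0,
                       prior_failures: float = 1.0) -> float:
    """Posterior mean accuracy under a Beta-Binomial model of honeypot answers."""
    return (correct + prior_successes) / (total + prior_successes + prior_failures)


# Example: a voter who answered 18 of 20 honeypots correctly.
print(estimated_accuracy(18, 20))  # ~0.864 with the uniform Beta(1, 1) prior
```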
There is a difference between their votes and the kind of votes cast here, though: on Less Wrong there is, strictly speaking, no “true answer” to how good a post or comment is. So that tactic cannot be used.
On questions that have a true answer it is easier to track people’s reliability and give them incentives to answer reliably. On questions that are more a matter of preference (e.g. “how good is this post?”) that is harder.
See also The Mathematics of Gamification, an application of Bayes’ rule to voting.
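For concreteness, here is a minimal sketch in the spirit of that post: each vote shifts the log-odds that the proposed update is correct by an amount that depends on the voter’s honeypot-estimated accuracy. The prior, the ~99:1 acceptance threshold, and the function names are illustrative assumptions, not taken from the post or from any real system.

```python
import math

def posterior_log_odds(votes, prior_log_odds: float = 0.0) -> float:
    """votes: iterable of (accuracy, agrees) pairs; accuracy is the voter's
    honeypot-estimated probability of voting correctly, in (0.5, 1)."""
    log_odds = prior_log_odds
    for accuracy, agrees in votes:
        # Under independence, a vote from a voter with accuracy p is evidence
        # with likelihood ratio p / (1 - p): add log(p / (1 - p)) to the
        # log-odds if they vote in favour, subtract it if they vote against.
        evidence = math.log(accuracy / (1.0 - accuracy))
        log_odds += evidence if agrees else -evidence
    return log_odds

def accept(votes, threshold: float = math.log(99)) -> bool:
    """Accept the update once the posterior odds exceed ~99:1 (illustrative)."""
    return posterior_log_odds(votes) >= threshold

# Two accurate voters in favour outweigh one mediocre voter against:
votes = [(0.95, True), (0.90, True), (0.60, False)]
print(posterior_log_odds(votes), accept(votes))
```

The point of weighting this way is that a vote from someone who is usually right moves the score much more than a vote from someone who is barely better than chance, which is roughly the property one wants from accuracy-weighted voting.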