Well, I think I might’ve been unclear. I wasn’t actually suggesting that upvotes come with authorship labels. All the reasons you list for why this isn’t a great idea, I agree with.
I was saying, rather, that the upvote/downvote system is fundamentally missing something; that it can’t substitute for expressing explicit verbal agreement. The question that should immediately occur to us is: what is voting even for?
Consider a scenario. I write a post about software usability. A hundred people read it, and have a strong enough opinion on its quality that they are moved to click the voting widget. 99 of those people are ordinary LessWrongers, with no particular expertise in the subject. They upvote me. The 100th person is Jakob Nielsen. He downvotes me.
My post now has a score of 99 points. Is this an accurate representation of its value?
No. One “layman” doesn’t equal one Jakob Nielsen, when it comes to evaluating claims or opinions about usability engineering. Even 99 laymen don’t equal one Jakob Nielsen. If Nielsen thinks that my post is crap, and that basically everything I’m saying is wrong and confused, well, that’s that. 99 non-expert LessWrongers don’t “balance that out”, and the sum of “99 LessWrongers think I’m right” and “Jakob Nielsen thinks I’m wrong” does not come out to “a score of +99! what a great post!”. That’s just not how that math works.
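To make the arithmetic point concrete, here’s a toy sketch (the weights are completely made up for illustration; I am not proposing this as an actual scoring rule): a raw vote sum reports something in “+99” territory, while any aggregation that gives expert judgment serious weight can easily come out negative.

```python
# Toy illustration only: the expertise weights are invented to make the
# arithmetic point, not a proposed scoring algorithm.

def raw_score(votes):
    """What the karma widget does: every vote counts the same."""
    return sum(v for v, _ in votes)

def expertise_weighted_score(votes):
    """Hypothetical alternative: weight each vote by the voter's domain expertise."""
    return sum(v * w for v, w in votes)

# 99 laymen upvote (weight 1 each); one domain expert downvotes (weight 500, made up).
votes = [(+1, 1)] * 99 + [(-1, 500)]

print(raw_score(votes))                 # 98   -- the "+99"-ish number readers see
print(expertise_weighted_score(votes))  # -401 -- what "the math" arguably looks like
```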
Furthermore, suppose Nielsen posts a comment under my post, saying “this is crap and you’re a nincompoop”. What, now, is the value of that “99” score, to a reader? You now know what a domain expert thinks. Unless other domain experts weigh in, there’s nothing more to discuss. That 99 LessWrongers disagree with Jakob Nielsen about usability is… interesting, perhaps, in some academic sense. But from an epistemic standpoint, Nielsen’s hypothetical comment tells you all you need to know about my post. The upvote score is obviated as a source of information about my post’s value.
And yet, it’s the upvote score that would be used, by various automated parts of the system (and by readers who aren’t checking the comments carefully), to decide how good my post is. That seems perverse! Now, I’m not suggesting that “sort by experts’ opinions, as expressed in comments” is a viable algorithm, of course. But this scenario, in my mind, calls into serious question what upvotes mean, and what sense there is in using them as a way to judge the value of content.
I think these are two wholly orthogonal functions: anonymous voting, and public comment badges. For badges, I’d like to see something much more like, e.g., Discord, where you can apply as many badges as you think apply (e.g. both “agree” and “don’t like tone”), rather than Facebook, where you can apply at most one of the six options.
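To make that distinction concrete, here’s a rough sketch of the two reaction models (the badge names are just placeholders, not anyone’s actual reaction set or API):

```python
# Rough sketch only; badge names are placeholders, not any site's actual API.
from collections import defaultdict

BADGES = {"agree", "disagree", "well sourced", "don't like tone", "unclear", "changed my mind"}

class SingleReactionModel:
    """Facebook-style: at most one reaction per user per comment."""
    def __init__(self):
        self.reactions = {}  # (user, comment_id) -> one badge

    def react(self, user, comment_id, badge):
        assert badge in BADGES
        self.reactions[(user, comment_id)] = badge  # overwrites any earlier choice

class MultiReactionModel:
    """Discord-style: apply as many badges as you think apply."""
    def __init__(self):
        self.reactions = defaultdict(set)  # (user, comment_id) -> set of badges

    def react(self, user, comment_id, badge):
        assert badge in BADGES
        self.reactions[(user, comment_id)].add(badge)

# "agree" and "don't like tone" can coexist only in the multi-reaction model.
r = MultiReactionModel()
r.react("alice", 42, "agree")
r.react("alice", 42, "don't like tone")
print(r.reactions[("alice", 42)])  # {'agree', "don't like tone"}
```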
EDIT: now a feature request.