I’m also struggling to interpret cases where karma & agreement diverge, and would also prefer a system that lets me understand how individuals have voted. E.g. Duncan’s comment above currently has positive karma but negative agreement, with different numbers of upvotes and agreement votes. There are many potential voting patterns that can have such a result, so it’s unclear how to interpret it.
Whereas in Duncan’s suggestion, a) all votes contain two bits of information and hence take a stand on something like agreement (so there’s never a divergence between numbers of votes on different axes), and b) you can tell if e.g. your score is the result of lots of voters with “begrudging upvotes”, or “conflicted downvotes” or something.
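The ambiguity being described can be made concrete with a small sketch. This is purely illustrative (not the site's actual implementation, and the vote patterns are made up): two different crowds of voters produce identical aggregate (karma, agreement) totals, while a paired-vote scheme like the one proposed keeps the joint distribution visible.

```python
from collections import Counter

def totals(votes):
    """Sum each axis independently, as the current two-counter display does.
    Each vote is (karma_vote, agreement_vote); None = didn't vote on that axis."""
    karma = sum(k for k, _ in votes if k is not None)
    agreement = sum(a for _, a in votes if a is not None)
    return karma, agreement

# Hypothetical vote patterns: three upvoters of whom two disagree...
pattern_a = [(+1, -1), (+1, -1), (+1, None)]
# ...versus mixed participation, some voting on only one axis.
pattern_b = [(+1, None), (+1, None), (+1, -1), (None, -1)]

print(totals(pattern_a))  # (3, -2)
print(totals(pattern_b))  # (3, -2) -- same totals, different underlying stories

# Under the paired-vote proposal, every vote takes a stand on both axes,
# so you can tally the joint distribution and distinguish, say,
# "begrudging upvotes" (+1, -1) from "conflicted downvotes" (-1, +1):
paired = [(+1, -1), (+1, -1), (-1, +1)]
print(Counter(paired))
```

Since the two axes are summed independently in the current display, any information about how votes co-occur within a single voter is lost, which is exactly why many patterns map to the same pair of numbers.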
Whereas in Duncan’s suggestion, a) all votes contain two bits of information and hence take a stand on something like agreement
I didn’t notice that! I don’t want to have to decide whether to reward or punish someone every time I figure out whether they said a true or false thing. Seems like it would also severely exacerbate the problem of “people who say things that most people believe get lots of karma”.