Even though we do not have access to the actual numbers, we can still attempt to build a decent map of the territory. After all, that’s what LW is all about, right? So here is my first crack at it.
I’d guess that there is a correlation between the total number of votes on a post and its total number of comments. My rationale: roughly the same fraction of readers who vote will also comment, regardless of the post.
If true, this can be calibrated using some popular but non-controversial posts, where nearly all votes are expected to be upvotes. Thus, in the absence of better indicators, you can count the (top-level) comments, scale that count by the (yet to be determined) comment/vote factor, and get an estimate of the total number of votes.
Now, in the spirit of rationality, here is how this model can be tested: find several uniformly well-received posts from different (popular) authors with a fair number of comments, apply the above metric (post karma / # comments), and see if it is reasonably stable (the values falling within, say, 50% of each other). A quick glance at some of the more popular posts suggests a value of 0.5 to 1.5 for karma/#comments, not accounting for controversy. If this holds, then a post with karma/#comments significantly below, say, 1/3 would mean that people are downvoting the post a lot, rather than ignoring it. I suppose this is a long-winded way to say “just count the comments!”
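The calibration-and-flagging procedure above can be sketched in a few lines of Python. The (karma, comment count) pairs here are made up for illustration; real numbers would come from actually tallying top-level comments on LW posts, and the 50% stability band and 1/3 threshold are the rough cutoffs suggested above, not anything definitive:

```python
# Hypothetical (name, karma, top-level comment count) for well-received posts.
# These numbers are illustrative only.
posts = [
    ("post_a", 45, 40),
    ("post_b", 60, 55),
    ("post_c", 30, 22),
]

# Step 1: compute karma/#comments for each calibration post.
ratios = [karma / comments for _, karma, comments in posts]
mean_ratio = sum(ratios) / len(ratios)

# Step 2: the model is "reasonably stable" if every ratio stays
# within ~50% of the mean ratio.
stable = all(abs(r - mean_ratio) / mean_ratio <= 0.5 for r in ratios)

# Step 3: flag a post as likely downvoted (rather than ignored) if its
# ratio falls well below the calibrated range, e.g. under 1/3.
def likely_downvoted(karma, comments, threshold=1 / 3):
    return comments > 0 and karma / comments < threshold
```

With the toy numbers above, `stable` comes out true, and a post with karma 10 on 40 comments (ratio 0.25) would be flagged as downvoted rather than ignored.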
This is, of course, only a zeroth-approximation map, and it’s easy to see how one could improve it, but one always needs a starting point.