If we were serious about this, I’d suggest a double-blind experiment: for a randomly selected minority of posts or comments, half of us would see a score higher than the real score and half would see a lower one. Something like +/- per , so the scores still look believable and change as expected when a user votes. We would then see how this affected voting, and whether being influenced by the displayed score correlates with other factors. While the experiment ran, users would be asked not to discuss specific scores.
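(For illustration, here is a minimal Python sketch of how that assignment might work. The treated fraction, the offset size, and all the names are placeholder assumptions, not values from the proposal above.)

```python
import random

def assign_display_offsets(post_ids, fraction=0.1, offset=2, seed=None):
    """Randomly pick a minority of posts; within that minority,
    half get a positive display offset and half a negative one.
    Returns {post_id: offset}, with 0 for untouched posts.
    All parameter values here are illustrative placeholders."""
    rng = random.Random(seed)
    treated = rng.sample(post_ids, k=max(2, int(len(post_ids) * fraction)))
    rng.shuffle(treated)
    half = len(treated) // 2
    offsets = {pid: 0 for pid in post_ids}
    for pid in treated[:half]:
        offsets[pid] = +offset
    for pid in treated[half:]:
        offsets[pid] = -offset
    return offsets

def displayed_score(true_score, offset):
    # The shown score still moves when a user votes, because the
    # fixed offset is applied on top of the live true score.
    return true_score + offset
```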
Great idea. One potential problem with these sorts of experiments, though, is that knowledge (or reasonable suspicion) of the experiment would itself alter users’ behavior.
Yes, but I’m hoping that using a randomly selected minority of posts or comments would help, and I’d expect our guesses as to which posts had been raised or lowered to be interestingly inaccurate. Maybe we could submit our guesses along with the probability we assign to each, and then the calibration test results could be posted… :-)
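(If probability-tagged guesses were collected, calibration could be summarized with something as simple as a Brier score. A minimal sketch; the (probability, actually-raised) pair format is my own assumption about how guesses would be recorded.)

```python
def brier_score(guesses):
    """guesses: list of (stated_probability, actually_raised) pairs,
    where stated_probability is the probability the guesser assigned
    to "this post's score was raised" and actually_raised is a bool.
    Lower is better; always guessing 0.5 earns 0.25."""
    return sum((p - (1.0 if actual else 0.0)) ** 2
               for p, actual in guesses) / len(guesses)

# Example: three guesses, the last one confidently wrong.
print(brier_score([(0.8, True), (0.3, False), (0.9, False)]))  # ~0.313
```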