Item 1 would only seem useful when you have sufficient trusted expert rankings to calibrate against, but still need the votes to extrapolate elsewhere (and where you expect trusted experts to align with your audience—if experts routinely downvote dark ales, and your audience prefers them, you’re going to get a wonky heuristic). Basically, at that point, you’re JUST using votes as a way to predict and extrapolate expert rankings, and I’d expect there are usually better heuristics for that which don’t require user votes.
Item 2 strikes me as clever and ideal, but I’d think you’d need quite a lot of data before you could actually calibrate that. So you’re stuck using 0.05 until then (see the sketch below).
(Customer satisfaction surveys, etc. also run into the “resource-intensive” issue.)
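For concreteness, here’s a rough sketch of the kind of heuristic I’m picturing, assuming the 0.05 is the significance level on something like a Wilson score lower bound over up/down votes (the formula choice and the names here are just my illustration, not anything fixed above):

```python
import math
from statistics import NormalDist

def wilson_lower_bound(upvotes: int, total: int, alpha: float = 0.05) -> float:
    """One-sided Wilson score lower bound for the true upvote rate.

    alpha is the significance level: the 0.05 you're stuck with until
    there's enough data to calibrate something better.
    """
    if total == 0:
        return 0.0
    z = NormalDist().inv_cdf(1 - alpha)  # ~1.645 for alpha = 0.05
    phat = upvotes / total
    centre = phat + z * z / (2 * total)
    spread = z * math.sqrt((phat * (1 - phat) + z * z / (4 * total)) / total)
    return (centre - spread) / (1 + z * z / total)

# A few votes get pulled toward zero much harder than many votes
# at the same 80% ratio:
print(wilson_lower_bound(4, 5))      # ~0.44
print(wilson_lower_bound(400, 500))  # ~0.77
```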
> Item 1 would only seem useful when you have sufficient trusted expert rankings to calibrate against, but still need the votes to extrapolate elsewhere [...]
Exactly. Remember, the whole point of this procedure is to tweak how much credibility you give to voters as a function of the number of voters you have—the only reason I mention experts is that they bypass the sample size problem.
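Something like this sketch, say, where expert rankings supply the prior and the vote count controls how far the crowd can pull you away from it (the shrinkage form and the prior_weight value are illustrative assumptions, not the actual procedure):

```python
def shrunk_score(vote_mean: float, n_votes: int,
                 prior_mean: float, prior_weight: float = 20.0) -> float:
    """Pull the observed vote average toward a prior.

    prior_mean could come from trusted expert rankings (they bypass the
    sample-size problem); prior_weight is the number of votes at which
    the crowd and the prior count equally. Both are illustrative values.
    """
    return (prior_weight * prior_mean + n_votes * vote_mean) / (prior_weight + n_votes)

# With 5 votes the expert prior dominates; with 500, the voters do.
print(shrunk_score(0.9, 5, prior_mean=0.6))    # ~0.66
print(shrunk_score(0.9, 500, prior_mean=0.6))  # ~0.89
```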
> (and where you expect trusted experts to align with your audience—if experts routinely downvote dark ales, and your audience prefers them, you’re going to get a wonky heuristic)
Okay, that’s a problem. I think it’s a subset of the earlier problem of finding trusted expert rankings, however.
> Item 2 strikes me as clever and ideal, but I’d think you’d need quite a lot of data before you could actually calibrate that. So you’re stuck using 0.05 until then.
If you don’t have a lot of data, you’re not going to have much to offer your users anyway.