If true, that would imply an even higher potential value of meta-filtering (users can choose which other users’ feedback they want to modulate their experience).
I don’t think this follows… after all, once you’re whitelisting a relatively small set of users you want to hear from, why not just get those users’ comments, and skip the voting?
(And if you’re talking about a large set of “preferred respondents”, then… I’m not sure how this could be managed, in a practical sense?)
That’s why it’s a hard problem. The idea would be to get leverage by letting you say “I trust this user’s judgement, including about whose judgement to trust”. Then you use something like (personalized) PageRank / eigenmorality https://scottaaronson.blog/?p=1820 to get useful information despite the circularity of “trusting who to trust about who to trust about …”, leveraging all users’ trust ratings.
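To make the circularity concrete: the “trust who to trust” recursion can be resolved as a fixed point, exactly as PageRank does for links. Below is a minimal sketch (not anyone’s actual implementation, and the trust matrix is a made-up example) of personalized PageRank over a trust graph, where `trust[i][j]` is the weight user `i` assigns to user `j`’s judgement and the teleport vector is concentrated on the user whose perspective we’re computing:

```python
import numpy as np

def personalized_pagerank(trust, seed, damping=0.85, tol=1e-10, max_iter=1000):
    """Fixed-point trust scores from `seed`'s point of view.

    trust[i][j] = weight user i assigns to user j's judgement.
    Returns one score per user; higher = more trusted (transitively).
    """
    T = np.asarray(trust, dtype=float)
    n = T.shape[0]
    # Row-normalize so each user's outgoing trust sums to 1.
    row_sums = T.sum(axis=1, keepdims=True)
    P = np.divide(T, row_sums, out=np.zeros_like(T), where=row_sums > 0)
    # Teleport mass goes entirely to the seed user (the "personalized" part).
    teleport = np.zeros(n)
    teleport[seed] = 1.0
    dangling = (row_sums.ravel() == 0)  # users who rated nobody
    scores = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        # Dangling users' mass is redirected to the teleport vector
        # so total probability mass is conserved.
        new = damping * (scores @ P + scores[dangling].sum() * teleport) \
              + (1 - damping) * teleport
        if np.abs(new - scores).sum() < tol:
            return new
        scores = new
    return scores

# Hypothetical example: user 0 trusts user 1, user 1 trusts user 2,
# user 2 has rated nobody. User 0 never rated user 2 directly, yet
# user 2 still gets a positive score via transitive trust.
scores = personalized_pagerank([[0, 1, 0],
                                [0, 0, 1],
                                [0, 0, 0]], seed=0)
```

The point of the sketch is the leverage: a user only rates a handful of people directly, but the eigenvector-style fixed point propagates those ratings through everyone else’s ratings, so the effective “preferred respondents” set can be large without anyone managing it by hand.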