I really like Tournesol's approach, and I have been wondering how to improve the rating system. Not only would I like to rate a video; sometimes I would also like to dispute a rating given by another user.
Suppose I am a seller on AliExpress and have been selling a 4.8-star product for a year. Suddenly I receive a new and rare one-star rating, which goes like this:
“User123: I waiting the product to arrive, can’t wait to see it. When it arrives I will update the rating”
This is not uncommon. A good proportion of ratings have no correlation with the meaning of the rating labels. Sometimes someone misclicks, and sometimes the user simply does something unexpected. A user may call some content unreliable while being factually incorrect themselves.
One thing that could work: if enough people clicked a 'dispute' button to contest a rating given by some user, that user's data would be treated as "noise" until the debate was settled. Alternatively, we could simply lower the weight of their input. Perhaps the user would have to provide an explanation for the deviant input before its weight returned to normal. Otherwise, a few trolls could wreck the system by creating bots to steer ratings toward their preference. But it would be much harder for a troll to justify each instance, and a user with a high proportion of disputed ratings should raise a red flag.
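A minimal sketch of what such a dispute-based down-weighting could look like. The threshold and weight values here are arbitrary assumptions for illustration, not anything Tournesol actually implements:

```python
# Hypothetical sketch of down-weighting disputed ratings.
# The dispute threshold and the weights are arbitrary assumptions.

def rating_weight(num_disputes: int, has_explanation: bool,
                  dispute_threshold: int = 5) -> float:
    """Return the weight applied to one user's rating."""
    if num_disputes < dispute_threshold:
        return 1.0  # lightly disputed input counts fully
    if has_explanation:
        return 1.0  # a justification restores full weight
    return 0.0      # heavily disputed, unexplained input is treated as noise

# A rating that drew 7 disputes and has no explanation is silenced:
print(rating_weight(7, has_explanation=False))  # 0.0
# The same rating, once the user justifies it, counts again:
print(rating_weight(7, has_explanation=True))   # 1.0
```

A smoother variant could shrink the weight gradually with the dispute count instead of cutting it off at a hard threshold.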
In the UI shown in the white paper, I can see five horizontal sliders. Maybe we could add optional explanations so the user can clarify why they give "1 star importance" to a video about a very important topic. Users who explain their ratings should carry more weight than users who provide no explanation. Maybe I could also 'like' another user's rating-plus-explanation, updating both my view on the platform and the final weight of their rating.
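The explanation bonus and peer 'likes' could combine into a single weight along these lines. The base bonus and the damping formula are made-up values, just to show the shape of the idea:

```python
# Hypothetical sketch: explanations boost a rating's weight, and peer
# likes add a damped bonus on top. All constants are arbitrary assumptions.
import math

def explained_weight(base: float, has_explanation: bool, likes: int) -> float:
    """Ratings with an explanation start heavier; likes add a small bonus."""
    if not has_explanation:
        return base
    # log damping keeps a pile of likes (or like-bots) from dominating
    return base * (1.5 + math.log1p(likes) / 10)

print(explained_weight(1.0, has_explanation=False, likes=0))  # 1.0
print(explained_weight(1.0, has_explanation=True, likes=0))   # 1.5
```

The logarithmic damping is one way to address the troll concern above: each additional like is worth less than the previous one.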
Thanks for the interesting comment. To clarify, our current algorithms are by no means a final solution. In fact, our hope is to collect an interesting database and then encourage research on better algorithms that will factor in, e.g., the comments on the videos.
Also, in the "settings" of the rating page, we have a feature that allows contributors to input both their judgments and their confidence in those judgments, on a scale from 0 to 3 stars (the default is 2). One idea could be to require a comment when a contributor claims 3-star confidence. This would allow disputes in the comment section.
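A rough sketch of that validation rule. The field names and the validation logic are assumptions for illustration, not Tournesol's actual data model:

```python
# Hypothetical sketch of the 0-3 star confidence setting with a
# mandatory comment at maximal confidence. Field names are assumptions.
from dataclasses import dataclass

@dataclass
class Rating:
    score: float         # the contributor's judgment on one criterion
    confidence: int = 2  # 0-3 stars, default 2
    comment: str = ""    # optional justification

def validate(rating: Rating) -> bool:
    """Accept a rating only if a 3-star confidence claim is justified."""
    if not 0 <= rating.confidence <= 3:
        return False
    if rating.confidence == 3 and not rating.comment.strip():
        return False  # maximal confidence demands an explanation
    return True

print(validate(Rating(score=4.5, confidence=3)))                 # False
print(validate(Rating(score=4.5, confidence=3, comment="...")))  # True
```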