Agree.
Human values are very complex, and most recommender systems don’t even try to model them. Instead, most optimise for proxies like ‘engagement’, which they claim aligns with a user’s ‘revealed preferences’. But revealed preferences are a far cry from true preferences (which are very complex), let alone human values (which are more complex still). I recommend this article for an introduction to some of the issues here: https://medium.com/understanding-recommenders/what-does-it-mean-to-give-someone-what-they-want-the-nature-of-preferences-in-recommender-systems-82b5a1559157