In case anyone stumbles across this post in the future: I found these earlier posts arguing both for and against some of the worries I gloss over here. I don’t think my post boils down entirely to “recommender systems should be better aligned with human interests”, but that is a big theme.
https://forum.effectivealtruism.org/posts/xzjQvqDYahigHcwgQ/aligning-recommender-systems-as-cause-area
https://www.alignmentforum.org/posts/TmHRACaxXrLbXb5tS/rohinmshah-s-shortform?commentId=EAKEfPmP8mKbEbERv