The usual way has a pretty serious pathology: people tend to vote on the comments that are already the most upvoted, which actually decreases the usefulness of the vote scores (though I suppose that wouldn’t apply to a predictor system).
This is specifically one of the problems [BetterDiscourse] is conceived to address. There are many “basically reasonable” positions/comments that I am happy to promote through an upvote (and most people vote this way, too), but which carry low information content for me because they already match my position, or come close to it. With separate upvote/downvote and insightful/not-insightful reactions, I can switch between looking at the most popular positions among the crowd (and Pol.is, Viewpoints.xyz, and Community Notes further remove political bias from this signal, thus prioritising the “greatest common denominator” position) and the comments that are most likely to have the greatest informational value for me personally.
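For concreteness, here is a minimal sketch of the two orderings. The field names and the `insightful_prob` score are my own assumptions for illustration, not BetterDiscourse’s actual data model:

```python
from dataclasses import dataclass

@dataclass
class Comment:
    text: str
    upvotes: int            # agreement signal from the crowd
    downvotes: int
    insightful_prob: float  # predicted P(this reader marks "insightful")

def rank_by_popularity(comments):
    """Crowd-consensus ordering: net agreement first."""
    return sorted(comments, key=lambda c: c.upvotes - c.downvotes, reverse=True)

def rank_by_information_value(comments):
    """Personal ordering: what is most likely to be new to *this* reader."""
    return sorted(comments, key=lambda c: c.insightful_prob, reverse=True)

comments = [
    Comment("restates the majority view", upvotes=120, downvotes=4, insightful_prob=0.05),
    Comment("niche but surprising objection", upvotes=9, downvotes=3, insightful_prob=0.7),
]

print(rank_by_popularity(comments)[0].text)         # restates the majority view
print(rank_by_information_value(comments)[0].text)  # niche but surprising objection
```

The same pool of comments yields opposite front pages depending on which signal the reader chooses to sort by, which is the whole point of keeping the two reactions separate.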
And to make it clear: the claim that such an “informational value first” comment-ordering model is realistically trainable on a user’s reactions to comments on different topics, and quickly, i.e., on only a few or a few dozen reactions from the user, is currently a hypothesis. I’m not sure there are good ways to test this hypothesis short of just training such a model and seeing whether a large portion of people find it useful.
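As a toy illustration of what “trainable on a few reactions” could mean at its crudest, here is a nearest-neighbour predictor over comment embeddings, fit on a handful of a user’s insightful/not-insightful reactions. The 2-d vectors are made up; a real system would use a proper text-embedding model and far more machinery:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def predict_insightful(reactions, candidate):
    """reactions: list of (embedding, was_insightful) pairs.
    Returns the label of the most similar past comment --
    the crudest possible few-shot model."""
    best = max(reactions, key=lambda r: cosine(r[0], candidate))
    return best[1]

reactions = [
    ([1.0, 0.1], False),  # restated the user's own position
    ([0.1, 1.0], True),   # genuinely new argument for them
]
print(predict_insightful(reactions, [0.2, 0.9]))  # True
```

Whether anything this simple generalises across topics from a few dozen reactions is exactly the open hypothesis above.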
At the beginning of the “Solution” section, I wrote that in principle, the information value of a comment should be partly predictable from the “user’s levels of knowledge in this or that fields, beliefs, current interests, ethics, and aesthetics”, but there is a big question mark over whether this information can be easily inferred from the user’s reactions to other comments, or assessed for a comment in isolation when the prediction model is applied to it.
there is a big question mark whether this information could be easily inferred from user’s reactions to other comments
Right… I don’t think it can: recognizing that is equivalent to being able to recognize surprising truth, which is kind of AGI-complete. There aren’t many top experts in any particular niche, and as soon as any are identified, a huge bulk of users comes to imitate them, so actual experts won’t be an obviously important category to the recommender engine, and it might not be able to tell them apart from their crowd.
For that, we may have to depend on more explicit systems, like webs of trust, for expert recommendations. Users have to apply their own intelligence to identify the real (probable) experts, explicitly communicate that recognition, and then see that those experts have endorsed the comment being shown to them. We follow experts precisely because their taste differs from ours, because their recommendations are not intuitive to us.
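A minimal sketch of what such an explicit web of trust could compute: expand each user’s direct trust one hop, then weigh a comment by how strongly the experts that user (transitively) trusts have endorsed it. The names, the trust weights, and the one-hop damping factor are all illustrative assumptions:

```python
def trust_scores(direct_trust, damping=0.5):
    """Expand direct trust one hop: if A trusts B and B trusts C,
    then A trusts C with a damped weight w(A,B) * w(B,C) * damping."""
    scores = {u: dict(t) for u, t in direct_trust.items()}
    for user, trusted in direct_trust.items():
        for mid, w1 in trusted.items():
            for far, w2 in direct_trust.get(mid, {}).items():
                if far != user:
                    prev = scores[user].get(far, 0.0)
                    scores[user][far] = max(prev, w1 * w2 * damping)
    return scores

def endorsement_weight(user, endorsers, scores):
    """How strongly this user's trusted experts back a given comment."""
    return sum(scores.get(user, {}).get(e, 0.0) for e in endorsers)

direct = {"alice": {"bob": 1.0}, "bob": {"carol": 0.8}}
scores = trust_scores(direct)
# alice trusts carol transitively via bob: 1.0 * 0.8 * 0.5 = 0.4
print(endorsement_weight("alice", ["carol"], scores))
```

Unlike a learned recommender, every number here is traceable to an explicit trust statement some user made, which is what makes the endorsement legible to the person seeing it.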
I should ask: is free-energy reduction something we actually know how to train for? I can see a way of measuring it, but not one that is economically feasible.