I don’t know how it works, but if you have user buckets for basic political denominations (as your use of language somewhat suggests), then the buckets determine whose posts will compete for the same audiences. If the groups are not formed by some dynamics, that is, based on user choice or some “fair” mechanism, then the decision about which buckets exist is a moderation choice. For example: do we recognise subcategories of libertarians? There are charged terms like “free market” or “socialism” that can have (semi-)standardised meanings within a group (for some, “socialism” might mean anything they don’t like that vaguely smells red, while for others it might be a specific political line within the left spectrum, distinct from other leftist ideologies). While it would be good discussion practice to always make sure the intended sense is sufficiently clear, the bucketing matters here: if a group often uses the same terms with slight deviations, keeping it as one bucket (i.e. all libertarians) makes differentiating between the senses harder, while splitting it into multiple buckets (i.e. each kind of libertarian as its own kind) avoids forming such wide stereotypes (but then there is the difficulty of keeping up with the “thought zoo”).
Saying that a probability is used doesn’t tell me anything about what the probability is based on. It just tells me that the result is a sliding scale between 0 and 1, not whether it’s a completely made-up number. What is the reference class used in the “liked cases per viewed cases” estimation? For example, I could use a weighted average of a denomination score, an author score and a recommendation score. There are many possible weightings, and many details about how to turn individual recommendations into a recommendation score, but I could be skeptical of this method of scorekeeping in that it’s just an averaging of naive methods. No amount of weighting can get rid of the structures it inherits; we could only find a tolerable balance between the ways it’s stupid (choosing between flavours to make an okay taste). That kind of sophistication would still require moderation choices in the denomination bucket definitions, users with great variance in their writing content would still be harder to peg correctly, and you would still get the echo chamber effect of users of a set mind being more likely to be exposed to content that resonates with them.
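To make the “weighted average of naive methods” idea concrete, here is a hypothetical sketch; the score names and weights are my own illustrative assumptions, not anything from the actual system:

```python
# Hypothetical sketch of blending naive scores into one probability.
# All score names and weight values here are illustrative assumptions.

def predicted_like_probability(denomination_score, author_score,
                               recommendation_score,
                               weights=(0.3, 0.3, 0.4)):
    """Blend three naive scores in [0, 1] into one [0, 1] estimate.

    Whatever the weights, the blend inherits the structure (and the
    blind spots) of each component score; tuning the weights only
    rebalances the flavours.
    """
    scores = (denomination_score, author_score, recommendation_score)
    total = sum(w * s for w, s in zip(weights, scores))
    return total / sum(weights)
```

Whatever weighting is chosen, the output stays inside the span of the three component scores, which is exactly the inheritance problem described above.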
For example, if there is a “pros and cons” view, that means you need to be able to categorise posts as pros and cons (duh!). But how do posts that are neither pro nor con appear in such a view? How do you determine whether a post is pro or con? Does the author check a box? Do readers check a box? This can incentivise an adversarial mindset and framing; it could be seen as a medium property advocating politics as war. Can a post be a fraction pro and a fraction con if it is pro with reservations? Is that kind of post something different from a low-quality fully pro post?
I don’t know how it works but if you have user buckets for basic political denominations
Users’ preferences are determined based on how they rate content, not on how they self-label.
Saying that a probability is used doesn’t tell me anything about what the probability is based on. It just tells me that the result is a sliding scale between 0 and 1, not whether it’s a completely made-up number.
I don’t think users need to know the actual equations (especially since the math is somewhat complicated). But they would easily find out if the numbers are made up (average probabilities for comments they like would be the same as for comments they don’t like).
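The check described above can be sketched concretely (the history data here is made up for illustration): if the displayed probabilities carry real information, the average prediction for comments a user ended up liking should be higher than the average for comments they didn’t, whereas made-up numbers would show no such gap.

```python
# Sanity check on displayed probabilities, with made-up toy data.

def mean(xs):
    return sum(xs) / len(xs)

def probabilities_look_informative(history):
    """history: list of (predicted_probability, user_liked_it) pairs.

    Returns True when predictions for liked comments average higher
    than predictions for disliked ones, i.e. the numbers carry signal.
    """
    liked = [p for p, liked_it in history if liked_it]
    disliked = [p for p, liked_it in history if not liked_it]
    return mean(liked) > mean(disliked)

toy_history = [(0.8, True), (0.7, True), (0.3, False), (0.4, False)]
print(probabilities_look_informative(toy_history))  # → True
```

A user could run this kind of comparison on their own history without ever seeing the underlying equations.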
Our recommendation system is based on principles of collaborative filtering. The average recommendation accuracy depends on the number of ratings in our database. With a relatively small number of users we can distinguish basic population clusters (e.g., left vs right or highbrow vs lowbrow). With a larger dataset we would be able to make more nuanced distinctions.
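As an illustration of the general principle (not the actual system’s math), here is a minimal user-based collaborative-filtering sketch: each user is a vector of ratings, similar users drive each other’s predictions, and opposite clusters push in opposite directions. The users and ratings are invented for the example.

```python
# Minimal user-based collaborative filtering sketch. Ratings are
# +1 (liked), -1 (disliked), 0 (unseen). All data is invented.
import math

def cosine(u, v):
    """Cosine similarity between two rating vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def predict(ratings, user, item):
    """Similarity-weighted average of other users' ratings for item."""
    num = den = 0.0
    for other, r in ratings.items():
        if other == user or r[item] == 0:
            continue
        sim = cosine(ratings[user], r)
        num += sim * r[item]
        den += abs(sim)
    return num / den if den else 0.0

# Even a handful of ratings separates two rough taste clusters:
ratings = {
    "alice": [1, 1, -1, 0],    # hasn't seen item 3 yet
    "bob":   [1, 1, -1, -1],   # similar taste to alice
    "carol": [-1, -1, 1, 1],   # opposite taste
}
print(predict(ratings, "alice", 3))  # → -1.0 (a dislike is predicted, matching bob)
```

With more users and ratings, the same machinery separates finer clusters than just two, which is the “more nuanced distinctions” point above.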