You can get the rating statistics of your LW comments by registering on Omnilibrium and then clicking on this link.
It’s an interesting possibility. But I have looked at the data, and for all ten users the comments above 1000 characters get higher average ratings than the shorter ones.
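For anyone who wants to run the same check on their own comments, here is a minimal sketch; the record format (user id, character count, rating) is my assumption, not an actual export format from the site.

```python
# Minimal sketch: per-user average rating for long vs. short comments.
# The record format below is assumed for illustration only.
from collections import defaultdict
from statistics import mean

def long_vs_short(comments, threshold=1000):
    """comments: iterable of dicts like {"user": ..., "chars": ..., "rating": ...}."""
    buckets = defaultdict(lambda: {"long": [], "short": []})
    for c in comments:
        key = "long" if c["chars"] > threshold else "short"
        buckets[c["user"]][key].append(c["rating"])
    return {user: {k: (mean(v) if v else None) for k, v in b.items()}
            for user, b in buckets.items()}
```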
Would statistical feedback on the style and content of your posts be useful to you?
[pollid:1010]
as people believe in fundamentally different political values and philosophies they cannot really make a lot of progress towards a consensus on the object level
At least in theory, it may be possible for people to find common objectives even when their values are fundamentally different. For instance, some conservatives support raising the minimum wage on the ground that it reduces the number of low-skill jobs and deters illegal immigration.
I would probably add a historical debate section as well.
History is already included as one of the main sections (though it currently includes only one article and one debate topic). You just need to click on “History” below the banner to get to it. Once there are enough posts on the topic of political philosophy, it can also be added as a separate section.
You are welcome to open a new debate about the Spanish Civil War (personally, I also find the topic interesting).
I’m sure there will be some correlations but I would not know what to do with them. Traits like conscientiousness have no obvious connection to my question. Openness to new experiences is sometimes used as a proxy for open-mindedness, but to me this seems a little farfetched. Is there a strong reason to believe that an adventurous eater will be more open-minded on political questions?
Suppose, for the sake of the argument, that my own data is totally wrong and consider the same question for a purely hypothetical case:
Group A upvotes only its own comments. Group B preferentially upvotes its own comments. Is there a way to tell whether the difference lies in the quality of the comments or in the character of the group members?
Suppose people are divided by some arbitrary criterion (e.g., blondes vs. brunettes), and it then turns out that blondes upvote brunettes much more often than vice versa. You could still ask the same question.
Regarding elevation, I simply wanted a short, easy-to-understand title, and it did not occur to me that it would be perceived as prejudicial.
The word “better” may be replaced with “more coherent” or even “more grammatically correct”. Fundamentally, the question is whether the difference in ratings arises from differences in comment quality (other than political orientation) or from differences between those who rate them.
is it possible that the way you are choosing “principal vectors” is entangled with how the resulting clusters rate?
The system chooses vectors automatically. But I think the above question would still be valid even if people were divided in two groups in some totally arbitrary way.
Yes.
In the “optimate” vs “populare” case, the difference was significant at about 2.5 sigmas. I don’t remember the exact values in the “left” vs “right” case, but it was over 10 sigmas.
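For readers unfamiliar with the “sigmas” phrasing, a standard two-proportion z-test gives the number of standard errors separating two groups’ upvote rates. The sketch below uses invented counts purely for illustration; it is not the actual site data, nor necessarily the exact test used.

```python
# Two-proportion z-test: how many standard errors apart are two upvote rates?
from math import sqrt

def z_score(up_a, total_a, up_b, total_b):
    p_a, p_b = up_a / total_a, up_b / total_b
    p_pool = (up_a + up_b) / (total_a + total_b)     # pooled upvote rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / total_a + 1 / total_b))
    return (p_a - p_b) / se

# Example with invented counts: z_score(300, 500, 230, 500) is about 4.4 sigmas.
```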
Would your algorithm sort the people who don’t strongly agree with either side with the “optimates”, since their preferences are closer to the “optimate” group than the “populare” group?
In principle, this is possible. The system assigns each user a number corresponding to his/her position on the “left-right” (“populare-optimate”) axis. If, based on their votes, 25% of users are assigned “-10”, 50% are assigned “10” and 25% are assigned “0”, then the average is “2.5”, which would make those with “0” into “left-wingers”.
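A toy version of that calculation, using the same made-up percentages:

```python
# Illustrative only: label users relative to the average position on the axis.
positions = [-10] * 25 + [10] * 50 + [0] * 25   # 25% at -10, 50% at +10, 25% at 0
average = sum(positions) / len(positions)        # 2.5
labels = ["left" if p < average else "right" for p in positions]
# Users at 0 sit below the 2.5 average, so they end up labeled "left".
```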
would that produce the effect you’re seeing, since half the “optimate” group are upvoting more or less equally?
At least in our first group (where the effect was the strongest and the distribution was pretty close to Gaussian), this is not what happened.
As I’ve written above, the two groups may not be representative of the LW community or the US population. But within each group the differences were statistically significant, so the question about their origin would be valid in any case.
Sure.
The system assigns the “left-wing” and “right-wing” (“populare” and “optimate”) labels by comparing each user’s preferences to the average preferences of all users, so the two sides are nearly equal in size. In any case, the 27% difference was in the proportions of positive votes, not in the absolute numbers of upvotes.
It is similarity-based.
One essential difference is that our recommendation system is guided by individual rather than group preferences. Reddit is based on finding the lowest common denominator.
I don’t know how it works but if you have user buckets for basic political denominations
Users’ preferences are determined based on how they rate content, not on how they self-label.
Saying that a probability is used doesn’t tell me anything about what the probability is based on. It just tells me that the result is a sliding scale between 0 and 1, but not whether it’s a completely made-up number.
I don’t think users need to know the actual equations (especially since the math is somewhat complicated). But they could easily find out whether the numbers are made up: the average probabilities for comments they like would then be the same as for comments they don’t like.
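That check could look something like this; the data format is assumed for illustration:

```python
# If the displayed probabilities were made up, the average probability for
# comments a user likes would be about the same as for comments they dislike.
from statistics import mean

def probability_gap(records):
    """records: list of (predicted_probability, liked) pairs for one user."""
    liked = [p for p, flag in records if flag]
    disliked = [p for p, flag in records if not flag]
    return mean(liked) - mean(disliked)   # near zero => the numbers carry no information
```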
Our recommendation system is based on principles of collaborative filtering. The average recommendation accuracy depends on the number of ratings in our database. With a relatively small number of users we can distinguish basic population clusters (e.g., left vs right or highbrow vs lowbrow). With a larger dataset we would be able to make more nuanced distinctions.
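For concreteness, here is a minimal user-based collaborative-filtering sketch in that spirit. The cosine-similarity weighting and the data layout are my assumptions; the site’s actual algorithm is more involved.

```python
import numpy as np

def predict(ratings, user, item):
    """Predict `user`'s rating of `item` from the votes of similar users.
    ratings: users x items array with np.nan where a user has not voted."""
    sims, votes = [], []
    for other in range(ratings.shape[0]):
        if other == user or np.isnan(ratings[other, item]):
            continue
        both = ~np.isnan(ratings[user]) & ~np.isnan(ratings[other])
        if both.sum() < 2:                       # too little overlap to compare
            continue
        a, b = ratings[user, both], ratings[other, both]
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        if denom == 0:
            continue
        sims.append(float(a @ b) / denom)        # cosine similarity to this user
        votes.append(ratings[other, item])
    if not sims:
        return None                              # cold start: no comparable voters
    sims, votes = np.array(sims), np.array(votes)
    return float((sims * votes).sum() / np.abs(sims).sum())
```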
Also, how long does it take until I receive the authentication email?
Sorry for the confusion.
To avoid a cold start, we wanted to sign up a sufficiently large group of people before opening the discussions. The site is scheduled to open on May 1 (you’ll receive an email notification).
There doesn’t seem to be a way to propose new or non-traditional discussion topics
the discussion seems to be US-centric
The site is not officially open yet. So far, we just had several test runs with randomly selected people.
Each point on the graph corresponds to an average of several hundred (about two thousand for the middle graph) data points. The number of short posts is indeed greater than the number of long posts, so the horizontal distance between the points on the graph increases with the number of characters.
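A sketch of how such points can be produced, assuming equal-sized bins by character count (the bin size and data format are my assumptions):

```python
# Bin comments into equal-sized groups by character count and average each bin.
def binned_averages(comments, bin_size=500):
    """comments: list of (char_count, rating) pairs; returns (mean_chars, mean_rating) per bin."""
    ordered = sorted(comments)                   # sort by character count
    points = []
    for i in range(0, len(ordered), bin_size):
        chunk = ordered[i:i + bin_size]
        mean_chars = sum(c for c, _ in chunk) / len(chunk)
        mean_rating = sum(r for _, r in chunk) / len(chunk)
        points.append((mean_chars, mean_rating))
    return points
# Because short posts are far more numerous, consecutive bins cover a narrow
# character range at the short end and a wide range at the long end.
```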