Alternatively, just allow people to have an “ignored users” file. You can “click to ignore this user” on anybody whose comments you find, on average, consistently not worth reading.
Or, even better, you can apply a “handicap” to certain people, e.g. only view comments by a certain person once they have been upvoted to at least 4 (or whatever).
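A minimal sketch of how such a per-user handicap might be applied at render time; all names here are hypothetical, not actual LessWrong internals:

```python
# Sketch: hide a comment unless its score clears the viewer's
# per-author handicap. Hypothetical names, not real LessWrong code.

def visible_comments(comments, handicaps):
    """comments: list of (author, score, text) tuples.
    handicaps: dict mapping author -> minimum score needed to display."""
    shown = []
    for author, score, text in comments:
        threshold = handicaps.get(author)
        if threshold is None or score >= threshold:  # no handicap: always show
            shown.append((author, score, text))
    return shown

# Example: only see a handicapped user's comments at +4 or better.
comments = [("handicapped_user", 2, "..."), ("alice", 0, "...")]
print(visible_comments(comments, {"handicapped_user": 4}))
# -> only alice's comment is shown
```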
Hm. Right now, you can’t downvote more than you’ve been upvoted. Suppose a Plonk cost 1000 downvotes, could only be applied once per user-pair, and raised the hiding threshold for a user’s comments by 1. So if two people Plonked timtyler, his comments would start disappearing once they’d been voted down to −1, instead of −3. The opposite of a Plonk would be an Accolade, which would make comments harder to hide, lowering the threshold by 1.
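To make the proposed arithmetic concrete, a hedged sketch (assuming the −3 site default mentioned above; names are illustrative):

```python
# Sketch of the proposed rule: each Plonk raises the hide threshold for
# a user's comments by 1, each Accolade lowers it by 1. Illustrative only.

SITE_HIDE_THRESHOLD = -3  # default: comments at -3 or below get hidden

def effective_threshold(plonks, accolades):
    return SITE_HIDE_THRESHOLD + plonks - accolades

def is_hidden(score, plonks=0, accolades=0):
    return score <= effective_threshold(plonks, accolades)

# Two Plonks: comments start disappearing at -1 instead of -3.
assert effective_threshold(plonks=2, accolades=0) == -1
assert is_hidden(-1, plonks=2) and not is_hidden(-1)
```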
Doesn’t actually sound like a good idea to me, but I do sometimes get the sense that there ought to be better incentives for people to take hints.
The automatic threshold effect seems like a bad idea, but displaying the Plonk score alongside total Karma on the user page might be an effective way of making the community’s perception of the user visible.
(I presently have exactly two users I wish to “Plonk”, Tim being one of them and the other someone I would rather only indicate anonymously, and I want a socially appropriate and persistent way of expressing this opinion.)
Being plonked by a single user having a drastic effect on one’s comments’ visibility strikes me as having a lot of downsides.
I’m wondering (aside from the fact that it would be nice to have killfiles) whether it would have a good effect if plonks were anonymous, but the number of plonks each person has received were public.
Note, of course, that the hiding threshold is editable in the first place, so this would have to act as a modifier on that.
I think so. But perhaps the ability to plonk/accolade should only be given to people with a high level of karma, to stop the pathological case where people set up a hundred accounts and accolade themselves (or plonk a rival). Also, people should be able to adjust their personal “plonk horizon”, just as they can with the low-comment threshold at present.
I don’t think plonks and accolades would be globally visible; a plonk would affect what the plonking user sees, but other users would see it as just a regular vote, if at all.
I believe that we could find many ad hoc changes which seem good. But if you understand exactly what it is your ad hoc solutions are trying to accomplish, you may instead be able to find a real solution that actually does what you want, rather than dancing around the issue.
To give a (probably misguided) example of the sort of precise request I might make: I might wish to see as many things that I would upvote as possible, and as few that I would downvote. In addition to having a “Recent Comments” page to visit, I would have a “Things You Would Upvote” page to visit. My upvotes/downvotes not only serve to help the community, they define my own standards for moderation.
Of course there are more sophisticated approaches and subtle issues (if I never see posts I would have downvoted, doesn’t that interfere with the moderation system?), but hopefully that suggests the general flavor.
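As a sketch of what that page might amount to, assuming some per-user vote predictor already exists (building that predictor is the real problem, taken up below):

```python
# Sketch: a "Things You Would Upvote" page is just recent comments
# ranked by a per-user predictor. predict_upvote_prob is an assumed
# function (user, comment) -> probability, not an existing facility.

def things_you_would_upvote(user, recent_comments, predict_upvote_prob,
                            limit=25):
    scored = [(predict_upvote_prob(user, c), c) for c in recent_comments]
    scored.sort(key=lambda pc: pc[0], reverse=True)
    return [c for p, c in scored[:limit] if p > 0.5]
```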
… I would have a “Things You Would Upvote” page...
If I get you correctly, you’d like the system to know the sorts of things you’d downvote and automatically show/hide comments based on your preferences.
This is a great idea.
Apologies if I got your idea wrong… but if not, then sadly, it’s not currently feasible.
After all, matching most users’ actual downvoting preferences (e.g. “excessive profanity” or “religious intolerance” or even just “being wrong”) would require the system to understand the content of comments. Maybe excessive profanity could be picked up easily, but the other two would require an actual AI… AFAIK we’re still working on that one ;)
But even if we only had simpler requirements (e.g. a profanity filter), it’d also be extremely resource-intensive, especially if every single user on the system required this kind of processing. Currently, the lesswrong site is just simple server software. It’s not an AI and does not understand the content of posts. It just displays the posts/comments without digesting their contents in any way. Karma works because somebody else (i.e. the humans out here) does the digesting and understanding of the posts… then they turn their judgement into a simple number (+1 for upvote, −1 for downvote), so that’s all the system has to remember.
Anything else would require text-processing of every single comment… every time the page is displayed. With 50–100 comments on every page, that would be a noticeable increase in the processing time for each page, for only a limited increase in utility.
Of course, as I said—I may have misinterpreted your idea.
If so—let me know what you had in mind.
The point isn’t to determine whether you will like a post by applying sophisticated language processing etc. It’s to determine whether you will like a post by looking at the people who have upvoted/downvoted it and learning how to extrapolate.
For example, suppose Alice always upvotes/downvotes identically to Bob. Of particular interest to Alice are posts Bob has already upvoted. In real life you are looking for significantly more subtle patterns (if you only looked directly at correlations between users’ feedback you wouldn’t get too much advantage, at least not in theory) and you need to be able to do it automatically and quickly, but hopefully it seems plausible that you can use the pattern of upvotes/downvotes to practically and effectively predict what will interest any particular user or the average guest.
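A toy version of the naive correlated-voters approach; the “more subtle patterns” would replace the simple agreement score used here:

```python
# Toy collaborative filter: predict Alice's vote on a comment from the
# votes of users who have historically agreed with her. Illustrative only.

def agreement(votes, a, b):
    """Fraction of co-voted comments on which users a and b cast the
    same vote (+1/-1); 0.5 means no shared history, no information."""
    shared = set(votes[a]) & set(votes[b])
    if not shared:
        return 0.5
    same = sum(votes[a][c] == votes[b][c] for c in shared)
    return same / len(shared)

def predict_vote(votes, user, comment):
    """Weighted average of other users' votes on `comment`, weighted by
    how strongly each voter agrees or disagrees with `user`."""
    num = den = 0.0
    for other, cast in votes.items():
        if other == user or comment not in cast:
            continue
        w = 2 * agreement(votes, user, other) - 1  # maps [0,1] -> [-1,1]
        num += w * cast[comment]
        den += abs(w)
    return num / den if den else 0.0

votes = {
    "alice": {"c1": 1, "c2": -1},
    "bob":   {"c1": 1, "c2": -1, "c3": 1},  # perfect match with alice
}
print(predict_vote(votes, "alice", "c3"))  # 1.0: alice would likely upvote
```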
(nods) I’ve contemplated in other contexts a fully collaboratively-filtered forum… that is, one in which the sort-order for threads to read is controlled by their popularity (karma) weighted by a similarity factor—where an upvote given by someone whose prior voting patterns perfectly match yours is worth 10 points, say, and given by someone whose priors perfectly anti-match yours is worth −10 points, and prorated accordingly for less-perfect matches.
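A hedged sketch of that weighting scheme, with similarity taken as a number in [−1, 1] and scaled to the ±10 range described; threads would then be sorted per-reader by this score:

```python
# Sketch of similarity-weighted karma: each vote on a thread is worth
# between -10 and +10 points depending on how well the voter's history
# matches the reader's. similarity_to_me is assumed precomputed.

def weighted_score(thread_votes, similarity_to_me):
    """thread_votes: list of (voter, +1 or -1) pairs.
    similarity_to_me: dict voter -> similarity in [-1.0, 1.0]."""
    return sum(10 * similarity_to_me.get(voter, 0.0) * vote
               for voter, vote in thread_votes)

# A perfect match's upvote counts +10, a perfect anti-match's counts -10,
# so these two cancel; strangers (similarity 0) count for nothing.
print(weighted_score([("twin", 1), ("nemesis", 1)],
                     {"twin": 1.0, "nemesis": -1.0}))  # 0
```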
But mostly, I think that’s a very useful way to allow large numbers of users with heterogeneous values and preferences to use the same system without getting in each other’s way. It would make sense for a popular politics discussion site, for example.
(The simple version just creates a series of echo chambers, of course. Though some people seem to like that. But further refinements can ameliorate that if desired.)
LW doesn’t seem to have that goal at all. Instead, it endorses particular values and preferences and rejects others, and when discussions of filtering come up they are framed as how to more efficiently implement those particular rejections and endorsements.
So mostly, collaborative filtering seems like it solves a problem this site hasn’t got.
You can use collaborative learning for other purposes. For example, suppose I wanted to show a user posts which Eliezer Yudkowsky would upvote (a “Things EY would Upvote” tab...), rather than posts they personally would upvote. This allows a moderator to implicitly choose which segment of users has the “right” taste, without having to explicitly upvote/downvote every individual post.
I don’t know if imposing one individual’s taste is such a good idea, but it is an option. It seems like you should think for a while about what exactly you want, rather than just proposing mechanisms and then evaluating whether you like them or not. Once you know what you want, then we have the theoretical machinery to build a mechanism which implements your goal well (or, we can sit down for a while and develop it).
Also, it is worth pointing out that you can do much better than just weighting votes by similarity factors. In general, it may be the case that Alice and Bob have never voted on the same comment, and yet Alice still learns interesting information from Bob’s vote. (And there are situations where weighting by similarity breaks down quite explicitly.) My point is that instead of doing something ad-hoc, you can employ a predictor which is actually approximately optimal.
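One standard way to beat pairwise similarity is a latent-factor model fit to the whole vote matrix: information then flows between Alice and Bob through intermediate voters even when they never co-vote. A minimal gradient-descent sketch, not any particular production algorithm:

```python
import numpy as np

# Toy matrix-factorization predictor: learn a low-dimensional taste
# vector per user and per comment from observed votes, then predict
# unobserved votes as dot products. A sketch, not a tuned algorithm.

rng = np.random.default_rng(0)

def fit(observed, n_users, n_comments, k=2, steps=2000, lr=0.05, reg=0.01):
    """observed: list of (user_idx, comment_idx, vote in {-1, +1})."""
    U = 0.1 * rng.standard_normal((n_users, k))     # user taste vectors
    C = 0.1 * rng.standard_normal((n_comments, k))  # comment vectors
    for _ in range(steps):
        for u, c, v in observed:
            err = U[u] @ C[c] - v
            u_old = U[u].copy()
            U[u] -= lr * (err * C[c] + reg * U[u])
            C[c] -= lr * (err * u_old + reg * C[c])
    return U, C

# Alice (0) and Bob (1) never vote on the same comment, but both agree
# with Carol (2), so Bob's lone upvote of comment 3 still informs the
# prediction for Alice.
observed = [(0, 0, +1), (2, 0, +1),   # Alice, Carol like comment 0
            (1, 1, +1), (2, 1, +1),   # Bob, Carol like comment 1
            (0, 2, -1), (2, 2, -1),   # Alice, Carol dislike comment 2
            (1, 3, +1)]               # only Bob has voted on comment 3
U, C = fit(observed, n_users=3, n_comments=4)
print(U[0] @ C[3])  # expected positive: Alice would likely upvote it
```

The same machinery gives a “Things EY would Upvote” tab for free: rank comments by the dot product with EY’s taste vector instead of the viewer’s own row.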
It seems like you should think for a while about what exactly you want, rather than just proposing mechanisms and then evaluating whether you like them or not.
Fair enough. Apologies for wasting your time with undirected musings.
In terms of what I want, everything I can think of shares the property of being more useful in a more heterogeneous environment. I put together a wishlist along these lines some months ago. But within an environment as homogeneous as LW, none of that seems worth the effort.
That said, I would find it at least idly interesting to be able to switch among filters (e.g., “Things EY would upvote”, “Things Yvain would upvote”, etc.), especially composite filters (e.g., “Things EY would upvote that aren’t things Yvain would upvote,” “90% things EY would upvote and 10% things he wouldn’t”, etc.).
Hmmm, so a kind of Amazon-style “people who liked posts by X also liked posts by Y” idea. Could be interesting.
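Composite filters are cheap once per-user predictors exist; a hedged sketch, where predict (an assumed upvote-probability function, not an existing facility) does the real work:

```python
# Sketch: composite filters as arithmetic on predicted upvote
# probabilities. predict(user, comment) -> probability is assumed.

def ey_not_yvain(predict, comment):
    # "Things EY would upvote that aren't things Yvain would upvote"
    return predict("EY", comment) * (1 - predict("Yvain", comment))

def mixed_page(predict, comments, mix=0.9, size=20):
    # "90% things EY would upvote and 10% things he wouldn't": fill most
    # slots from the top of the EY ranking, the rest from the bottom.
    ranked = sorted(comments, key=lambda c: predict("EY", c), reverse=True)
    n_top = round(mix * size)
    bottom = ranked[-(size - n_top):] if size > n_top else []
    return ranked[:n_top] + bottom
```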
But if you understand exactly what it is your ad hoc solutions are trying to accomplish, you may instead be able to find a real solution that actually does what you want, rather than dancing around the issue.
I would love this to be the case. Unfortunately, we’re talking about human behaviour here, and specifically, talking about the fact that, for some people, that behaviour doesn’t change even though other attempts have been made to actually address the real issue.
From having been present in forums that drowned under the weight of such people, I think it’s also a good idea to have a backup plan. Especially one where the noise can still exist, but can be “filtered out” at will.
if I never see posts I would have downvoted, doesn’t that interfere with the moderation system
Right now, downvoted comments are hidden if they reach a certain threshold. The sorts of posts that are downvoted to this level are rude and uselessly inflammatory. Still, they are not “totally hidden”. They are shown, in place, just as a “there is a hidden comment” link. If you want to see them, all you have to do is click on the link, and you can decide for yourself whether that post deserved the harsh treatment (i.e. it does not interfere with moderation).
You can also adjust your own downvote threshold, e.g. to hide all comments downvoted anywhere from −1 down… or to show them all until they’re at −10, which is what I’ve actually done. If you want, you can choose a sufficiently large negative value and you will probably never see a hidden comment.
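For reference, the existing behaviour amounts to something like this sketch, with −3 as the hypothetical site default:

```python
# Sketch of the current behaviour: comments at or below the viewer's
# chosen threshold render as a "hidden comment" link instead of text.

DEFAULT_THRESHOLD = -3  # hypothetical site default, per the Plonk thread

def render(score, text, user_threshold=DEFAULT_THRESHOLD):
    if score <= user_threshold:
        return "[there is a hidden comment - click to expand]"
    return text

print(render(-5, "flamebait"))                      # hidden by default
print(render(-5, "flamebait", user_threshold=-10))  # shown at a lax setting
```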