A logic which applies only to people who are interested in getting a warm glow and not to people interested in helping. Diversifying charitable investments maximizes your chance of getting at least some warm glow from “having helped”. It does not help people as best they can be helped.
I’m beginning to think that LW needs some better mechanism for dealing with the phenomenon of commenters who are polite, repetitive, immune to all correction, and consistently wrong about everything. I know people don’t like it when I say this sort of thing, but seriously, people like that can lower the perceived quality of a whole website.
I’m beginning to think that LW needs some better mechanism for dealing with the phenomenon of commenters who are polite, repetitive, immune to all correction, and consistently wrong about everything.
The problem is quite simple. Tim, and the rest of the class of commenters to which you refer, simply haven’t learned how to lose. This can be fixed by making it clear that this community’s respect is contingent on retracting any inaccurate positions. Posts in which people announce that they have changed their mind are usually upvoted (in contrast to other communities), but some people don’t seem to have noticed.
Therefore, I propose adding a “plonk” button on each comment. Pressing it would hide all posts from that user for a fixed duration, and also send them an anonymous message (red envelope) telling them that someone plonked them, which post they were plonked for, and a form letter reminder that self-consistency is not a virtue and a short guide to losing gracefully.
Posts in which people announce that they have changed their mind are usually upvoted
As a total newbie to this site, I applaud this sentiment, but have just gone through an experience where this has not, in fact, happened.
After immediately retracting my erroneous statement (and explaining exactly where and why I’d gone wrong), I continued to be hammered over arguments that I had not actually made. My retracted statements (which I’ve left in place, along with the edits explaining why they’re wrong) stay just as down-voted as before...
My guess is that some of the older members of this site may realise that this is how it’s supposed to work… but it certainly hasn’t got through to us newbies yet ;)
Perhaps it should be added to the etiquette section in the newbie pages (eg the karma section in the FAQ)?
I hereby suggest once again that “Vote up” and “Vote down” be changed to “More like this” and “Less like this” in the interface.
OTOH, there’s the reasonable counterargument that anyone who needs to be told this won’t change their behaviour because of it—i.e., rules against cluelessness don’t have anything to work via.
Translation: I haven’t managed to convince you, therefore you must be punished for your insolent behaviour of not being convinced by my arguments. I cannot walk away from this and leave you being wrong; you must profess to agree with me, and if you are not rational enough to understand and accept logical arguments then you will be forced to profess agreement.
Who did you say hasn’t learned how to lose?
I’m beginning to think that LW needs some better mechanism for dealing with the phenomenon of commenters who are polite, repetitive, immune to all correction, and consistently wrong about everything. I know people don’t like it when I say this sort of thing, but seriously, people like that can lower the perceived quality of a whole website.
Warn, then ban the people involved.
If you decide that refusing to be convinced by evidence, while also being unable to convincingly counter it, yet continuing to argue anyway is bad form for the LW that you want to create, then stand by that decision and act on it.
Translation: [...] I cannot walk away from this and leave you being wrong; you must profess to agree with me, and if you are not rational enough to understand and accept logical arguments then you will be forced to profess agreement.
I never said anything about using force. Not that there’s anything wrong with that, but it’s a different position, not a translation.
If you can clarify the distinction you draw between the use of force and the use of punishments to modify behavior and why that distinction is important, I’d be interested.
Of course. The defining difference is that force can’t be ignored, so threatening a punishment only constitutes force if the punishment threatened is strong enough; condemnation doesn’t count unless it comes with additional consequences. Force is typically used in the short term to ensure conformance with plans, while behaviour modification is more like long-term groundwork. Well-executed behaviour modifications stay in place with minimal maintenance, but the targets of force will become more hostile with each application. If you use a behaviour modification strategy when you should be using force, people may defy you when you can ill afford it. If you use force when you should be using behaviour modification strategies, you will accumulate enemies you don’t need.
So, if sfb edits the parent to read “then we will rely on punishment to modify your behavior so you profess agreement” instead of “then you will be forced to profess agreement,” that addresses your objection?
Memory charms do have their uses. Unfortunately, they seem to only work in universes where minds are ontologically basic mental entities, and the potions available in this universe are not fast, reliable or selective enough to be adequate substitutes.
Interesting, I would have guessed that memory modification would be easier when minds aren’t ontologically basic mental entities because there are then actual parts of the mind that one can target.
You (probably) know what I meant, and whether or not you mentioned force specifically doesn’t change the gist of the “translation”. A weaselly objection.
and a form letter reminder that self-consistency is not a virtue [..] making it clear that this community’s respect is contingent on [..]
Is changing professed beliefs to something else without understanding / agreeing with the new position, but just doing it to gain community respect, a virtue?
Tim, and the rest of the class of commenters to which you refer, simply haven’t learned how to lose.
Or he still isn’t convinced that he is wrong by the time you have exhausted your tolerance for explaining, so you give up and decide he must be broken. Your proposed ‘solution’ is a hack so you can give up on convincing him but still have him act convinced for the benefit of appearances—maybe you are simply expecting far, far too short inferential distances?
Escalating punishment so someone “learns better” can work, but it requires real punishments, not symbolic ones. It’s not clear to me that “plonking” would accomplish that.
And, of course, it has all the same problems that punishment-based behavior modification always has.
online communities, being largely by and for geeks, dislike overt exclusionary tactics because it brings up painful associations. I think well established communities often have more to gain from elitism than they stand to lose.
online communities, being largely by and for geeks, dislike overt exclusionary tactics because it brings up painful associations. I think well established communities often have more to gain from elitism than they stand to lose.
These two statements are contradictory. Did you swap “gain” and “lose” in the second statement?
I think he meant the second sentence as an observation about what communities actually would benefit from. The first sentence is an observation about what preferences people have due to cultural issues. In this case, he is implying that general preferences don’t match what is actually optimal.
One solution is to try to convince people to downvote more aggressively.
A second solution, which is one of my current research projects, is to develop a more effective automatic moderation mechanism than screening out low karma posts. If there is enough interest, it may be worthwhile to discuss exactly what the community would like automatic moderation to accomplish and the feasibility of modifying it to meet those goals (preferably in a way that remains compatible with the current karma system). Depending on the outcome, I may be interested in helping with such an effort (and there is a chance I could get some funding to work on it, as an application of the theory).
Another solution is to change the karma system to remove the psychological obstacles that may keep people from downvoting. It feels a little mean to directly cause a comment to be filtered, even when it would probably improve the mean quality of discourse. It may be a little easier to express your opinion that a comment is not constructive, and have a less direct mechanism responsible for converting a consensus into moderation / karma penalty.
Alternatively just allow people to have an “ignored users” file.
You can “click to ignore this user” on anybody that you find to be continuously less worthwhile on average.
Or, even better, you can apply a “handicap” to certain people. eg that you will only view comments by a certain person if the comment has been upvoted to at least 4 (or whatever).
Hm. Right now, you can’t downvote more than you’ve been upvoted. Suppose a Plonk cost 1000 downvotes, could only be applied once per user-pair, and increased the minimum viewability threshold of a user by 1. So if two people Plonked timtyler, his comments would start disappearing once they’d been voted down to −1, instead of −3. The opposite of a Plonk would be an Accolade, and that would make comments harder to hide, lowering the threshold by 1?
Doesn’t actually sound like a good idea to me, but I do sometimes get the sense that there ought to be better incentives for people to take hints.
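A minimal sketch of the Plonk/Accolade threshold shift described above, assuming a site-wide hide threshold of −3 and one point of shift per Plonk or Accolade; the function and variable names are hypothetical, not part of the actual LW codebase:

```python
# Hypothetical sketch of the Plonk/Accolade threshold shift described above.
# Assumes the default hide threshold is -3 and that each Plonk (Accolade)
# raises (lowers) an author's personal threshold by 1; not real LW code.

SITE_HIDE_THRESHOLD = -3  # comments at or below this score get collapsed

def effective_hide_threshold(plonks: int, accolades: int) -> int:
    """Per-author threshold after Plonks and Accolades are applied."""
    return SITE_HIDE_THRESHOLD + plonks - accolades

def is_hidden(comment_score: int, plonks: int, accolades: int) -> bool:
    return comment_score <= effective_hide_threshold(plonks, accolades)

# The example above: an author Plonked by two users starts disappearing at -1.
assert effective_hide_threshold(plonks=2, accolades=0) == -1
assert is_hidden(-1, plonks=2, accolades=0) and not is_hidden(-1, 0, 0)
```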
An automatic threshold effect seems like a bad idea, but displaying the Plonk score alongside total Karma on the user page might prove effective at making the community’s perception of the user available.
(I presently have exactly two users I wish to “Plonk”: Tim is one of them, and the other I would rather only indicate anonymously. I want a socially appropriate and persistent method of showing this opinion.)
Being plonked by a single user having a drastic effect on one’s comments’ visibility strikes me as having a lot of downsides.
I’m wondering (aside from that it would be nice to have killfiles) whether it would have a good effect if plonks were anonymous, but the number of plonks each person has received is public.
I don’t think plonks and accolades would be globally visible; users would affect what they see themselves, but other users would see it as just a regular vote, if at all.
I believe that we could find many ad hoc changes which seem good. But if you understand exactly what it is your ad hoc solutions are trying to accomplish, you may instead be able to find a real solution that actually does what you want, rather than dancing around the issue.
To give a (probably misguided) example of the sort of precise request I might make: I could wish to see as many things I would upvote as possible, and as few I would downvote. In addition to having a “Recent Comments” page to visit, I would have a “Things You Would Upvote” page to visit. My upvotes/downvotes not only serve to help the community, they define my own standards for moderation.
Of course there are more sophisticated approaches and subtle issues (if I never see posts I would have downvoted, doesn’t that interfere with the moderation system?), but hopefully that suggests the general flavor.
… I would have a “Things You Would Upvote” page...
If I get you correctly, you’d like the system to know the sorts of things you’d downvote and automatically show/hide comments based on your preferences.
This is a great idea.
Apologies if I got your idea wrong… but if not, then sadly, it’s not currently feasible.
After all, for most users’ actual downvoting preferences (eg “excessive profanity” or “religious intolerance” or even just “being wrong”), it would require the system to understand the content of comments. Maybe the excessive profanity could be easily picked up, but the other two would require an actual AI… AFAIK we’re still working on that one ;)
But even if we only had simpler requirements (eg a profanity filter), it’d also be extremely resource-intensive—especially if every single user on the system required this kind of processing. Currently, the lesswrong site is just simple server software. It’s not an AI and does not understand the content of posts. It just displays the posts/comments without digesting their contents in any way. Karma works because other people (ie the humans out here) are the ones digesting and understanding the posts… then they turn their judgement into a simple number (+1 for upvote, −1 for downvote), so that’s all the system has to remember.
Anything else would require text-processing of every single comment… every time the page is displayed. With 50-100 comments on every page, this would be a noticeable increase in the processing-time for each page, for only a limited utility increase.
Of course, as I said—I may have misinterpreted your idea.
If so—let me know what you had in mind.
The point isn’t to determine if you will like a post by applying sophisticated language processing etc. It’s to determine if you will like a post by looking at the people who have upvoted/downvoted it and learning how to extrapolate.
For example, suppose Alice always upvotes/downvotes identically to Bob. Of particular interest to Alice are posts Bob has already upvoted. In real life you are looking for significantly more subtle patterns (if you only looked directly at correlations between users’ feedback you wouldn’t get too much advantage, at least not in theory) and you need to be able to do it automatically and quickly, but hopefully it seems plausible that you can use the pattern of upvotes/downvotes to practically and effectively predict what will interest any particular user or the average guest.
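A toy illustration of the Alice/Bob idea above (not the actual mechanism under research): recommend to a user the posts upvoted by whichever other users have agreed with her most often. All names and data here are hypothetical.

```python
# Toy vote-extrapolation sketch: suggest posts upvoted by the users whose
# past +1/-1 votes best match the target user's. Purely illustrative.

def agreement(a: dict, b: dict) -> float:
    """Mean product of +1/-1 votes over posts both users voted on."""
    shared = set(a) & set(b)
    return sum(a[p] * b[p] for p in shared) / len(shared) if shared else 0.0

def recommend(user: str, votes: dict, top_n: int = 1) -> list:
    """Posts upvoted by the user's closest vote-twins that she hasn't voted on."""
    twins = sorted((o for o in votes if o != user),
                   key=lambda o: agreement(votes[user], votes[o]),
                   reverse=True)[:top_n]
    seen = set(votes[user])
    return [p for o in twins for p, v in votes[o].items()
            if v > 0 and p not in seen]

votes = {
    "Alice": {"p1": +1, "p2": -1},
    "Bob":   {"p1": +1, "p2": -1, "p3": +1},  # has voted identically to Alice
    "Carol": {"p1": -1, "p2": +1, "p4": +1},  # has voted opposite to Alice
}
print(recommend("Alice", votes))  # ['p3']: Bob's upvote, not Carol's
```

As the comment notes, direct correlations like this are only a crude starting point; a practical system would need a predictor that generalizes across users who have never voted on the same items.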
(nods) I’ve contemplated in other contexts a fully collaboratively-filtered forum… that is, one in which the sort-order for threads to read is controlled by their popularity (karma) weighted by a similarity factor—where an upvote given by someone whose prior voting patterns perfectly match yours is worth 10 points, say, and given by someone whose priors perfectly anti-match yours is worth −10 points, and prorated accordingly for less-perfect matches.
But mostly, I think that’s a very useful way to allow large numbers of users with heterogeneous values and preferences to use the same system without getting in each other’s way. It would make sense for a popular politics discussion site, for example.
(The simple version just creates a series of echo chambers, of course. Though some people seem to like that. But further refinements can ameliorate that if desired.)
LW doesn’t seem to have that goal at all. Instead, it endorses particular values and preferences and rejects others, and when discussions of filtering come up they are framed as how to more efficiently implement those particular rejections and endorsements.
So mostly, collaborative filtering seems like it solves a problem this site hasn’t got.
You can use collaborative learning for other purposes. For example, suppose I wanted to show a user posts which Eliezer Yudkowsky would upvote (a “Things EY would Upvote” tab...), rather than posts they personally would upvote. This allows a moderator to implicitly choose which component of users has the “right” taste, without having to explicitly upvote/downvote every individual post.
I don’t know if imposing one individual’s taste is such a good idea, but it is an option. It seems like you should think for a while about what exactly you want, rather than just proposing mechanisms and then evaluating whether you like them or not. Once you know what you want, then we have the theoretical machinery to build a mechanism which implements your goal well (or, we can sit down for a while and develop it).
Also, it is worth pointing out that you can do much better than just weighting votes by similarity factors. In general, it may be the case that Alice and Bob have never voted on the same comment, and yet Alice still learns interesting information from Bob’s vote. (And there are situations where weighting by similarity breaks down quite explicitly.) My point is that instead of doing something ad-hoc, you can employ a predictor which is actually approximately optimal.
It seems like you should think for a while about what exactly you want, rather than just proposing mechanisms and then evaluating whether you like them or not.
Fair enough. Apologies for wasting your time with undirected musings.
In terms of what I want, everything I can think of shares the property of being more useful in a more heterogeneous environment. I put together a wishlist along these lines some months ago. But within an environment as homogeneous as LW, none of that seems worth the effort.
That said, I would find it at least idly interesting to be able to switch among filters (e.g., “Things EY would upvote”, “Things Yvain would upvote”, etc.), especially composite filters (e.g., “Things EY would upvote that aren’t things Yvain would upvote,” “90% things EY would upvote and 10% things he wouldn’t”, etc.).
But if you understand exactly what it is your ad hoc solutions are trying to accomplish, you may instead be able to find a real solution that actually does what you want, rather than dancing around the issue.
I would love this to be the case. Unfortunately, we’re talking about human behaviour here, and specifically, talking about the fact that, for some people, that behaviour doesn’t change even though other attempts have been made to actually address the real issue.
From having been present in forums that drowned under the weight of such people, I think it’s also a good idea to have a backup plan. Especially one where the noise can still exist, but can be “filtered out” at will.
if I never see posts I would have downvoted, doesn’t that interfere with the moderation system
Right now, the downvoted comments are hidden if they reach a certain threshold.
The sorts of posts that are downvoted to this level are rude and uselessly inflammatory. Still—they are not “totally hidden”. They are shown, in place, just as a “there is a hidden comment” link. If you want to see them, all you have to do is click on the link - and you can decide for yourself if that post deserved the harsh treatment (ie it does not interfere with moderation).
You can also adjust your own downvote threshold eg to hide all comments downvoted anywhere from −1 down… or to show them all until they’re −10, which is actually what I’ve done. If you want, you can choose a sufficiently large negative value and will probably never see a hidden comment.
Another solution is to change the karma system to remove the psychological obstacles that may keep people from downvoting. It feels a little mean to directly cause a comment to be filtered, even when it would probably improve the mean quality of discourse. It may be a little easier to express your opinion that a comment is not constructive, and have a less direct mechanism responsible for converting a consensus into moderation / karma penalty.
I do this by keeping kibitzing turned off most of the time, and always showing all comments regardless of karma. This won’t work for everyone, but it works for me: I think my upvotes and downvotes are less biased this way.
Something I’ve thought about (in the context of other venues) is a rating system where vote-totals V are stored but not displayed. Instead what gets displayed is a ranking R, where R=f(V)… a comment utility function, of sorts. That way a single vote in isolation does not cause a state transition.
The same function can apply to user “scores.” Which may reduce the inclination to stress about one’s karma, if one is inclined to that.
To pick a simple example, suppose (R=V^.37). So 1 upvote gets R1, but a second upvote stays R1. The 3rd upvote gets R2; the 12th gets R3; the 30th gets R4. Downvoting an R2 comment might bring it to R1, or it might not. Eliezer is R68, Yvain is R45, Alicorn is R37, cousin_it and wei_dai and annasolomon are all R30, I’m R13, and so forth.
(The specific function is just intended for illustrative purposes.)
The function needn’t be symmetrical around zero… e.g., if R=(-1*ABS(V)^.37) when V<0, rankings go negative faster than they go positive.
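A sketch of the ranking function with the illustrative exponent above (R rounded to the nearest integer, with the sign handled separately); this is purely for illustration, not a proposal for the actual karma code:

```python
# Illustrative mapping from a hidden vote total V to a displayed rank R,
# using the example exponent 0.37 from the comment above; not real LW code.

def rank(votes: int, exponent: float = 0.37) -> int:
    """R = sign(V) * round(|V| ** exponent); R = 0 when V = 0."""
    if votes == 0:
        return 0
    sign = 1 if votes > 0 else -1
    return sign * round(abs(votes) ** exponent)

for v in (1, 2, 3, 12, 30):
    print(v, rank(v))  # 1->1, 2->1, 3->2, 12->3, 30->4, matching the example
```

With a function shaped like this, a single vote in isolation rarely changes the displayed rank, which is the stated point of hiding V.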
====
Along a different axis, it might be interesting to allow voters to be nymous… for example, a preferences setting along those lines, or a mechanism for storing notes along with votes if one chooses to, visible on a different channel. The idea being that you can provide more detailed feedback without cluttering the comment-channel with meta-discussions about other comments.
That is, someone with −3 (or whatever the threshold is) or less is still hidden, but if they have −1 karma it just displays “no points” or something?
That does seem like it would be somewhat useful; I tend to vote up or down almost regardless of karma (the only effect I’ve noticed is a slight increase in the chance I vote up if the karma is negative and I like it) but I know some other people do it to hit some karma target they have in mind.
I’m beginning to think that LW needs some better mechanism for dealing with the phenomenon of commenters who are polite, repetitive, immune to all correction, and consistently wrong about everything. I know people don’t like it when I say this sort of thing, but seriously, people like that can lower the perceived quality of a whole website.
Given that Tim has a positive karma score that is around 1200, it is difficult to declare that he is so consistently wrong that he is causing a problem (although, as I’ve said before, it would be more useful to have access to average karma per comment to measure that sort of thing). Speaking personally, I do occasionally downvote Tim, and do so more frequently than I downvote other people (and I suspect that that isn’t just connected to Tim being a very frequent commenter), but I also do upvote him sometimes too. Overall, I suspect that Tim’s presence is a net benefit.
I personally haven’t downvoted Tim, although I now feel like I ought to have, simply because it felt like bad etiquette to downvote someone else while in a sustained argument with them, even if you feel like they’re engaging in bad reasoning. I should probably be more liberal with my downvotes in future than I have been.
If a person is making positive contributions to the board with some regularity though, I think it’s worth having them around even if they are also frequently making negative contributions. At least the karma system gives people a mechanism to filter posts so that they can ignore the less worthwhile ones if they so choose.
Thanks. I’m not sure votes have that much to do with being right, though. My perception is more that people vote up things they like seeing—and vote down things they don’t. It sometimes seems more like applause.
I’m not sure votes have that much to do with being right, though.
They may be better correlated with being convincing than with being right.
One reason why I find much of your contrarianism unconvincing, Tim, is that you rarely actually engage in a debate. Instead you simply reiterate your own position rather than pointing out flaws or hidden assumptions in the arguments of your interlocutors.
He has a lot of what I call ‘bait’ comments, trying to get people to respond in a way that allows him to tear them down. He already knows how he’s going to answer the next step in the conversation, having prepared the material long ago. Though it’s not quite copy/paste, it’s close, kind of like a telemarketing script. I hardly see anything constructive, and find myself often downvoting him due to repetitive baiting with no end in sight.
There’s no question that many of his comments aren’t helpful. He does talk about issues that are outside his expertise, doesn’t listen to people telling him otherwise (one of the more egregious examples would be in the comments to this post), and responds negatively to people pointing out that he is not informed about a topic. But Tim does make helpful remarks. Examples of recent unambiguously productive remarks include this one, and this one. I don’t see enough here to conclude that Tim is in general a problem.
Perhaps it was presumptuous and antagonistic, perhaps I could have been more tactful, and I’m sorry if I offended you. But I stand by my original statement, because it was true.
Note to self: never use the word “sorry” near Tim Tyler.
And you threw the apology back in my face, introduced some more incorrect arguments, then cited the exchange here as evidence that you were right. Gee, thanks. I will not reply to you again without concrete evidence that you’ve changed.
I am tentatively against this option (as it applies to timtyler only; I don’t know if you had anyone else in mind). While you are entirely correct in your description and your concern about lowered quality, I have found that reflecting on my discussions with Tim has had some positive effects for me (learning that the function describing the quality of an explanation has a larger term for the listener than for the explainer, and that making arguments explicit tends to bring out hidden premises and expose weak inferences).
Conditional on other posters having similar experiences, I suggest treating timtyler as a koan of sorts.
I’m beginning to think that LW needs some better mechanism for dealing with the phenomenon of commenters who are polite, repetitive, immune to all correction, and consistently wrong about everything.
Hmm. This seems like the easiest property to target. A community norm of being confrontational about this? Although, changing community norms might be significantly more difficult than adding a new mechanism.
That said, there seems to be a negative reaction to suggestions like this, and possibly these posters will give timtyler more leeway for a time—that being the obvious way to express displeasure at this concept. Maybe make it clear that the community can obviate the need for the mechanism by being more critical?
As an alternative to you just making the decision to ban users in order to improve the site’s quality, maybe establish a user-banning board? Known users on the board, with information about individual voting not publicly available. These cases are sufficiently rare and varied that developing an adequate automated system would be very difficult, if not impossible.
(Another alternative is to open such decisions to an open vote and discussion, but this can be noisy and have other undesirable consequences, probably even worse than an authoritarian system.)
Diversifying charitable investments maximizes your chance of getting at least some warm glow from “having helped”. It does not help people as best they can be helped.
Nor does diversifying investments make as much money as possible.
I’m beginning to think that LW needs some better mechanism for dealing with the phenomenon of commenters who are polite, repetitive, immune to all correction, and consistently wrong about everything.
A logic which applies only to people who are interested in getting a warm glow and not to people interested in helping. Diversifying charitable investments maximizes your chance of getting at least some warm glow from “having helped”. It does not help people as best they can be helped.
I’m beginning to think that LW needs some better mechanism for dealing with the phenomenon of commenters who are polite, repetitive, immune to all correction, and consistently wrong about everything. I know people don’t like it when I say this sort of thing, but seriously, people like that can lower the perceived quality of a whole website.
The problem is quite simple. Tim, and the rest of the class of commenters to which you refer, simply haven’t learned how to lose. This can be fixed by making it clear that this community’s respect is contingent on retracting any inaccurate positions. Posts in which people announce that they have changed their mind are usually upvoted (in contrast to other communities), but some people don’t seem to have noticed.
Therefore, I propose adding a “plonk” button on each comment. Pressing it would hide all posts from that user for a fixed duration, and also send them an anonymous message (red envelope) telling them that someone plonked them, which post they were plonked for, and a form letter reminder that self-consistency is not a virtue and a short guide to losing gracefully.
Eliezer has really got to do something about his fictional villains escaping into real life. First Clippy, now you too?
Meh. The villains seem a lot less formidable in real life, like they left something essential behind in the fiction.
Hey, be patient. I haven’t been here very long, and building up power takes time.
That is a problem with demiurges, yes.
As a total newbie to this site, I applaud this sentiment, but have just gone through an experience where this has not, in fact, happened.
After immediately retracting my erroneous statement (and explaining exactly where and why I’d gone wrong), I continued to be hammered over arguments that I had not actually made. My retracted statements (which I’ve left in place, along with the edits explaining why they’re wrong) stay just as down-voted as before...
My guess is that some of the older members of this site may realise that this is how it’s supposed to work… but it certainly hasn’t got through to us newbies yet ;)
Perhaps it should be added to the etiquette section in the newbie pages (eg the karma section in the FAQ)?
I hereby suggest once again that “Vote up” and “Vote down” be changed to “More like this” and “Less like this” in the interface.
OTOH, there’s the reasonable counterargument that anyone who needs to be told this won’t change their behaviour because of it—i.e., rules against cluelessness don’t have anything to work via.
Translation: I haven’t managed to convince you, therefore you must be punished for your insolent behaviour of not being convinced by my arguments. I cannot walk away from this and leave you being wrong; you must profess to agree with me, and if you are not rational enough to understand and accept logical arguments then you will be forced to profess agreement.
Who did you say hasn’t learned how to lose?
Warn, then ban the people involved.
If you decide that refusing to be convinced by evidence, while also being unable to convincingly counter it, yet continuing to argue anyway is bad form for the LW that you want to create, then stand by that decision and act on it.
On a site called “Less Wrong,” is that terribly surprising?
I never said anything about using force. Not that there’s anything wrong with that, but it’s a different position, not a translation.
If you can clarify the distinction you draw between the use of force and the use of punishments to modify behavior and why that distinction is important, I’d be interested.
Of course. The defining difference is that force can’t be ignored, so threatening a punishment only constitutes force if the punishment threatened is strong enough; condemnation doesn’t count unless it comes with additional consequences. Force is typically used in the short term to ensure conformance with plans, while behaviour modification is more like long-term groundwork. Well-executed behaviour modifications stay in place with minimal maintenance, but the targets of force will become more hostile with each application. If you use a behaviour modification strategy when you should be using force, people may defy you when you can ill afford it. If you use force when you should be using behaviour modification strategies, you will accumulate enemies you don’t need.
Makes sense.
So, if sfb edits the parent to read “then we will rely on punishment to modify your behavior so you profess agreement” instead of “then you will be forced to profess agreement,” that addresses your objection?
What is your opinion on the use of memory charms to modify behavior?
Memory charms do have their uses. Unfortunately, they seem to only work in universes where minds are ontologically basic mental entities, and the potions available in this universe are not fast, reliable or selective enough to be adequate substitutes.
Interesting, I would have guessed that memory modification would be easier when minds aren’t ontologically basic mental entities because there are then actual parts of the mind that one can target.
We don’t have tools sharp enough to get a grip on those parts, yet.
You (probably) know what I meant, and whether or not you mentioned force specifically doesn’t change the gist of the “translation”. A weaselly objection.
From the username, I was expecting that the suggestion was going to be to say avada kedavra.
I’d never say that on a forum that would generate a durable record of my comment.
Is changing professed beliefs to something else without understanding / agreeing with the new position, but just doing it to gain community respect, a virtue?
Or he still isn’t convinced that he is wrong by the time you have exhausted your tolerance for explaining, so you give up and decide he must be broken. Your proposed ‘solution’ is a hack so you can give up on convincing him but still have him act convinced for the benefit of appearances—maybe you are simply expecting far, far too short inferential distances?
Escalating punishment so someone “learns better” can work, but it requires real punishments, not symbolic ones. It’s not clear to me that “plonking” would accomplish that.
And, of course, it has all the same problems that punishment-based behavior modification always has.
As long as you are no fooming FAI...
online communities, being largely by and for geeks, dislike overt exclusionary tactics because it brings up painful associations. I think well established communities often have more to gain from elitism than they stand to lose.
These two statements are contradictory. Did you swap “gain” and “lose” in the second statement?
I think he meant the second sentence as an observation about what communities actually would benefit from. The first sentence is an observation about what preferences people have due to cultural issues. In this case, he is implying that general preferences don’t match what is actually optimal.
One solution is to try to convince people to downvote more aggressively.
A second solution, which is one of my current research projects, is to develop a more effective automatic moderation mechanism than screening out low karma posts. If there is enough interest, it may be worthwhile to discuss exactly what the community would like automatic moderation to accomplish and the feasibility of modifying it to meet those goals (preferably in a way that remains compatible with the current karma system). Depending on the outcome, I may be interested in helping with such an effort (and there is a chance I could get some funding to work on it, as an application of the theory).
Another solution is to change the karma system to remove the psychological obstacles that may keep people from downvoting. It feels a little mean to directly cause a comment to be filtered, even when it would probably improve the mean quality of discourse. It may be a little easier to express your opinion that a comment is not constructive, and have a less direct mechanism responsible for converting a consensus into moderation / karma penalty.
Or at least to downvote established users more aggressively, given that new people have said they felt intimidated.
Alternatively just allow people to have an “ignored users” file. You can “click to ignore this user” on anybody that you find to be continuously less worthwhile on average.
Or, even better, you can apply a “handicap” to certain people. eg that you will only view comments by a certain person if the comment has been upvoted to at least 4 (or whatever).
Hm. Right now, you can’t downvote more than you’ve been upvoted. Suppose a Plonk cost 1000 downvotes, could only be applied once per user-pair, and increased the minimum viewability threshold of a user by 1. So if two people Plonked timtyler, his comments would start disappearing once they’d been voted down to −1, instead of −3. The opposite of a Plonk would be an Accolade, and that would make comments harder to hide, lowering the threshold by 1?
Doesn’t actually sound like a good idea to me, but I do sometimes get the sense that there ought to be better incentives for people to take hints.
An automatic threshold effect seems like a bad idea, but displaying the Plonk score alongside total Karma on the user page might prove effective at making the community’s perception of the user available.
(I presently have exactly two users I wish to “Plonk”: Tim is one of them, and the other I would rather only indicate anonymously. I want a socially appropriate and persistent method of showing this opinion.)
Being plonked by a single user having a drastic effect on one’s comments’ visibility strikes me as having a lot of downsides.
I’m wondering (aside from that it would be nice to have killfiles) whether it would have a good effect if plonks were anonymous, but the number of plonks each person has received is public.
Note, of course, that the threshold of hiding is editable in the first place, so this would have to act as a modifier on that.
I think so. But perhaps the ability to plonk/accolade should only be given to people with a high level of karma.
To stop the pathological case where people can set up a hundred accounts and accolade themselves (or plonk a rival).
Also—people should be able to adjust their personal “plonk horizon” just as they can with the low-comment threshold at present.
I don’t think plonks and accolades would be globally visible; users would affect what they see themselves, but other users would see it as just a regular vote, if at all.
I believe that we could find many ad hoc changes which seem good. But if you understand exactly what it is your ad hoc solutions are trying to accomplish, you may instead be able to find a real solution that actually does what you want, rather than dancing around the issue.
To give a (probably misguided) example of the sort of precise request I might make: I could wish to see as many things I would upvote as possible, and as few I would downvote. In addition to having a “Recent Comments” page to visit, I would have a “Things You Would Upvote” page to visit. My upvotes/downvotes not only serve to help the community, they define my own standards for moderation.
Of course there are more sophisticated approaches and subtle issues (if I never see posts I would have downvoted, doesn’t that interfere with the moderation system?), but hopefully that suggests the general flavor.
If I get you correctly, you’d like the system to know the sorts of things you’d downvote and automatically show/hide comments based on your preferences.
This is a great idea.
Apologies if I got your idea wrong… but if not, then sadly, it’s not currently feasible.
After all, for most users’ actual downvoting preferences (eg “excessive profanity” or “religious intolerance” or even just “being wrong”), it would require the system to understand the content of comments. Maybe the excessive profanity could be easily picked up, but the other two would require an actual AI… AFAIK we’re still working on that one ;)
But even if we only had simpler requirements (eg a profanity filter), it’d also be extremely resource-intensive—especially if every single user on the system required this kind of processing. Currently, the lesswrong site is just simple server software. It’s not an AI and does not understand the content of posts. It just displays the posts/comments without digesting their contents in any way. Karma works because other people (ie the humans out here) are the ones digesting and understanding the posts… then they turn their judgement into a simple number (+1 for upvote, −1 for downvote), so that’s all the system has to remember.
Anything else would require text-processing of every single comment… every time the page is displayed. With 50-100 comments on every page, this would be a noticeable increase in the processing-time for each page, for only a limited utility increase.
Of course, as I said—I may have misinterpreted your idea. If so—let me know what you had in mind.
The point isn’t to determine if you will like a post by applying sophisticated language processing etc. It’s to determine if you will like a post by looking at the people who have upvoted/downvoted it and learning how to extrapolate.
For example, suppose Alice always upvotes/downvotes identically to Bob. Of particular interest to Alice are posts Bob has already upvoted. In real life you are looking for significantly more subtle patterns (if you only looked directly at correlations between users’ feedback you wouldn’t get too much advantage, at least not in theory) and you need to be able to do it automatically and quickly, but hopefully it seems plausible that you can use the pattern of upvotes/downvotes to practically and effectively predict what will interest any particular user or the average guest.
(nods) I’ve contemplated in other contexts a fully collaboratively-filtered forum… that is, one in which the sort-order for threads to read is controlled by their popularity (karma) weighted by a similarity factor—where an upvote given by someone whose prior voting patterns perfectly match yours is worth 10 points, say, and given by someone whose priors perfectly anti-match yours is worth −10 points, and prorated accordingly for less-perfect matches.
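A rough sketch of that similarity-weighted sort order (the ±10-point scheme), with hypothetical names and data; threads or comments would then be ordered by this personalized score rather than by raw karma:

```python
# Rough sketch of the similarity-weighted sort order described above: each
# voter's +1/-1 vote is worth up to +10 or down to -10 points toward your
# personalized score, prorated by how well their past votes match yours.
# Hypothetical illustration only, not actual LW code.

def similarity(mine: dict, theirs: dict) -> float:
    """-1.0 (perfect anti-match) .. +1.0 (perfect match) over shared votes."""
    shared = set(mine) & set(theirs)
    if not shared:
        return 0.0
    return sum(mine[c] * theirs[c] for c in shared) / len(shared)

def personalized_score(comment_votes: dict, my_history: dict,
                       histories: dict, max_points: float = 10.0) -> float:
    """Sum each voter's +1/-1 vote, weighted by max_points * similarity."""
    return sum(vote * max_points * similarity(my_history, histories[voter])
               for voter, vote in comment_votes.items())
```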
But mostly, I think that’s a very useful way to allow large numbers of users with heterogeneous values and preferences to use the same system without getting in each other’s way. It would make sense for a popular politics discussion site, for example.
(The simple version just creates a series of echo chambers, of course. Though some people seem to like that. But further refinements can ameliorate that if desired.)
LW doesn’t seem to have that goal at all. Instead, it endorses particular values and preferences and rejects others, and when discussions of filtering come up they are framed as how to more efficiently implement those particular rejections and endorsements.
So mostly, collaborative filtering seems like it solves a problem this site hasn’t got.
You can use collaborative learning for other purposes. For example, suppose I wanted to show a user posts which Eliezer Yudkowsky would upvote (a “Things EY would Upvote” tab...), rather than posts they personally would upvote. This allows a moderator to implicitly choose which component of users has the “right” taste, without having to explicitly upvote/downvote every individual post.
I don’t know if imposing one individual’s taste is such a good idea, but it is an option. It seems like you should think for a while about what exactly you want, rather than just proposing mechanisms and then evaluating whether you like them or not. Once you know what you want, then we have the theoretical machinery to build a mechanism which implements your goal well (or, we can sit down for a while and develop it).
Also, it is worth pointing out that you can do much better than just weighting votes by similarity factors. In general, it may be the case that Alice and Bob have never voted on the same comment, and yet Alice still learns interesting information from Bob’s vote. (And there are situations where weighting by similarity breaks down quite explicitly.) My point is that instead of doing something ad-hoc, you can employ a predictor which is actually approximately optimal.
Fair enough. Apologies for wasting your time with undirected musings.
In terms of what I want, everything I can think of shares the property of being more useful in a more heterogeneous environment. I put together a wishlist along these lines some months ago. But within an environment as homogeneous as LW, none of that seems worth the effort.
That said, I would find it at least idly interesting to be able to switch among filters (e.g., “Things EY would upvote”, “Things Yvain would upvote”, etc.), especially composite filters (e.g., “Things EY would upvote that aren’t things Yvain would upvote,” “90% things EY would upvote and 10% things he wouldn’t”, etc.).
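A small sketch of the composite filters mentioned above, assuming some upstream predictor has already estimated, for each reference user, whether that user would upvote each post; the estimates are hard-coded here, and all names and data are hypothetical:

```python
# Composite filters over predicted votes. The predictions would come from
# whatever collaborative predictor is used; they are hard-coded here purely
# for illustration. Not actual LW code.

predicted_upvotes = {          # post id -> {reference user: would upvote?}
    "p1": {"EY": True,  "Yvain": True},
    "p2": {"EY": True,  "Yvain": False},
    "p3": {"EY": False, "Yvain": True},
}

def things_ey_would_upvote(post_id: str) -> bool:
    return predicted_upvotes[post_id]["EY"]

def ey_but_not_yvain(post_id: str) -> bool:
    """'Things EY would upvote that aren't things Yvain would upvote.'"""
    p = predicted_upvotes[post_id]
    return p["EY"] and not p["Yvain"]

print([p for p in predicted_upvotes if things_ey_would_upvote(p)])  # ['p1', 'p2']
print([p for p in predicted_upvotes if ey_but_not_yvain(p)])        # ['p2']
```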
Hmmm—so a kind of Amazon-style “people who liked posts by X also liked posts by Y” idea. Could be interesting.
I would love this to be the case. Unfortunately, we’re talking about human behaviour here, and specifically, talking about the fact that, for some people, that behaviour doesn’t change even though other attempts have been made to actually address the real issue.
From having been present in forums that drowned under the weight of such people, I think it’s also a good idea to have a backup plan. Especially one where the noise can still exist, but can be “filtered out” at will.
Right now, the downvoted comments are hidden if they reach a certain threshold. The sorts of posts that are downvoted to this level are rude and uselessly inflammatory. Still—they are not “totally hidden”. They are shown, in place, just as a “there is a hidden comment” link. If you want to see them, all you have to do is click on the link - and you can decide for yourself if that post deserved the harsh treatment (ie it does not interfere with moderation).
You can also adjust your own downvote threshold eg to hide all comments downvoted anywhere from −1 down… or to show them all until they’re −10, which is actually what I’ve done. If you want, you can choose a sufficiently large negative value and will probably never see a hidden comment.
I do this by keeping kibitzing turned off most of the time, and always showing all comments regardless of karma. This won’t work for everyone, but it works for me: I think my upvotes and downvotes are less biased this way.
Something I’ve thought about (in the context of other venues) is a rating system where vote-totals V are stored but not displayed. Instead what gets displayed is a ranking R, where R=f(V)… a comment utility function, of sorts. That way a single vote in isolation does not cause a state transition.
The same function can apply to user “scores.” Which may reduce the inclination to stress about one’s karma, if one is inclined to that.
To pick a simple example, suppose (R=V^.37). So 1 upvote gets R1, but a second upvote stays R1. The 3rd upvote gets R2; the 12th gets R3; the 30th gets R4. Downvoting an R2 comment might bring it to R1, or it might not. Eliezer is R68, Yvain is R45, Alicorn is R37, cousin_it and wei_dai and annasolomon are all R30, I’m R13, and so forth.
(The specific function is just intended for illustrative purposes.)
The function needn’t be symmetrical around zero… e.g., if R=(-1*ABS(V)^.37) when V<0, rankings go negative faster than they go positive.
====
Along a different axis, it might be interesting to allow voters to be nymous… for example, a preferences setting along those lines, or a mechanism for storing notes along with votes if one chooses to, visible on a different channel. The idea being that you can provide more detailed feedback without cluttering the comment-channel with meta-discussions about other comments.
The way Hacker News does this is by not displaying comment scores that are below the filter threshold.
That is, someone with −3 (or whatever the threshold is) or less is still hidden, but if they have −1 karma it just displays “no points” or something?
That does seem like it would be somewhat useful; I tend to vote up or down almost regardless of karma (the only effect I’ve noticed is a slight increase in the chance I vote up if the karma is negative and I like it) but I know some other people do it to hit some karma target they have in mind.
Given that Tim has a positive karma score that is around 1200, it is difficult to declare that he is so consistently wrong that he is causing a problem (although, as I’ve said before, it would be more useful to have access to average karma per comment to measure that sort of thing). Speaking personally, I do occasionally downvote Tim, and do so more frequently than I downvote other people (and I suspect that that isn’t just connected to Tim being a very frequent commenter), but I also do upvote him sometimes too. Overall, I suspect that Tim’s presence is a net benefit.
I personally haven’t downvoted Tim, although I now feel like I ought to have, simply because it felt like bad etiquette to downvote someone else while in a sustained argument with them, even if you feel like they’re engaging in bad reasoning. I should probably be more liberal with my downvotes in future than I have been.
If a person is making positive contributions to the board with some regularity though, I think it’s worth having them around even if they are also frequently making negative contributions. At least the karma system gives people a mechanism to filter posts so that they can ignore the less worthwhile ones if they so choose.
Thanks. I’m not sure votes have that much to do with being right, though. My perception is more that people vote up things they like seeing—and vote down things they don’t. It sometimes seems more like applause.
They may be better correlated with being convincing than with being right.
One reason why I find much of your contrarianism unconvincing, Tim, is that you rarely actually engage in a debate. Instead you simply reiterate your own position rather than pointing out flaws or hidden assumptions in the arguments of your interlocutors.
Some of the “applause” evidence is near the top of this very thread—if you sort by “Top”.
Yeah, but I can’t afford to buy that kind of applause. So I will just have to keep on sweet-talking people and trying to dazzle them with my wit. :)
Applause (and boos) is precisely what it is. There is nothing wrong with applause and boos. What matters is why the members of LW award them.
Look at his actual comments.
He has a lot of what I call ‘bait’ comments, trying to get people to respond in a way that allows him to tear them down. He already knows how he’s going to answer the next step in the conversation, having prepared the material long ago. Though it’s not quite copy/paste, it’s close, kind of like a telemarketing script. I hardly see anything constructive, and find myself often downvoting him due to repetitive baiting with no end in sight.
There’s no question that many of his comments aren’t helpful. He does talk about issues that are outside his expertise, doesn’t listen to people telling him otherwise (one of the more egregious examples would be in the comments to this post), and responds negatively to people pointing out that he is not informed about a topic. But Tim does make helpful remarks. Examples of recent unambiguously productive remarks include this one, and this one. I don’t see enough here to conclude that Tim is in general a problem.
Not my finest hour :-(
Fortunately, they did apologise for doing that.
The linked comment reads:
Note to self: never use the word “sorry” near Tim Tyler.
And you threw the apology back in my face, introduced some more incorrect arguments, then cited the exchange here as evidence that you were right. Gee, thanks. I will not reply to you again without concrete evidence that you’ve changed.
You appear to be misinterpreting :-(
Try “I appear to have been unclear” instead.
I am tentatively against this option (as it applies to timtyler only; I don’t know if you had anyone else in mind). While you are entirely correct in your description and your concern about lowered quality, I have found that reflecting on my discussions with Tim has had some positive effects for me (learning that the function describing the quality of an explanation has a larger term for the listener than for the explainer, and that making arguments explicit tends to bring out hidden premises and expose weak inferences).
Conditional on other posters having similar experiences, I suggest treating timtyler as a koan of sorts.
Hmm. This seems like the easiest property to target. A community norm of being confrontational about this? Although, changing community norms might be significantly more difficult than adding a new mechanism.
That said, there seems to be a negative reaction to suggestions like this, and possibly these posters will give timtyler more leeway for a time—that being the obvious way to express displeasure at this concept. Maybe make it clear that the community can obviate the need for the mechanism by being more critical?
As an alternative to you just making the decision to ban users in order to improve the site’s quality, maybe establish a user-banning board? Known users on the board, with information about individual voting not publicly available. These cases are sufficiently rare and varied that developing an adequate automated system would be very difficult, if not impossible.
(Another alternative is to open such decisions to an open vote and discussion, but this can be noisy and have other undesirable consequences, probably even worse than an authoritarian system.)
Nor does diversifying investments make as much money as possible.
What the...?!?