That’s live now? Awesome! We’ve been needing that for… well, years, actually, but it got really bad a couple of months ago. “Troll feeding fee” is such an apt description, too.
I don’t recall this being discussed by the community at all. It seems like a bad idea. Valuable conversations can extend from comments that are already negative. −3 is also not that negative. This also discourages people from actually explaining why someone is wrong if there are a lot of people who downvote the comment. This will both make it harder for that person to become less wrong and make it more likely that bystanders who are reading the conversation will not see any explanation for why the comment is downvoted. Overall this is at best a mixed idea that should have been discussed with the community before implementing.
I don’t recall this being discussed by the community at all.
This isn’t terribly relevant. Moderators that discuss every decision with the community and only act when they’ve built consensus fall prey to vocal minorities, e.g., Wikipedia. Then they tend to stagnate.
Instead of trying to build a consensus, Eliezer could have asked the community “Here are the consequences I intend/foresee with this change. Are there any important ones I may have overlooked?”, which has no obvious downsides that I can see, other than the opportunity cost of writing out the question.
Instead of trying to build a consensus, Eliezer could have asked the community “Here are the consequences I intend/foresee with this change. Are there any important ones I may have overlooked?”, which has no obvious downsides that I can see, other than the opportunity cost of writing out the question.
The obvious downside in such cases is that asking for and being told what the downsides are and then ignoring them is often perceived as even worse than not asking at all. If Eliezer anticipated that he would go ahead with his change regardless of what downsides are pointed out then it could be detrimental to ask.
(Note: That is the downside of asking, not a claim that asking would be a net negative.)
The obvious downside in such cases is that asking for and being told what the downsides are and then ignoring them is often perceived as even worse than not asking at all.
Do you think this is true even if one made it clear that one is not seeking a consensus but reserving the right to make the final cost/benefit judgement? If so, it’s contrary to my expectations (i.e., I don’t see why that would be perceived as being worse than not asking at all), and I would appreciate any further explanations you might have.
Moderators that discuss every decision with the community and only act when they’ve built consensus fall prey to vocal minorities, e.g., Wikipedia. Then they tend to stagnate.
Yes, discussing every decision with the community is probably a bad idea. But that doesn’t mean that specific, large scale changes shouldn’t be discussed.
Because the community has additional experience and may have thoughts about a proposal. The impression one gets when moderating something can be very different from the impression one gets in the general case. Discussing such issues in advance helps prevent severe unintended consequences from occurring.
Valuable conversations can extend from comments that are already negative.
If a reply to a downvoted comment is not downvoted, replies to that reply are not punished, so good subthreads are unaffected.
−3 is also not that negative.
5 karma points is not that much either, so if it’s really worth replying to, it’s possible to continue the conversation. It’s usually not worth replying, though, and without feeling any cost people would ignore that consideration, fueling bad conversations as a result. The motivation I agree with is to stop bad conversations, not necessarily replies to individual bad comments, which is just the means.
So, if I post some honest argument but make a couple of stupid mistakes (I hope that such a post will get downvoted to around −5), anyone who explains to me what I have missed will be punished?
Yes, this policy decision doesn’t happen to be one-sided. What you describe seems to be a comparatively rare event, though. If you actually want to get better, you’ll have opportunities other than particularly downvoted blunders to seek feedback, and there is an obvious solution of making a non-downvoted separate comment that asks for feedback in such cases, so that said feedback would not be punished.
If I’m hearing you correctly, the plan to limit trolls is to push them to make fewer posts inside the auto-hidden areas and more posts outside of the auto-hidden areas?
Well, uh, I suppose that’s one way to deal with trolls.
But if saying something and then creating a separate comment to ask for feedback becomes acceptable, the trolls will create even more visible noise before they get into obviously malicious territory.
I agree that this is a failure mode, but it’s not an absolute one: people could explain to you via PM. Then you’d be free to edit the comment, and if its score floated back up discussion could ensue below.
PM’d explanations are not publicly visible, so they don’t help others who read the thread and make the same mistake as the downvoted poster. They also can’t be upvoted by others, which removes a big (in my experience) incentive to post the explanations.
Also, visible actions spread. Someone who posts a correction encourages other people to post corrections to other comments, while someone who PMs does not encourage that behavior. It is also visible how many responses there are, so people don’t overwhelm an individual with responses the way they might with PMs.
Also, with visible explanations everybody knows what has already been explained and what hasn’t. If it were common to explain things via PM, some points would be raised multiple times while others not at all, depending on how many readers estimate a high enough probability that they are the first to comment on an issue and thus aren’t wasting their time by making duplicate comments.
That’s live now? Awesome! We’ve been needing that for… well, years, actually, but it got really bad a couple of months ago. “Troll feeding fee” is such an apt description, too.
Really? Isn’t hiding of down-voted comments enough? It protects people from seeing crap unless they explicitly ask for it (and they should be able to). There could still be a problem if comment threads were sprinkled with huge contiguous swathes of hidden comments but does that actually happen? (I think that the maximum number of contiguous hidden comments that I have seen is two.)
I find the fact that even obviously aggressive and stupid stuff tends to be replied to in a polite and informative manner one of the impressive things about LessWrong. Some are concerned that this place appears dogmatic to newcomers. Having massive down-voting accompanied by explanations is one way in which those concerns can be alleviated.
Curiously, I initially wanted to downvote your comment but hesitated because I didn’t want people replying to it to be punished. On reflection, your comment might not actually merit downvoting (I think there was a bit of a ‘who the hell does he think he is, treating LessWrong as his own private playground’ knee-jerk reaction here), but this is not at all a good sign of what effects this will have on troll-fighting. There are comments that are stupid but seem to be made in good faith and thus be worthy of a response. If I don’t feel like making that response, I might just downvote and be on my way. Now, downvoting will provide a disincentive to any potential educators, which in turn discourages downvoting itself.
And finally, people who strongly dislike that change can decide to mentally subtract 5 points of karma from the score of each reply to a comment that’s at −3 or below and then base their voting decision on that (which will tend to push the scores of such replies to 5 + whatever they would have gotten otherwise), so nyah!
There could still be a problem if comment threads were sprinkled with huge contiguous swathes of hidden comments but does that actually happen?
It doesn’t happen often, and so doesn’t seem to be a particularly serious problem, but when it does it’s really bad and the growth of such subthreads is very hard to stop.
I find the fact that even obviously aggressive and stupid stuff tends to be replied to in a polite and informative manner one of the impressive things about LessWrong.
See the hidden comments to this post for an example. Just one user causes the damage directly, but that wouldn’t happen to the extent it did without the polite and informative replies of others that fuel the conversation. Good contributions to bad conversations have negative net consequences.
See the hidden comments to this post for an example. Just one user causes the damage directly, but that wouldn’t happen to the extent it did without the polite and informative replies of others that fuel the conversation. Good contributions to bad conversations have negative net consequences.
So there is a dark part of LW archives, looking upon which might very well destroy your soul. Fortunately, you won’t accidentally see it—you have to consciously choose to unhide the top-level comment. I suppose the damage is that people are unreasonably drawn to that kind of thing and they are also unreasonably drawn to replying to stupid stuff even when it’s obviously hopeless (‘someone is wrong on the internet’ syndrome) so we have to protect them from wasting their time. I guess I dislike paternalism enough that such an argument doesn’t convince me (and less seriously, if someone feels inclined to waste time while they’re browsing the Internet, then they are already doomed anyway).
I actively disagree. You don’t have to read those threads, but polite and measured responses to dumb ideas are among the best ways to get yourself out of those ideas. We literally have dumb question threads for exactly that purpose. I also think it’s good to encourage people to be patient and explain things. What “damage” was caused? A couple hundred posts about something that you don’t have to read, but which could very well be useful to other people.
You want to take a community of people that try to help others understand and instead silence all conversation along lines you disapprove of.
You don’t have to read those threads, but polite and measured responses to dumb ideas are among the best ways to get yourself out of those ideas.
In the example I gave, this clearly didn’t apply.
We literally have dumb question threads for exactly that purpose.
There are important details that distinguish the conversations happening in stupid questions threads. These details also cause those threads to not be downvoted.
You want to take a community of people that try to help others understand and instead silence all conversation along lines you disapprove of.
You are throwing out relevant details again and distorting other details in the direction of your argument. The qualifier “all conversation” is inaccurate, for example. Alternatively, if disapproval is taken to be referring to a (value assignment) decision (rather than unreflective emotional response, say), it’s tautological that I’d be trying to get rid of things I disapprove of.
I don’t understand this point. Not punishing people in those cases would use the same information, so the amount of available information doesn’t favor one choice of effect over the other.
(Edited the grandparent. My point is that lack of blanket downvotes for replies to negative posts is equally insensitive to details about those posts. This consideration doesn’t help with the question of punish vs. not-punish.)
What are these negative net consequences? I enjoyed reading the good replies to that conversation. If you think the problem is the volume of conversation, then you have to explain why shutting down long “bad” conversations is worth losing shorter elegant responses to bad points.
LessWrong puts a lot of stock in trying to fight human biases, so it seems to me that saying “Don’t do that!” with negative karma, and then rewarding people for explaining why not, is exactly what we should be doing.
(It’s much better now that all of the bad comments are removed either by the author or by moderators, so you are not looking at the problem as it presents itself in the wild, but you can imagine based on the number of downvotes.)
I don’t think LW should be used for arguing with people who make too many errors. It’s a different kind of activity completely from trying to obtain a better understanding of what constitutes good thinking.
Why is it a problem if they’re hidden to most users? It doesn’t put off newcomers, and people can avoid them. Are you concerned about the time of LW users being wasted?
In addition to what others said, people will be discouraged from explaining downvotes. (Or maybe encouraged to explain even minor downvotes.) Once a comment is at −3 without a (good) explanation in a reply to it, people will not want to pay a penalty to explain to a potentially well meaning poster what was wrong with their comment. Instead they will be incentivized to further downvote it without explanation.
Not all comments deserving −3 karma are trolls, some are merely stupid / insensitive / wrong / unoriginal.
This change will make people think: is this comment a troll? If it is, downvote it to −3 or beyond; if not, don’t downvote below −2. If that’s desirable behavior, and we come to agree about it, and −3 is the right level for it, then we will have many comments at −2 that previously would have been downvoted further, because people will not want to tell others “you’re trolling” unless they really think so.
(And then people would probably want comments hidden at −2, not −3: the karma level of bad, though not quite trolling, comments.)
The site was seriously going to hell due to long troll-started threads and troll-feeding. It’s not a good use-case when intelligent comments are hidden by default, either. And I now see that contrary to the feature request, it’s only asking for 5 karma for immediate descendants, not anywhere in the chain, so I shall go now and ask that to be updated.
I don’t want to train readers to unhide things by default just because they might miss intelligent conversation in subthreads, I don’t want intelligent conversation in places it’s hidden by default from readers trusting the site mechanics, I want this site to stop feeding its trolls and would prefer a community solution rather than moderators wielding banhammers, and I want this site to focus its efforts positively rather than in amazing impressive refutations of bad ideas which is a primary failure mode of any intelligent Internet site. Threads with heavily downvoted ancestors should almost always not exist, because of their opportunity costs, the behaviors they reinforce, and other long-term consequences.
If this particular effort proves insufficient, the next step will be to make it impossible for users less than three months old (or with less than 1000 karma or something) to see comments under −3 at all.
the next step will be to make it impossible for users less than three months old (or with less than 1000 karma or something) to see comments under −3 at all.
I am vehemently opposed to this. If the problem is out-of-control threads, make the newbies unable to reply to downvoted comments—don’t make them unable to look at them! Don’t they need negative examples too?
As someone who is a new user, I strongly agree with Alicorn.
More options don’t always make people better off, but seeing downvoted posts is an option that is actively useful for new users. One of my first comments initially got downvoted to −1, and on seeing this, I looked at other downvoted comments and was able to use what I learned to edit my post so it eventually got voted back into positive territory.
Mistake avoidance is worth learning and downvoted posts are helpful for this. I have benefited from looking at downvoted posts, and I have no reason to believe I’m atypical in this regard.
Negative examples, if I’m a newcomer, mean that I stop reading the site because the discussion is not consistently high-quality. And newbies looking at negative examples mean that elder posters feel obliged to respond to bad comments, just in case a newbie reads them and gets fooled; it makes it mentally harder to downvote and walk away. This is a change I would strongly consider in any case.
The site was seriously going to hell due to long troll-started threads and troll-feeding.
I really don’t see this. It looks like the main cause of decline is that spontaneous top-level postings are not enough to make up for the loss of the enormous subsidy of a good writer posting as a full-time job. Three examples of hellish troll-feeding would be nice.
It looks like the main cause of decline is that spontaneous top-level postings are not enough to make up for the loss of the enormous subsidy of a good writer posting as a full-time job.
I think LW’s high standards make the activation energy for writing new posts really high. I have lots of ideas for new posts, but when it comes to actually writing them, I think to myself “is this really something LW wants to read”, “is this going to make me look like an idiot”, etc. I’ve written a few reddit self posts in the past few weeks, and it was interesting to notice how much lower my activation energy was for submitting to reddit than to LW. It’s almost as though I have an ugh field around writing LW posts.
Sure, you probably want people to have this high activation energy to a certain extent; it’s a good way to keep the quality high. But if we want more spontaneous top-level postings, maybe we should experiment with trying to shift the activation energy parameter downwards a bit and looking for a sweet spot.
For example, one idea is to frame the moderation system as more of a filtering system than a punishment/reward system: “It’s OK to write a lame post, because if you do, it’ll just get voted down and no one will read it.”
I think the punishment of getting voted down is way more salient for me than the reward of getting voted up, and maybe I’m not the only one who’s wired this way.
Would you mind sharing your reddit username? I generally like your writing and conclusions, and I’d hate to miss out on the long tail of them that may fall just below the LW margin.
Hey, thanks! I prefer to keep my reddit account mostly divorced from my real identity though, and I don’t think LW would find the self posts I mentioned especially interesting.
I will likely write a bunch for LW at some point, but currently I’m focusing on other stuff.
It looks like the main cause of decline is that spontaneous top-level postings are not enough to make up for the loss of the enormous subsidy of a good writer posting as a full-time job.
Why don’t SI people post more paper drafts and other writings here for discussion? Seems like a cheap way to both help improve the SNR here and give SI more ideas and feedback.
That’s not rationality content. AI content is sort of grandfathered in because of the SI sponsorship and Eliezer’s posting on it, but most of the LW audience is attracted by the rationality content, I think.
AI content is sort of grandfathered in because of the SI sponsorship and Eliezer’s posting on it
I thought AI content is considered on-topic here more because there is a strong argument, based on our current best understanding of rationality, that we should make a significant effort to push the Singularity, and hence the entire future of the accessible universe, in a positive direction. I guess it’s understandable that you might not want to overplay this and end up alienating people who are more interested in other rationality topics, but we still seem far from that point, judging from the relative lack of complaints and recent voting on AI and Singularity-related posts.
I’ve been doing just that, and it often has been done by others—for example, Luke & Anna’s “Intelligence Explosion: Evidence and Import” was posted several times, I believe. They may have improved the SNR, but I can’t say there seems to have been very much feedback or ideas...
I’m thinking of these papers which were posted here only after they were finished and published. Also this one, which I posted here because Carl didn’t. Also Paul Christiano posting stuff on his own blog instead of LW.
They may have improved the SNR, but I can’t say there seems to have been very much feedback or ideas...
That’s strange. I find LW feedback useful on my posts, and assumed that would be the case for others. Can you give an example of a post that didn’t gather useful feedback and ideas?
In the first link, across three papers, there’s exactly one substantive comment on a paper.
The second link has roughly 3 or 4 comment threads which revolve around a specific point which seemed to cause changes in the paper, with the rest of the comments being relatively unrelated.
The third link contains some interesting comments about the paper on a meta level, but nothing that could be useful to the author, IMO.
the power post’s few comments are dominated by citation format, matriarchy and why anyone cares. None of these were useful to me except maybe the format carping.
the Sobel post has maybe 2 or 3 comments of value
the intelligence failures link garnered 1 comment of value
I guess it wasn’t clear, but I was suggesting that if those papers had been posted here while they were still in draft form (as opposed to “finished and published”), they would have received more discussion, since people would have more incentive to participate and potentially influence the final output.
As for your posts, I think the reason for the lack of useful feedback is that they are mostly summaries of many academic papers, and it’s hard to give useful feedback without spending a lot of time reading those papers, which nobody has sufficient incentive to do.
I got some comments for my drafts. There were some valuable suggestions in both threads, which I incorporated, but I had hoped for a little more feedback.
If you post more drafts in the future, I think it would help to add more context: Who is the target audience? What are you hoping to accomplish with the papers? (If we knew that we might care more about helping you to improve them.) Do they contain any ideas that are new to LW?
Thank you. I haven’t noticed an increasing problem with trolls and/or extremely low quality posts. Some of the worst seemed to be sincere posts by people with mental problems. I don’t know whether there’s a serious problem of LW potentially becoming a crank magnet.
That would’ve been hard to find, but thankfully Gabriel did the work to find one example. Thanks Gabriel!
If you go to Configurations and Amplitude and scroll down… then you’ll suddenly find this really amazingly huge thread, much much larger than anything around it. What is this wonderful huge thread, you wonder? Why, it’s this:
Finding this kind of conversation dominating Recent Comments, much less Top Comments, is something I find dishedonic and I don’t think it helps the site either.
I thought you had something different in mind, but if it is this, I don’t understand in what way the solution of charging only for immediate replies to bad comments is unsatisfactory. When I proposed this variant of the feature in the ticket, the thread you cited was exactly the kind I was thinking about.
On the other hand, threads like this are rare, so (1) you seem to exaggerate their impact and (2) the month you’ve suggested in the ticket won’t be enough to see whether the direct-reply-fee solution helps, as we only get a few of these a year.
I saw that at the time. But as Vladimir_Nesov says, they seem rare enough to not much impair my reading experience. What is your estimate of their frequency per year or per month?
Of course this also indicates that the current countermeasure may be ineffective, or maybe it wasn’t below −3 when Yvain replied. But if the discussion cuts out after two steps, that might be good enough. Perhaps it should just be impossible to reply to anything if there’s more than two ancestors at −3 or below.
As far as I can tell, all three replies to that comment were made before it hit −3.
(I know that my reply was made with no penalty, and Yvain’s reply was already there at the time; wedrifid’s later comment also suggests that his reply wasn’t penalized.)
And I now see that contrary to the feature request, it’s only asking for 5 karma for immediate descendants, not anywhere in the chain, so I shall go now and ask that to be updated.
Please clarify this for me. If I am reading correctly, it indicates that currently only the immediate descendant is punished but that your orders are that all descendants of that comment shall be punished too. If so, that strikes me as ridiculously shortsighted. This makes us obliged to go through the entire ancestor history of a comment every time we wish to make a reply if we wish to avoid being arbitrarily punished.
If this particular effort proves insufficient, the next step will be to make it impossible for users less than three months old (or with less than 1000 karma or something) to see comments under −3 at all.
Eliezer, you should stop personally exercising your power over the forum. Your interventions are reactionary, short sighted, tend to do more harm than good and don’t adequately incorporate feedback received. Consider telling someone else at SingInst what your desired outcome is and ask them to come up with a temperate, strategically sane solution that doesn’t make you look silly.
Eliezer, I would take wedrifid’s suggestion incredibly seriously. You have gone from problem diagnosis (not shared by most of the community it seems), to designing a solution (not agreed to be effective by most, even if the problem stood), to marshalling the extremely limited development resources this website has at its disposal to implement it. None of these steps seem to have had any agreement by the community, and if it wasn’t for the bug dug out by Akis, we may not have had a chance to even discuss it after the fact.
Pacifism isn’t the only failure mode for well-kept gardens. Moderator arbitrariness is a well-known other.
I agree that well-kept gardens are better, but that means MODERATION. It doesn’t mean indiscriminately spraying parts of your garden with herbicide to get rid of weeds.
Do arbitrary moderators kill gardens? I’ve seen that happen only once, and there were many contributing factors—an exact clone people could switch to easily, moderators keeping their debater hat on, focus on punishment of specific instances rather than good generic policies, the venue being for socializing/kvetching which clashed with severity.
This makes us obliged to go through the entire ancestor history of a comment every time we wish to make a reply if we wish to avoid being arbitrarily punished.
Since the system, as it works now, asks whether we really wish to spend karma, we wouldn’t need to go through the history ourselves. Nevertheless, I agree with the latter part of your comment.
If so that strikes me as ridiculously shortsighted. This makes us obliged to go through the entire ancestor history of a comment every time we wish to make a reply if we wish to avoid being arbitrarily punished.
Actually, you get warned as soon as you hit the Reply button.
And I now see that contrary to the feature request, it’s only asking for 5 karma for immediate descendants, not anywhere in the chain, so I shall go now and ask that to be updated.
Can you explain what this would accomplish at all? I’m not seeing anything that it accomplishes. If anything, it actively makes the problem of good threads that happen to have been started in a negatively downvoted comment worse. Moreover, it would lead to the situation where people reply to a long thread and then take a karma hit because, way back up in the thread, the initial bit got downvoted. That means that, among other things, replying to threads where one is looking at a single post or with a permalink becomes essentially a karma trap. This accomplishes nothing. The primary problem with trolling is that it clogs up the recent comments sections. High quality comments downthread of a bad comment don’t have this problem. This seems like an even worse idea than the already implemented change by such an order of magnitude that part of me is wondering if this is a deliberate use of the Dark Arts to make the current change more palatable in comparison.
What are these opportunity costs, what behaviors are they reinforcing, and what are the long-term consequences you are trying to avoid?
When I respond to someone who is getting downvoted, do you think I’m likely to have otherwise been spending my time doing something better? I can’t contribute usefully to a conversation about decision theory, but I can talk about plenty of other things to other people. Exactly what opportunities are being wasted, and why are they all of a sudden being wasted now, as opposed to during whatever golden age there was before the site started going to hell? Are you trying to say intelligent posters are not posting because somewhere else in some comment thread some idiot is being talked to?
Is the end goal of this simply to have any conversation stop as soon as something gets voted to −3? Really? Three random people or 1 person with 2 sockpuppets can just end a discussion? I don’t understand why you can’t trust people to have conversations but you can trust them to downvote wisely.
It may be worth considering whether your intuitions and priors about how serious a problem trolling is are at odds with the impression of the rest of the community. Or, it may be that most of the people you have attracted here are somewhat more tolerant of some amount of trolling. It seems, at least from the general voting in this thread, that most of the community is not happy with even this change, let alone the other changes you are suggesting.
Biased sample if those who flee the long-replies-to-downvoted-comments threads have already left. At the point where LW starts being unfun for me to read, I panic. If my standards are too high… well, there are worse things that could happen to a site, like my threshold for alarm being set too low.
Personally, it seems to me that it is, but that it might well be justified anyway. I’m not a big fan of the approach taken, but I’m not yet completely against it either. I’m disappointed that it was implemented unilaterally.
Biased sample if those who flee the long-replies-to-downvoted-comments threads have already left
Valid point. How can we test this?
At the point where LW starts being unfun for me to read, I panic.
Being concerned about the signal to noise ratio is reasonable, but yes this sounds like panicking. Deciding that there’s a problem is not the same thing as deciding that a specific course of action is a good solution to the problem. (I shouldn’t need to tell you that.)
The mental model being applied appears to be sculpting the community in the manner of sculpting marble with a hammer and chisel. Whereas how it’ll work will be rather more like sculpting flesh with a hammer and chisel, giving rather a lot of side effects and not quite achieving the desired aims. Sculpting online communities really doesn’t work very well.
I don’t want to train readers to unhide things by default just because they might miss intelligent conversation in subthreads
Another way of doing this would be a five second delay to unhide hidden comments. Waiting isn’t fun and it prevents hyperbolic discounting from magnifying the positive reinforcement of reading something that someone doesn’t want you to read.
This is a really good idea. It’s incentivizing, noncoercive, and could possibly even have the look-and-feel of ordinary site delay rather than censorship and avoid getting people’s hackles up.
There’s a message warning about the impending karma loss that pops up before posting, right? Maybe the message alone would do the trick if it warned people that their contribution is going to be buried by default, informed them of the negative consequences of replying to crap and implored them to reconsider?
And I now see that contrary to the feature request, it’s only asking for 5 karma for immediate descendants, not anywhere in the chain, so I shall go now and ask that to be updated.
A lot of discussion happens without much use of the context in which it started. If a good conversation starts under (perhaps 4 levels lower than) a comment that will in the future sink to −3 or lower, that stops the conversation, without any convenient way of extracting it outside that thread. I don’t believe the conversation should be discouraged in such cases. (Do you think it should? I expect it would be very inconvenient and annoying without the additional subthread-extraction feature.)
On the other hand, typical clueless-feeding conversations are mostly back-and-forth between a user in a failure mode and those who reply to them directly. The clueless user normally gets downvoted, but those who reply to them are not, and the measure of karma-punishing those who directly reply to downvoted comments would address that.
I don’t want people to learn the habit of unhiding comments! Comments that will end up being hidden by default mostly shouldn’t exist. If there’s something amazingly intelligent to say, put it in a top-level comment to begin with, not somewhere it will be hidden by default!
I would simply like to point out the irony of having this discussion in a thread that is hidden by default due to being below a comment currently at −9.
And: Did anyone take a karma hit for this to happen? Or does it turn out that we’re just incentivizing being quick on the trigger—so whoever’s camping out on the site and can get to a comment before its score plummets gets to talk about it and no one else can without accepting the ding?
I paid 5 karma for making this comment. But if everyone in the subthread had to pay 5 karma, or if people below 1000 karma couldn’t participate at all, then this thread would be much smaller. Comments of minor significance, like this one and others, would probably not exist. This ceteris paribus I would see as a loss.
Meta-discussion is also a horrible slime-dripping cancer on a forum
Meta-discussion has to occur on fora if fora are going to function. It may be that non-functioning fora have more meta-discussion, but there are obvious correlation v. causation issues.
Meta-discussion has to occur on fora if fora are going to function.
You have some evidence for this?
In this thread and the perfectly superfluous other thread you made for this topic, I have observed a tendency to state ex cathedra beliefs on the nature of communities and what mechanisms are necessary for their survival.
Only some personal experience and general intuition. I don’t think anyone, even Eliezer, is going to argue that zero meta discussion is optimal. The question then is how much is optimal. It is possible that a weaker version of my statement like starting it with “it seems that” might have been helpful.
In this thread and the perfectly superfluous other thread you made for this topic, I have observed a tendency to state ex cathedra beliefs on the nature of communities and what mechanisms are necessary for their survival.
I agree that there’s a fair bit of stated beliefs without much evidence all around, although I’m puzzled by your description of the other thread as superfluous.
I agreed with this as a general principle strongly enough to pay a 5 karma penalty to say so. I don’t think it should be as downvoted as it is.
I can’t recall ever having participated in a forum or blog where the payoffs of meta-discussion were higher than those of discussing something else. More problematically, it is far more engaging than it should be and is an attention sink.
Er, I unhid all comments because I was curious. I know I’ve made my share of hidden comments over my time here. I was so glad when I learned there was the option to get rid of hiding by default.
I for one don’t want a mess of top level comments responding to posts that have been hidden, with no organization. There’s a reason this sort of thing is divided into threads.
Comments that will end up being hidden by default mostly shouldn’t exist.
Then why don’t the grand-high muckity-mucks just censor the posts honestly? I do not see how that could possibly be less effective than this crowd-sourced star chamber scheme, which manages to be simultaneously opaque, unaccountable, and open to abuse by the trolls it’s supposed to be suppressing.
I agree with this subgoal, but the inconvenience and annoyance of having your whole (good) discussion starting to get punished after it is well underway because of the properties of some grand-grand...-parent comment on an unrelated topic seems like a strong argument against. I think this shouldn’t be done until a way of mitigating this problem is found.
I’d love to have a way to move comments. If anyone’s willing to donate enough money, this site could hire a full-time programmer and have all kinds of amazing new features. Meanwhile the development resources just don’t exist.
Threads with downvoted ancestors were already being punished. They got hidden by default with no warning to commenters that this is the case. Unless people have already learned to unhide by reflex—and then the site has no visual filter mechanism!
That it’s difficult to do this right is not an argument for doing it poorly. My point is that it’ll have a negative effect on net if implemented without thread-moving, with the correct goal of discouraging bad conversations getting obscured by the problem I’ve pointed out. Only if the problem is mitigated (by thread-moving or something else) will it be a good idea to implement what you suggest. If it can’t be mitigated with available resources, then nothing more should be done for now.
I’d love to have a way to move comments. If anyone’s willing to donate enough money, this site could hire a full-time programmer and have all kinds of amazing new features. Meanwhile the development resources just don’t exist.
How much would part-time or one-off single feature development work cost? If you are going to tell the public that a problem is easily solved with money, you should aim to give the public a sense of the problem’s scope.
A web developer volunteered to help improve the site. Sorry that the link to the volunteer offer goes to a slime-dripping cancer meta thread, but that is where it happens to be. The link. drinks a chaser for my −5 karma points
I’m replying to your post because the system doesn’t allow me to reply directly to Yudkowsky, since I don’t have enough karma (karma can become negative due to downvotes but not by paying the penalty, apparently).
You might want to consider splitting LW off from SI and operating it as a separate charity, because there might be people who would wish to donate to LW but not to SI.
I’m proceeding to answer anyway. I have karma to burn.
Does the karma subtraction happen only for answers to comments which are at −3 or below at the time the answer is posted, or does that −5 cost come and go depending on the karma of the comment being answered? Or is the loss permanent regardless of what the karma of the comment being answered later becomes?
I think Grognor was right when he pointed out in a different thread that LWers pay lip service to gardening but don’t engage in it. We’ve developed a very strong aversion to being downvoted and as a result don’t downvote enough.
A polite, reasonable but utterly useless or inane comment should be at −1 or −2 or −3, so people who want to make good use of their time don’t waste it on that.
Whatever future changes you consider, I think they really should be geared at getting LWers to start behaving like this. Perhaps make it so that posters below or above a certain karma score have to make about as many upvotes as downvotes. Or, in the same way we already limit the number of downvotes to equal the person’s positive karma, why not have the same limit for upvotes?
That would only make sense for posters with, say, negative karma in the last month. Otherwise this results in (self-)censoring of controversial comments.
It’s almost always possible to package controversial claims so that the posts/comments communicating them would be upvoted (and would be better for that).
True, though I hoped that this forum would not demand as high a level of political correctness. Especially given that there is a simple technical solution.
Hence my suggestion of only applying it to those with negative 30-day karma. This excludes spuriously downvoted comments and prevents most malicious sniping strategies.
Your policy looks well targeted to people I’d consider trolls. The thing is, I think the people in favor of the original policy have a much broader view of what constitutes a troll.
Seems like a sizable minority want a lot of other people to shut up.
Well then impress upon Eliezer how much of an idiot he is, Vladimir, instead of getting snippy with army1987. Eliezer is the one who’s using the word so much, Vladimir.
In general, I’m opposed to automated karma modification. I’m pleased with my relatively high karma, and it’s because I respect this community and the karma score is the result of upvotes (and rather few downvotes) from human beings.
If we ever get ems (and possibly AIs) on LW, my default would be to give their up and down votes the same weight.
That’s live now? Awesome! We’ve been needing that for… well, years, actually, but it got really bad a couple of months ago. “Troll feeding fee” is such a very apt description, too.
I don’t recall this being discussed by the community at all. It seems like a bad idea. Valuable conversations can extend from comments that are already negative. −3 is also not that negative. This also discourages people from actually explaining why someone is wrong if there are a lot of people who downvote the comment. This will both make it harder for that person to become less wrong and make it more likely that bystanders who are reading the conversation will not see any explanation for why the comment is downvoted. Overall this is at best a mixed idea that should have been discussed with the community before implementing.
This isn’t terribly relevant. Moderators that discuss every decision with the community and only act when they’ve built consensus fall prey to vocal minorities, e.g., Wikipedia. Then they tend to stagnate.
Instead of trying to build a consensus, Eliezer could have asked the community “Here are the consequences I intend/foresee with this change. Are there any important ones I may have overlooked?”, which has no obvious downsides that I can see, other than the opportunity cost of writing out the question.
The obvious downside in such cases is that asking for and being told what the downsides are and then ignoring them is often perceived as even worse than not asking at all. If Eliezer anticipated that he would go ahead with his change regardless of what downsides are pointed out then it could be detrimental to ask.
(Note: That is the downside of asking, not a claim that asking would be a net negative.)
Do you think this is true even if one made it clear that one is not seeking a consensus but reserving the right to make the final cost/benefit judgement? If so, it’s contrary to my expectations (i.e., I don’t see why that would be perceived as being worse than not asking at all), and I would appreciate any further explanations you might have.
Yes, discussing every decision with the community is probably a bad idea. But that doesn’t mean that specific, large scale changes shouldn’t be discussed.
Very well, then: why should specific, large scale changes be discussed?
I’m intentionally ignoring the implication that this specific change was a “large scale” one.
Because a community is made up of its users, and if people find the changes negative enough, they will stop using the site.
Because the community has additional experience and may have thoughts about a proposal. The impression one gets when moderating something can be very different from the impression one gets in the general case. Discussing such issues in advance helps prevent severe unintended consequences from occurring.
In short, you’re hoping for the positive part of WWIC, while hoping the negative half doesn’t happen.
See references therein for applications to social websites.
If a reply to a downvoted comment is not downvoted, replies to that reply are not punished, so good subthreads are unaffected.
5 karma points is not that much, either, so if it’s really worth replying to, it’s possible to continue the conversation. It’s usually not worth replying, though, and when not feeling any cost people would ignore that consideration, giving fuel to bad conversations as a result. The motivation I agree with is to stop bad conversations, not necessarily replies to individual bad comments, which are just the means.
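Mechanically, the fee being debated here reduces to a very simple rule. The sketch below is purely illustrative — the constant names, function name, and structure are my assumptions, not the actual LessWrong codebase — but it captures the behavior as described in this thread (immediate parents only, as noted above):

```python
# Hedged sketch of the reply-fee rule discussed in this thread.
# HIDE_THRESHOLD and REPLY_FEE use the values mentioned (-3 and 5);
# everything else is an illustrative assumption, not real site code.
HIDE_THRESHOLD = -3
REPLY_FEE = 5

def reply_fee(parent_score: int) -> int:
    """Karma charged for posting a reply: 5 points if the immediate
    parent sits at or below the hiding threshold, otherwise nothing."""
    return REPLY_FEE if parent_score <= HIDE_THRESHOLD else 0
```

Under this reading, replying to a comment at −3 costs 5 karma, while replying to one at −2 costs nothing — which is exactly the boundary several commenters above are arguing over.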
So, if I post some honest argument but make a couple of stupid mistakes (I hope that such a post will get downvoted to around −5), anyone who explains to me what I have missed will be punished?
Yes, this policy decision doesn’t happen to be one sided. What you describe seems to be a comparatively rare event though. If you actually want to get better, you’ll have opportunities other than particularly downvoted blunders to seek feedback, and there is an obvious solution of making a non-downvoted separate comment that asks for feedback in such cases, so that said feedback would not be punished.
If I’m hearing you correctly, the plan to limit trolls is to push them to make fewer posts inside the auto-hidden areas and more posts outside of them?
Well, uh, I suppose that’s one way to deal with trolls.
The idea is that they’ll make fewer posts if the non-trolls don’t respond.
But if saying something and then creating a separate comment to ask for feedback became acceptable, the trolls would create even more visible noise before getting into obviously malicious territory.
I agree that this is a failure mode, but it’s not an absolute one: people could explain to you via PM. Then you’d be free to edit the comment, and if its score floated back up discussion could ensue below.
PMd explanations are not publicly visible, so they don’t help others who read the thread and make the same mistake as the downvoted poster. They can’t be upvoted by others, which removes a big (in my experience) incentive to post the explanations.
Also, visible actions spread. Someone who posts a correction encourages other people to post corrections to other comments, while someone who PMs does not encourage that behavior. But it is also visible how many responses there are, so that people don’t overwhelm an individual with responses, while they might overwhelm with PMs.
Also, with visible explanations everybody knows what has already been explained and what hasn’t. If it were common to explain things via PM, some points would be raised multiple times while others not at all, depending on how many readers estimated a high enough probability that they were the first to comment on an issue and thus weren’t wasting their time by making duplicate comments.
Really? Isn’t hiding of down-voted comments enough? It protects people from seeing crap unless they explicitly ask for it (and they should be able to). There could still be a problem if comment threads were sprinkled with huge contiguous swathes of hidden comments but does that actually happen? (I think that the maximum number of contiguous hidden comments that I have seen is two.)
I find the fact that even obviously aggressive and stupid stuff tends to be replied to in a polite and informative manner one of the impressive things about LessWrong. Some are concerned that this place appears dogmatic to newcomers. Having massive downvoting accompanied by explanations is one way in which those concerns can be alleviated.
Curiously, I initially wanted to downvote your comment but hesitated because I didn’t want people replying to it being punished. On reflection, your comment might not actually merit downvoting (I think there was a bit of a ‘who the hell does he think he is, treating LessWrong as his own private playground’ knee-jerk reaction here), but this is not at all a good sign on what effects this will have on troll-fighting. There are comments that are stupid but seem to be made in good faith and thus be worthy of a response. If I don’t feel like making that response, I might just downvote and be on my way. Now, downvoting will provide a disincentive to any potential educators, which in turn discourages downvoting itself.
And finally, people who strongly dislike that change can decide to mentally subtract 5 points of karma from the score of each reply to a comment that’s at −3 or below and then base their voting decision on that (which will tend to push the scores of such replies to 5 + whatever they would have gotten otherwise), so nyah!
It doesn’t happen often, and so doesn’t seem to be a particularly serious problem, but when it does it’s really bad and the growth of such subthreads is very hard to stop.
See the hidden comments to this post for an example. Just one user causes the damage directly, but that wouldn’t happen to the extent it did without the polite and informative replies of others that fuel the conversation. Good contributions to bad conversations have negative net consequences.
So there is a dark part of LW archives, looking upon which might very well destroy your soul. Fortunately, you won’t accidentally see it—you have to consciously choose to unhide the top-level comment. I suppose the damage is that people are unreasonably drawn to that kind of thing and they are also unreasonably drawn to replying to stupid stuff even when it’s obviously hopeless (‘someone is wrong on the internet’ syndrome) so we have to protect them from wasting their time. I guess I dislike paternalism enough that such an argument doesn’t convince me (and less seriously, if someone feels inclined to waste time while they’re browsing the Internet, then they are already doomed anyway).
I actively disagree. You don’t have to read those threads, but polite and measured responses to dumb ideas are one of the best ways to get yourself out of those ideas. We literally have dumb question threads for exactly that purpose. I also think it’s good to encourage people to be patient and explain things. What “damage” was caused? A couple hundred posts about something that you don’t have to read, but which could very well be useful to other people.
You want to take a community of people that try to help others understand and instead silence all conversation along lines you disapprove of.
In the example I gave, this clearly didn’t apply.
There are important details that distinguish the conversations happening in stupid questions threads. These details also cause those threads to not be downvoted.
You are throwing out relevant details again and distorting other details in the direction of your argument. The qualifier “all conversation” is inaccurate, for example. Alternatively, if disapproval is taken to be referring to a (value assignment) decision (rather than unreflective emotional response, say), it’s tautological that I’d be trying to get rid of things I disapprove of.
Hi! I just want to test the new system.
And you are throwing out relevant details whenever you punish people for responding to downvoted comments.
I don’t understand this point. Not punishing people in those cases would use the same information, so the amount of available information doesn’t characterize any given choice of the effect.
Individual downvotes for bad posts are sensitive to details about those posts. Blanket downvotes for replies to negative posts are not.
(Edited the grandparent. My point is that lack of blanket downvotes for replies to negative posts is equally insensitive to details about those posts. This consideration doesn’t help with the question of punish vs. not-punish.)
What are these negative net consequences? I enjoyed reading the good replies to that conversation. If you think the problem is the volume of conversation, then you have to explain why shutting down long “bad” conversations is worth losing shorter elegant responses to bad points.
Lesswrong puts a lot of stock in trying to fight human biases, which seems to me that saying “Don’t do that!” with negative karma, and then rewarding people explaining why not, is exactly what we should be doing.
(It’s much better now that all of the bad comments are removed either by the author or by moderators, so you are not looking at the problem as it presents itself in the wild, but you can imagine based on the number of downvotes.)
I don’t think LW should be used for arguing with people who make too many errors. It’s a different kind of activity completely from trying to obtain a better understanding of what constitutes good thinking.
Why is it a problem if they’re hidden to most users? It doesn’t put off newcomers, and people can avoid them. Are you concerned about the time of LW users being wasted?
Well, he is LW’s benevolent dictator.
In addition to what others said, people will be discouraged from explaining downvotes. (Or maybe encouraged to explain even minor downvotes.) Once a comment is at −3 without a (good) explanation in a reply to it, people will not want to pay a penalty to explain to a potentially well meaning poster what was wrong with their comment. Instead they will be incentivized to further downvote it without explanation.
Not all comments deserving −3 karma are trolls, some are merely stupid / insensitive / wrong / unoriginal.
This change will make people think: is this comment a troll? If it is, downvote it to −3 or beyond; if not, don’t downvote below −2. If that’s desirable behavior, and we come to agree about it, and −3 is the right level for it, then we will have many comments at −2 that previously would have been downvoted further, because people will not want to tell others “you’re trolling” unless they really think so.
(And then people would probably want comments hidden at −2, not −3: the karma level of bad, though not quite trolling, comments.)
Not all comments receiving −3 karma are trolls either.
The site was seriously going to hell due to long troll-started threads and troll-feeding. It’s not a good use-case when intelligent comments are hidden by default, either. And I now see that contrary to the feature request, it’s only asking for 5 karma for immediate descendants, not anywhere in the chain, so I shall go now and ask that to be updated.
I don’t want to train readers to unhide things by default just because they might miss intelligent conversation in subthreads, I don’t want intelligent conversation in places it’s hidden by default from readers trusting the site mechanics, I want this site to stop feeding its trolls and would prefer a community solution rather than moderators wielding banhammers, and I want this site to focus its efforts positively rather than in amazing impressive refutations of bad ideas which is a primary failure mode of any intelligent Internet site. Threads with heavily downvoted ancestors should almost always not exist, because of their opportunity costs, the behaviors they reinforce, and other long-term consequences.
If this particular effort proves insufficient, the next step will be to make it impossible for users less than three months old (or with less than 1000 karma or something) to see comments under −3 at all.
I am vehemently opposed to this. If the problem is out-of-control threads, make the newbies unable to reply to downvoted comments—don’t make them unable to look at them! Don’t they need negative examples too?
As someone who is a new user, I strongly agree with Alicorn.
More options don’t always make people better off, but seeing downvoted posts is an option that is actively useful for new users. One of my first comments initially got downvoted to −1, and on seeing this, I looked at other downvoted comments and was able to use what I learned to edit my post so it eventually got voted back into positive territory.
Mistake avoidance is worth learning and downvoted posts are helpful for this. I have benefited from looking at downvoted posts, and I have no reason to believe I’m atypical in this regard.
Negative examples, if I’m a newcomer, mean that I stop reading the site because the discussion is not consistently high-quality. And newbies looking at negative examples mean that elder posters feel obliged to respond to bad comments, just in case a newbie reads them and gets fooled; it makes it mentally harder to downvote and walk away. This is a change I would strongly consider in any case.
I really don’t see this. It looks like the main cause of decline is that spontaneous top-level postings are not enough to make up for the loss of the enormous subsidy of a good writer posting as a full-time job. Three examples of hellish troll-feeding would be nice.
I think LW’s high standards make the activation energy for writing new posts really high. I have lots of ideas for new posts, but when it comes to actually writing them, I think to myself “is this really something LW wants to read”, “is this going to make me look like an idiot”, etc. I’ve written a few reddit self posts in the past few weeks, and it was interesting to notice how much lower my activation energy was for submitting to reddit than to LW. It’s almost as though I have an ugh field around writing LW posts.
Sure, you probably want people to have this high activation energy to a certain extent; it’s a good way to keep the quality high. But if we want more spontaneous top-level postings, maybe we should experiment with trying to shift the activation energy parameter downwards a bit and looking for a sweet spot.
For example, one idea is to frame the moderation system as more of a filtering system than a punishment/reward system: “It’s OK to write a lame post, because if you do, it’ll just get voted down and no one will read it.”
Another idea is to recognize that a given user’s prediction of how much LW will like their post is probably going to be terrible, and tell people that if you never get voted down, you’re not submitting enough.
I think the punishment of getting voted down is way more salient for me than the reward of getting voted up, and maybe I’m not the only one who’s wired this way.
Would you mind sharing your reddit username? I generally like your writing and conclusions, and I’d hate to miss out on the long tail of them that may fall just below the LW margin.
Hey, thanks! I prefer to keep my reddit account mostly divorced from my real identity though, and I don’t think LW would find the self posts I mentioned especially interesting.
I will likely write a bunch for LW at some point, but currently I’m focusing on other stuff.
Why don’t SI people post more paper drafts and other writings here for discussion? Seems like a cheap way to both help improve the SNR here and give SI more ideas and feedback.
That’s not rationality content. AI content is sort of grandfathered in because of the SI sponsorship and Eliezer’s posting on it, but most of the LW audience is attracted by the rationality content, I think.
I thought AI content is considered on-topic here more because there is a strong argument, based on our current best understanding of rationality, that we should make a significant effort to push the Singularity and hence the entire future of the accessible universe in a positive direction. I guess it’s understandable that you might not want to overplay this and end up alienating people who are more interested in other rationality topics, but we seem still far from that point, judging from the relative lack of complaints and recent voting on AI and Singularity-related posts.
I don’t know how much paper content CFAR is planning to produce, but it would escape this objection.
I’ve been doing just that, and it often has been done by others—for example, Luke & Anna’s “Intelligence Explosion: Evidence and Import” was posted several times, I believe. They may have improved the SNR, but I can’t say there seems to have been very much feedback or many ideas...
I’m thinking of these papers which were posted here only after they were finished and published. Also this one which I posted here because Carl didn’t. Also Paul Christiano posting stuff on his own blog instead of LW.
That’s strange. I find LW feedback useful on my posts, and assumed that would be the case for others. Can you give an example of a post that didn’t gather useful feedback and ideas?
Well, look at your own links.
In the first link, for three papers, there’s exactly one substantive comment on a paper
The second link has roughly 3 or 4 comment threads which revolve around a specific point which seemed to cause changes in the paper, with the rest of the comments being relatively unrelated.
The third link contains some interesting comments about the paper on a meta level, but nothing that could be useful to the author, IMO.
As for my own feedback, I keep a public list in http://www.gwern.net/Links#fn2 Going backwards through the last 3:
the power post’s few comments are dominated by citation format, matriarchy and why anyone cares. None of these were useful to me except maybe the format carping.
the Sobel post has maybe 2 or 3 comments of value
the intelligence failures link garnered 1 comment of value
I guess it wasn’t clear, but I was suggesting that if those papers had been posted here while they were still in draft form (as opposed to “finished and published”), they would have received more discussions since people would have more incentives to participate and potentially influence the final output.
As for your posts, I think the reason for the lack of useful feedback is that they are mostly summaries of many academic papers, and it’s hard to give useful feedback without spending a lot of time reading those papers, which nobody has a sufficient incentive to do.
I got some comments for my drafts. There were some valuable suggestions in both threads which I incorporated, but I had hoped for a little more feedback.
If you post more drafts in the future, I think it would help to add more context: Who is the target audience? What are you hoping to accomplish with the papers? (If we knew that we might care more about helping you to improve them.) Do they contain any ideas that are new to LW?
Thanks, that’s a good suggestion.
Thank you. I haven’t noticed an increasing problem with trolls and/or extremely low-quality posts. Some of the worst seemed to be sincere posts by people with mental problems. I don’t know whether there’s a serious problem of LW potentially becoming a crank magnet.
That would’ve been hard to find, but thankfully Gabriel did the work to find one example. Thanks Gabriel!
If you go to Configurations and Amplitude and scroll down… then you’ll suddenly find this really amazingly huge thread, much much larger than anything around it. What is this wonderful huge thread, you wonder? Why, it’s this:
http://lesswrong.com/lw/pd/configurations_and_amplitude/6bwo
Finding this kind of conversation dominating Recent Comments, much less Top Comments, is something I find dishedonic and I don’t think it helps the site either.
I thought you had something different in mind, but if it is this, I don’t understand in what way is the solution of charging only for immediate replies to bad comments unsatisfactory. When I proposed this variant of the feature in the ticket, the thread you cited was exactly of the kind I was thinking about.
On the other hand, threads like this are rare, so (1) you seem to exaggerate their impact and (2) the month that you’ve suggested in the ticket won’t be enough to see whether the direct-reply-fee solution helps, as we only get a few of these in a year.
I saw that at the time. But as Vladimir_Nesov says, they seem rare enough to not much impair my reading experience. What is your estimate of their frequency per year or per month?
Here’s a nice trollfeeding from today:
http://lesswrong.com/lw/ece/rationality_quotes_september_2012/7bbl
Of course this also indicates that the current countermeasure may be ineffective, or maybe the comment wasn’t below −3 when Yvain replied. But if the discussion cuts out after two steps, that might be good enough. Perhaps it should simply be impossible to reply to anything with more than two ancestors at −3 or below.
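For concreteness, the stricter rule proposed here (blocking replies once more than two comments in the ancestor chain sit at −3 or below) could be sketched roughly as follows. This is a hypothetical illustration, not LessWrong’s actual data model; the `Comment` class, field names, and constants are all made up for the sketch:

```python
# Hypothetical sketch of the proposed rule: walk up the ancestor chain
# and refuse the reply once more than two comments are at -3 or below.
from dataclasses import dataclass
from typing import Optional

TROLL_THRESHOLD = -3   # score at or below this counts as "downvoted"
MAX_BAD_ANCESTORS = 2  # proposed cutoff: discussion ends after two steps

@dataclass
class Comment:
    score: int
    parent: Optional["Comment"] = None

def reply_allowed(comment: Comment) -> bool:
    """Return False once more than two comments in the chain
    (the target comment and its ancestors) are at -3 or below."""
    bad = 0
    node: Optional[Comment] = comment
    while node is not None:
        if node.score <= TROLL_THRESHOLD:
            bad += 1
        node = node.parent
    return bad <= MAX_BAD_ANCESTORS
```

Under this sketch, a reply one or two levels below a downvoted comment still goes through, but a third downvoted step in the chain shuts the subthread down.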
You know what would have prevented this?
If you’d told me in June, when I asked you for moderation guidelines beyond “kill shoe ads”, that I should ban comments like that.
As far as I can tell, all three replies to that comment were made before it hit −3.
(I know that my reply was made with no penalty, and Yvain’s reply was already there at the time; wedrifid’s later comment also suggests that his reply wasn’t penalized.)
But not all the subcomments.
(paid a karma cost to respond to this comment)
But then, the circumvention will be to stop using threaded comments properly and start new comment threads to reply to comments below the threshold.
Please clarify this for me. If I am reading correctly, it indicates that currently only the immediate descendant is punished, but that your orders are that all descendants of that comment shall be punished too. If so, that strikes me as ridiculously shortsighted. It obliges us to go through the entire ancestor history of a comment every time we wish to reply, if we wish to avoid being arbitrarily punished.
Eliezer, you should stop personally exercising your power over the forum. Your interventions are reactionary, short sighted, tend to do more harm than good and don’t adequately incorporate feedback received. Consider telling someone else at SingInst what your desired outcome is and ask them to come up with a temperate, strategically sane solution that doesn’t make you look silly.
Eliezer, I would take wedrifid’s suggestion incredibly seriously. You have gone from problem diagnosis (not shared by most of the community it seems), to designing a solution (not agreed to be effective by most, even if the problem stood), to marshalling the extremely limited development resources this website has at its disposal to implement it. None of these steps seem to have had any agreement by the community, and if it wasn’t for the bug dug out by Akis, we may not have had a chance to even discuss it after the fact.
Pacifism isn’t the only failure mode for well-kept gardens. Moderator arbitrariness is a well-known other.
I agree that well-kept gardens are better, but that means MODERATION. It doesn’t mean indiscriminately spraying parts of your garden with herbicide to get rid of weeds.
To clarify: HALT, MELT AND CATCH FIRE, OR THE SITE WILL DIE!
Do arbitrary moderators kill gardens? I’ve seen that happen only once, and there were many contributing factors—an exact clone people could switch to easily, moderators keeping their debater hat on, focus on punishment of specific instances rather than good generic policies, the venue being for socializing/kvetching which clashed with severity.
Death isn’t the only type of failure mode.
Since the system, as it works now, asks whether we really wish to spend karma, we wouldn’t need to go through. Nevertheless I agree with the latter part of your comment.
Actually, you get warned as soon as you hit the Reply button.
Can you explain what this would accomplish at all? I’m not seeing anything that it accomplishes. If anything, it actively makes the problem of good threads that happen to have started under a downvoted comment worse. Moreover, it would lead to situations where people reply to a long thread and then take a karma hit because, way back up in the thread, the initial bit got downvoted. Among other things, that means replying to threads where one is looking at a single post or a permalink becomes essentially a karma trap. This accomplishes nothing. The primary problem with trolling is that it clogs up the Recent Comments section. High-quality comments downthread of a bad comment don’t have this problem. This seems like an even worse idea than the already-implemented change by such an order of magnitude that part of me wonders if this is a deliberate use of the Dark Arts to make the current change more palatable in comparison.
Can you give a few examples that you think are particularly bad?
Responding to the edit:
What are these opportunity costs, what behaviors are they reinforcing, and what are the long-term consequences you are trying to avoid?
When I respond to someone who is getting downvoted, do you think I’m likely to have been spending my time doing something better? I can’t contribute usefully to a conversation about decision theory, but I can talk about plenty of other things to other people. Exactly what opportunities are being wasted, and why are they suddenly being wasted now, as opposed to during whatever golden age there was before the site started going to hell? Are you trying to say intelligent posters are not posting because somewhere in some comment thread some idiot is being talked to?
Is the end goal of this simply to have any conversation stop as soon as something gets voted to −3? Really? Three random people or 1 person with 2 sockpuppets can just end a discussion? I don’t understand why you can’t trust people to have conversations but you can trust them to downvote wisely.
It may be worth considering whether your intuitions and priors about how serious a problem trolling is are at odds with the impression of the rest of the community. Or, it may be that most of the people you have attracted here are somewhat more tolerant of some amount of trolling. It seems, at least from the general voting in this thread, that most of the community is not happy with even this change, let alone the other changes you are suggesting.
Biased sample if those who flee the long-replies-to-downvoted-comments threads have already left. At the point where LW starts being unfun for me to read, I panic. If my standards are too high… well, there’s worse things that could happen to a site, like my threshold for alarm being set too low.
Do you feel that this is an example of you being intolerant of other posters’ tolerance of trolls? If not, why?
Personally, it seems to me that it is, but that it might well be justified anyway. I’m not a big fan of the approach taken, but I’m not yet completely against it either. I’m disappointed that it was implemented unilaterally.
Valid point. How can we test this?
Being concerned about the signal to noise ratio is reasonable, but yes this sounds like panicking. Deciding that there’s a problem is not the same thing as deciding that a specific course of action is a good solution to the problem. (I shouldn’t need to tell you that.)
The mental model being applied appears to be sculpting the community in the manner of sculpting marble with a hammer and chisel. Whereas how it’ll work will be rather more like sculpting flesh with a hammer and chisel, giving rather a lot of side effects and not quite achieving the desired aims. Sculpting online communities really doesn’t work very well.
Another way of doing this would be a five-second delay to unhide hidden comments. Waiting isn’t fun, and it prevents hyperbolic discounting from magnifying the positive reinforcement of reading something that someone doesn’t want you to read.
This is a really good idea. It’s incentivizing, noncoercive, and could possibly even have the look-and-feel of ordinary site delay rather than censorship and avoid getting people’s hackles up.
There’s a message warning about the impending karma loss that pops up before posting, right? Maybe the message alone would do the trick if it warned people that their contribution is going to be buried by default, informed them of the negative consequences of replying to crap and implored them to reconsider?
A lot of discussion happens without much use of the context in which it started. If a good conversation starts under (perhaps 4 levels lower than) a comment that will in the future sink to −3 or lower, that stops the conversation, without any convenient way of extracting it outside that thread. I don’t believe the conversation should be discouraged in such cases. (Do you think it should? I expect it would be very inconvenient and annoying without the additional subthread-extraction feature.)
On the other hand, typical clueless-feeding conversations are mostly back-and-forth between a user in a failure mode and those who reply to them directly. The clueless normally gets downvoted, but those who reply to them are not, and the measure of Karma-punishing those who directly reply to downvoted comments would address that.
I don’t want people to learn the habit of unhiding comments! Comments that will end up being hidden by default mostly shouldn’t exist. If there’s something amazingly intelligent to say, put it in a top-level comment to begin with, not somewhere it will be hidden by default!
I would simply like to point out the irony of having this discussion in a thread that is hidden by default due to being below a comment currently at −9.
And: Did anyone take a karma hit for this to happen? Or does it turn out that we’re just incentivizing being quick on the trigger—so whoever’s camping out on the site and can get to a comment before its score plummets gets to talk about it and no one else can without accepting the ding?
I paid 5 karma for making this comment. But if everyone in the subthread had to pay 5 karma, or if people below 1000 karma couldn’t participate at all, then this thread would be much smaller. Comments of minor significance, like this one and others, would probably not exist. Other things being equal, I would see that as a loss.
I have taken at least 3 karma hits to talk about this.
Or worse, if someone wants to reply to a comment at −3, they will first upvote it to −2 just to avoid the penalty.
Well, they can undo the up-vote afterwards.
Well upvote the grandparent so that there can be more responses, then.
Round and round it goes …
Meta-discussion is also a horrible slime-dripping cancer on a forum, so I’m okay with nobody ever seeing it again.
Meta-discussion has to occur on fora if fora are going to function. It may be that non-functioning fora have more meta-discussion, but there are obvious correlation v. causation issues.
You have some evidence for this?
In this thread and the perfectly superfluous other thread you made for this topic, I have observed a tendency to state ex cathedra beliefs on the nature of communities and what mechanisms are necessary for their survival.
Only some personal experience and general intuition. I don’t think anyone, even Eliezer, is going to argue that zero meta discussion is optimal. The question then is how much is optimal. It is possible that a weaker version of my statement like starting it with “it seems that” might have been helpful.
I agree that there’s a fair bit of stated beliefs without much evidence all around, although I’m puzzled by your description of the other thread as superfluous.
Do we have any reliable authorities on the sociology of internet forums yet?
I agreed with this as a general principle strongly enough to pay a 5 karma penalty to say so. I don’t think it should be as down voted as it is.
I can’t recall ever participating in a forum or blog where the payoffs of meta-discussion were higher than discussing something else. More problematically, it is far more engaging than it should be, and it is an attention sink.
If you really believe meta-discussions are inappropriate, delete the parent comment.
Er, I unhid all comments because I was curious. I know I’ve made my share of hidden comments over my time here. I was so glad when I learned there was the option to get rid of hiding by default.
Whatever you WANT to be the case, it’s just not true that there are no worthwhile comments that end up hidden.
(bit of irony here :P)
Perhaps acceptable casualties.
I for one don’t want a mess of top level comments responding to posts that have been hidden, with no organization. There’s a reason this sort of thing is divided into threads.
Then why don’t the grand-high muckity-mucks just censor the posts honestly? I do not see how that could possibly be less effective than this crowd-sourced star chamber scheme, which manages to be simultaneously opaque, unaccountable, and open to abuse by the trolls it’s supposed to be suppressing.
I agree with this subgoal, but the inconvenience and annoyance of having your whole (good) discussion starting to get punished after it is well underway because of the properties of some grand-grand...-parent comment on an unrelated topic seems like a strong argument against. I think this shouldn’t be done until a way of mitigating this problem is found.
I’d love to have a way to move comments. If anyone’s willing to donate enough money, this site could hire a full-time programmer and have all kinds of amazing new features. Meanwhile the development resources just don’t exist.
Threads with downvoted ancestors were already being punished. They got hidden by default with no warning to commenters that this is the case. Unless people have already learned to unhide by reflex—and then the site has no visual filter mechanism!
That it’s difficult to do this right is not an argument for doing it poorly. My point is that it’ll have a negative effect on net if implemented without thread-moving, with the correct goal of discouraging bad conversations getting obscured by the problem I’ve pointed out. Only if the problem is mitigated (by thread-moving or something else) will it be a good idea to implement what you suggest. If it can’t be mitigated with available resources, then nothing more should be done for now.
How much would part-time or one-off single feature development work cost? If you are going to tell the public that a problem is easily solved with money, you should aim to give the public a sense of the problem’s scope.
A web developer volunteered to help improve the site. Sorry that the link to the volunteer offer goes to a slime-dripping cancer meta thread, but that is where it happens to be. The link. drinks a chaser for my −5 karma points
I am replying to your post because the system doesn’t allow me to reply directly to Yudkowsky, since I don’t have enough karma (karma can become negative due to downvotes but not by paying the penalty, apparently).
You might want to consider splitting LW off from SI and operating it as a separate charity, because there might be people who would wish to donate to LW but not to SI.
There seems to be a significant number of people who browse with anti-kibitzer and full-unhide.
If you want us to stop using such option combinations, maybe putting a warning into preferences would be a reasonable first step?
I’m proceeding to answer anyway. I have karma to burn.
Does the karma subtraction happen only for replies to comments which are at −3 or below when the reply is posted, or does the −5 cost come and go depending on the karma of the comment being answered? Or is the loss permanent regardless of what the karma of the comment being answered later becomes?
If we distinguish filtering and feedback, that doesn’t work as a disincentive for people who participate.
Let’s say this gets downvoted
One can post a child comment like this, or a sibling comment, to get answers without karma penalties for those who respond.
You may well end up encouraging forum mechanics abuse with these policies.
No, you can’t, which is why I just paid 5 karma.
[citation needed]
I think Grognor was right when he pointed out in a different thread that LWers pay lip service to gardening but don’t engage in it. We’ve developed a very strong aversion to being downvoted and as a result don’t downvote enough.
A polite, reasonable but utterly useless or inane comment should be at −1 or −2 or −3, so people who want to make good use of their time don’t waste it on that.
Whatever future changes you consider, I think they really should be geared at getting LWers to start behaving like this. Perhaps make it so that posters below or above a certain karma score have to make about as many upvotes as downvotes. Or, in the same way we already limit a person’s number of downvotes to their positive karma, why not have the same limit for upvotes?
That would only make sense for posters with, say, negative karma in the last month. Otherwise this results in (self-)censoring of controversial comments.
It’s almost always possible to package controversial claims so that the posts/comments communicating them would be upvoted (and would be better for that).
True, though I hoped that this forum would not demand as high a level of political correctness. Especially given that there is a simple technical solution.
Censoring seems to be the point.
Censoring trolls seems to be the point, not censoring discussions of potentially controversial comments left by the respected forum regulars.
Where you see a troll, I may see an insightful fellow.
I always wonder why so many people assume that the censoring gun can only blast people they want censored.
Hence my suggestion of only applying it to those with negative 30-day karma. This excludes spuriously downvoted comments and prevents most malicious sniping strategies.
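The suggestion above could look something like the following sketch. The −5 fee, the −3 threshold, and the 30-day window are this thread’s numbers; the function names and the vote-event representation are hypothetical, invented purely for illustration:

```python
# Hypothetical sketch: charge the -5 karma reply fee only when the parent
# comment is downvoted AND its author has negative karma over the
# trailing 30 days (filtering out spuriously downvoted regulars).
from datetime import datetime, timedelta

REPLY_FEE = 5                 # karma cost discussed in this thread
DOWNVOTE_THRESHOLD = -3       # parent score at which the fee applies
WINDOW = timedelta(days=30)   # trailing window for the author's karma

def recent_karma(votes, now):
    """Sum the (timestamp, delta) vote events within the last 30 days."""
    return sum(delta for ts, delta in votes if now - ts <= WINDOW)

def reply_fee(parent_score, author_votes, now):
    """Fee applies only for downvoted comments by net-negative authors."""
    if parent_score <= DOWNVOTE_THRESHOLD and recent_karma(author_votes, now) < 0:
        return REPLY_FEE
    return 0
```

A respected regular whose comment is briefly at −4 would cost nothing to answer under this variant, while replies to someone accumulating net downvotes over the month would still carry the fee.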
Ok, I was speaking to the original policy.
Your policy looks well targeted to people I’d consider trolls. The thing is, I think the people in favor of the original policy have a much broader view of what constitutes a troll.
Seems like a sizable minority want a lot of other people to shut up.
Guess whose comment I just had to pay 5 karma to respond to? Yours, Eliezer. Yours.
Don’t feed the troll.
All of a sudden you are one of my favorite humans. (I also dig your maths.)
So, if “score ≤ −3” means one is a troll…
A poorly-fitting word doesn’t mean much and shouldn’t be a topic of discussion.
Well then impress upon Eliezer how much of an idiot he is, Vladimir, instead of getting snippy with army1987. Eliezer is the one who’s using the word so much, Vladimir.
If it fits so poorly, why is it the definition used by the site?
So Quirrel points exchange for karma at a rate of 5 to 1.
In general, I’m opposed to automated karma modification. I’m pleased with my relatively high karma, and it’s because I respect this community and the karma score is the result of upvotes (and rather few downvotes) from human beings.
If we ever get ems (and possibly AIs) on LW, my default would be to give their up- and downvotes the same weight.