There could still be a problem if comment threads were sprinkled with huge contiguous swathes of hidden comments, but does that actually happen?
It doesn’t happen often, and so doesn’t seem to be a particularly serious problem, but when it does it’s really bad and the growth of such subthreads is very hard to stop.
I find it one of the impressive things about LessWrong that even obviously aggressive and stupid stuff tends to be replied to in a polite and informative manner.
See the hidden comments to this post for an example. Just one user caused the damage directly, but it wouldn’t have happened to the extent it did without the polite and informative replies of others that fueled the conversation. Good contributions to bad conversations have negative net consequences.
So there is a dark part of the LW archives, looking upon which might very well destroy your soul. Fortunately, you won’t accidentally see it—you have to consciously choose to unhide the top-level comment. I suppose the damage is that people are unreasonably drawn to that kind of thing, and they are also unreasonably drawn to replying to stupid stuff even when it’s obviously hopeless (‘someone is wrong on the internet’ syndrome), so we have to protect them from wasting their time. I guess I dislike paternalism enough that such an argument doesn’t convince me (and, less seriously, if someone feels inclined to waste time while they’re browsing the Internet, then they are already doomed anyway).
I actively disagree. You don’t have to read those threads, but a polite and measured response to dumb ideas is one of the best ways to get yourself out of those ideas. We literally have dumb question threads for exactly that purpose. I also think it’s good to encourage people to be patient and explain things. What “damage” was caused? A couple hundred posts about something that you don’t have to read, but which could very well be useful to other people.
You want to take a community of people that try to help others understand and instead silence all conversation along lines you disapprove of.
You don’t have to read those threads, but a polite and measured response to dumb ideas is one of the best ways to get yourself out of those ideas.
In the example I gave, this clearly didn’t apply.
We literally have dumb question threads for exactly that purpose.
There are important details that distinguish the conversations happening in stupid questions threads. These details also cause those threads to not be downvoted.
You want to take a community of people that try to help others understand and instead silence all conversation along lines you disapprove of.
You are throwing out relevant details again and distorting other details in the direction of your argument. The qualifier “all conversation” is inaccurate, for example. Alternatively, if disapproval is taken to be referring to a (value assignment) decision (rather than unreflective emotional response, say), it’s tautological that I’d be trying to get rid of things I disapprove of.
And you are throwing out relevant details whenever you punish people for responding to downvoted comments.
I don’t understand this point. Not punishing people in those cases would use the same information, so the amount of available information doesn’t favor either choice.
Individual downvotes for bad posts are sensitive to details about those posts. Blanket downvotes for replies to negative posts are not.
(Edited the grandparent. My point is that the lack of blanket downvotes for replies to negative posts is equally insensitive to details about those posts. This consideration doesn’t help with the question of punish vs. not-punish.)
What are these negative net consequences? I enjoyed reading the good replies to that conversation. If you think the problem is the volume of conversation, then you have to explain why shutting down long “bad” conversations is worth losing shorter elegant responses to bad points.
LessWrong puts a lot of stock in trying to fight human biases, so it seems to me that saying “Don’t do that!” with negative karma, and then rewarding people for explaining why not, is exactly what we should be doing.
(It’s much better now that all of the bad comments have been removed, either by the author or by moderators, so you are not looking at the problem as it presents itself in the wild, but you can imagine it based on the number of downvotes.)
I don’t think LW should be used for arguing with people who make too many errors. It’s a completely different kind of activity from trying to obtain a better understanding of what constitutes good thinking.
Why is it a problem if they’re hidden from most users? It doesn’t put off newcomers, and people can avoid them. Are you concerned about the time of LW users being wasted?