(Tangentially) If users are allowed to ban other users from commenting on their posts, how can I tell whether the lack of criticism in the comments of some post means that nobody wanted to criticize it (which is a very useful signal that I would want to update on), or that the author has banned some or all of their most prominent/frequent critics? In addition, I think many users may be misled by a lack of criticism if they’re simply not aware of the second possibility or have forgotten it. (I think I knew it, but it hadn’t entered my conscious awareness for a while, until I read this post today.)
(Assuming there’s not a good answer to the above concerns) I think I would prefer to change this feature/rule to something like allowing the author of a post to “hide” commenters or individual comments, which means that those comments are collapsed by default (and marked as “hidden by the post author”) but can be individually expanded, and each user can set an option to always expand those comments for themselves.
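To make the proposed behavior concrete, here is a minimal TypeScript sketch of the display rule, assuming hypothetical types and field names (Comment, ViewerSettings, hiddenByPostAuthor, alwaysExpandAuthorHidden) rather than anything in the actual LessWrong codebase:

```typescript
// Hypothetical types; none of this is the actual LessWrong/ForumMagnum API.
interface Comment {
  id: string;
  hiddenByPostAuthor: boolean; // set when the post author "hides" the comment or its author
}

interface ViewerSettings {
  alwaysExpandAuthorHidden: boolean; // per-user preference to always expand such comments
}

// A comment hidden by the post author renders collapsed by default, labeled
// "hidden by the post author", unless the viewer has opted to always expand
// these comments or has already expanded this one by hand.
function shouldCollapse(
  comment: Comment,
  viewer: ViewerSettings,
  manuallyExpanded: Set<string>
): boolean {
  if (!comment.hiddenByPostAuthor) return false;
  if (viewer.alwaysExpandAuthorHidden) return false;
  return !manuallyExpanded.has(comment.id);
}
```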
Maybe a middle ground would be to give authors double-strong downvote power for comments on their posts. A comment with low enough karma is already hidden by default, and repeated strong downvotes without further response would tend to chill rather than inflame the ensuing discussion, or at least push the bulk of it away from the author’s arena, without silencing critics completely.
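As a rough sketch of how that could interact with the existing collapse-by-karma behavior, assuming placeholder numbers (the real strong-downvote weight varies with the voter’s karma, and the real collapse threshold is a site setting, so all constants here are illustrative):

```typescript
// Illustrative numbers only; actual LessWrong vote weights and the collapse
// threshold depend on user karma and site configuration.
const STRONG_DOWNVOTE_WEIGHT = 4; // assumed weight of one strong downvote
const AUTHOR_MULTIPLIER = 2;      // proposed "double-strong" power on one's own posts
const COLLAPSE_THRESHOLD = -5;    // assumed karma below which a comment is collapsed

// Karma of a comment after the post author applies their double-strong downvote.
function applyAuthorStrongDownvote(commentKarma: number): number {
  return commentKarma - STRONG_DOWNVOTE_WEIGHT * AUTHOR_MULTIPLIER;
}

// Whether a comment is collapsed by default purely on karma grounds.
function isCollapsedByKarma(commentKarma: number): boolean {
  return commentKarma < COLLAPSE_THRESHOLD;
}
```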
I think a problem that my proposal tries to solve, and this one doesn’t, is that some authors seem easily triggered by some commenters, and apparently would prefer not to see their comments at all. (Personally if I was running a discussion site I might not try so hard to accommodate such authors, but apparently they include some authors that the LW team really wants to keep or attract.)
To me it seems unlikely that there’d be enough banning to prevent criticism from surfacing. Skimming through https://www.lesswrong.com/moderation, the number of bans seems to be pretty small. And if there is an important critique to be made, I’d expect it to be something that more than the few banned users would think of and decide to post a comment on.
This may be true in some cases, but not all. My experience here comes from cryptography, where it often takes hundreds of person-hours to find a flaw in a new idea (a flaw which can sometimes be completely fatal), and from UDT, where I found a couple of issues in my own initial idea only after several months/years of thinking (hence going to UDT1.1 and UDT2). I think if you ban a few users who might have the highest motivation to scrutinize your idea/post closely, you could easily reduce the probability (at any given time) of anyone finding an important flaw by a lot.
Another reason for my concern is that the bans directly disincentivize other critics, and people who are willing to ban their critics are often unpleasant for critics to interact with in other ways, further disincentivizing critiques. I have this impression of Duncan myself, which may explain why I’ve rarely commented on any of his posts. I seem to remember once trying to talk him out of (what seemed to me like) overreacting to a critique and banning the critic on Facebook, having an unpleasant experience (but not getting banned), and then deciding to avoid interacting with him in the future. However, I can’t find the actual interaction on FB, so I’m not 100% sure this happened. FB has terrible search, which probably explains it, but maybe I hallucinated this, or confused him with someone else, or did it under a pseudonym.
Hm, interesting points.
My impression is that there are some domains where this is true (where banning a few motivated critics substantially reduces the chance of an important flaw being found), but that those are the exception rather than the rule. However, this impression is just based off of, err, vaguely querying my brain? I’m not super confident in it. And your claim is one that I think is “important if true”. So then, it does seem worth an investigation, maybe enumerating through different domains and asking “Is it true here? Is it true here?”.
One thing I’d like to point out is that, LessWrong being a community, something very similar is already happening. Only a certain type of person comes to LessWrong (this is true of all communities to some extent; they attract a subset of people). It’s not that “outsiders” are explicitly banned; they just don’t join and thus don’t comment. So then, effectively, ideas presented here currently aren’t available to “outsiders” for critique.
I think there is a trade-off at play: the more you make ideas available to “outsiders”, the lower the chance that something gets overlooked, but opening things up also comes with some sort of friction.
(Sorry if this doesn’t make sense. I feel like I didn’t articulate it very well but couldn’t easily think of a better way to say it.)
Good point about the bans directly disincentivizing other critics. I think that’s true and something to factor in.
While the current number of bans is pretty small, I think this is in part because lots of users don’t know about the option to ban people from their posts. (See here, for example.)
That makes sense. Still, even if it were more well known, I wouldn’t expect the number of bans to reach the point where it is causing real problems with respect to criticism surfacing.
One solution is to limit the number of banned users to a small fraction of overall commenters. I’ve written 297 posts so far and have banned only 3 users from commenting on them. (I did not ban Duncan or Said.)
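A hypothetical version of that limit as a simple check; the 5% fraction and the floor of 3 are arbitrary illustration values, not a proposal for specific numbers:

```typescript
// Hypothetical policy check; fraction and floor are illustrative assumptions.
function canBanAnotherUser(
  currentlyBannedCount: number,
  uniqueCommenterCount: number,
  maxFraction = 0.05,
  minimumAllowance = 3
): boolean {
  const limit = Math.max(minimumAllowance, Math.floor(uniqueCommenterCount * maxFraction));
  return currentlyBannedCount < limit;
}
```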
My highest-quality criticism comes from users who I have never even considered banning. Their comments are consistently well-reasoned and factually correct.
What exactly does “nobody wanted to criticize it” signal that you don’t get from high/low karma votes?
Some UI thoughts as I think about this:
Right now, you see total karma for posts and comments, and total vote count, but not the number of upvotes/downvotes. So you can’t actually tell when something is controversial.
One reason for this is that we (once) briefly tried turning it on, and immediately found it made the site much more stressful and anxiety-inducing. Getting a single downvote felt like “something is WRONG!”, which didn’t feel productive or useful. Another reason is that it can de-anonymize strong votes, since a strong voter’s voting power is a less common number.
But, an idea I just had was that maybe we should expose that sort of information once a post becomes popular enough. Like maybe over 75 karma. [Better idea: once a post has a certain number of votes. Maybe at least 25]. At that point you have more of a sense of the overall karma distribution, so individual votes feel less weighty, and also hopefully it’s harder to infer individual voters.

Tagging @jp who might be interested.
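A minimal sketch of that gating rule, assuming hypothetical thresholds taken from the numbers above (neither is an actual site setting):

```typescript
// Hypothetical gating rule: only show the upvote/downvote breakdown once a
// post has enough votes (or karma) that a single vote is neither alarming
// nor easy to trace back to one voter. Thresholds are illustrative.
interface PostVoteStats {
  karma: number;
  voteCount: number;
  upvotes: number;
  downvotes: number;
}

const MIN_VOTES_TO_SHOW_SPLIT = 25; // the "at least 25 votes" idea
const MIN_KARMA_TO_SHOW_SPLIT = 75; // the "over 75 karma" idea

function voteSplitVisible(stats: PostVoteStats): boolean {
  return stats.voteCount >= MIN_VOTES_TO_SHOW_SPLIT || stats.karma > MIN_KARMA_TO_SHOW_SPLIT;
}
```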
I support exposing the number of upvotes/downvotes. (I wrote a userscript for GW to always show the total number of votes, which allows me to infer this somewhat.) However, that doesn’t address the bulk of my concerns, which I’ve laid out in more detail in this comment. In connection with karma, I’ve observed that sometimes a post is initially upvoted a lot, until someone posts a good critique, which then causes the karma of the post to plummet. This makes me think that the karma could be very misleading (even with upvotes/downvotes exposed) if the critic had been banned or disincentivized from commenting.
We’ve been thinking about this for the EA Forum. I endorse Raemon’s thoughts here, I think, but I know I can’t pass the ITT of the more-transparency side.