I’m currently trying to come to a more comprehensive and principled view on how to think about this issue in relation to others. (i.e. this post is pointing at avoiding one particular rock, and there are other hard places you might hit instead)
One thing I do want to note is that while I think you’re pointing at a real phenomenon, I don’t actually think the two examples you gave for my post are quite pointing at the right thing. I had asked Lauren specifically to crosspost her comment from Facebook (where she’d been replying to a shorter version of the post, which I’d deliberately abridged to hit the most important points). And meanwhile, Qiaochu is my roommate and we’ve had a lot of extended discussions about the overall issue.
I consider both of their criticisms, while low-ish effort, to be engaging seriously with the question I was trying to orient around.
The failure mode I’m worried about is something more like “there’s a particular risk of low-effort criticism missing the point, or being wrong, or dragging the author into a conversation that isn’t worth their time.” I don’t have a good principled distinction between that failure mode and “criticism that is actually correctly noticing that the author has made some basic mistakes that invalidate the rest of their post.”
One thing I do want to note is that while I think you’re pointing at a real phenomenon, I don’t actually think the two examples you gave for my post are quite pointing at the right thing.
This itself serves as an interesting example. Even if a particular author isn’t bothered by certain comments (due to an existing relationship, being unusually stoic, etc.), it is still possible for others to perceive those comments as aversive/hostile/negative.
This is a feature of reality worth noticing, even before we determine what the correct response to it is. It suggests you could have a world where many LessWrong members discuss in a way they all enjoy, yet which appears hostile and uncivil to the outside world, whose members assume the participants are carrying on despite being upset. That plausibly has bad consequences for getting new people to join. You might expect exactly this if a Nurture-native person were exposed to a Combat culture.
If that’s happening a lot, you might do any of the following:
1) shift your subculture to resemble the dominant outside one
2) invest in “cultural onboarding” so that new people learn that others aren’t unhappy with the comments they’re receiving (and of course we want this to actually be true)
3) create different spaces: ones for new people who are still acculturating, and others for the veterans who know that a blunt critical remark is a sign of respect.
The last one mirrors how most interpersonal relationships progress. At first you invest heavily in politeness to signal your positive intent and friendliness; progressively, as the prior of friendliness is established, fewer overt signals are required and politeness requirements drop; eventually, the prior of friendliness is so high that it’s possible to engage in countersignalling behaviors.
A fear I have is that veteran members of blunt and critical spaces (sometimes LW) have learnt that critical comments don’t have much interpersonal significance and pose little reputational or emotional risk to them. That might be the rational [1] prior from their perspective, given their experience. A new member of the space who brings priors from the outside world may just as rationally infer hostility and attack when they read a casually and bluntly written critical comment. Rather than reading it as someone engaging positively with their post and wanting to discuss, they simply feel slighted, unwelcome, and discouraged. This picture holds even for someone who is not usually sensitive or defensive about criticism they know is well-intentioned. The perception of attack can be the result of appropriate priors about the significance [2] of different actions.
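To make that concrete, here is a minimal sketch of the update involved, with entirely made-up numbers. The point is only that the same blunt comment can rationally be read very differently, depending on the priors and likelihoods each reader has learned from their own environment:

```python
def p_hostile_given_blunt(prior_hostile, p_blunt_if_hostile, p_blunt_if_friendly):
    """Posterior probability that a blunt critical comment signals hostility."""
    joint_hostile = p_blunt_if_hostile * prior_hostile
    joint_friendly = p_blunt_if_friendly * (1 - prior_hostile)
    return joint_hostile / (joint_hostile + joint_friendly)

# Veteran: experience says friendly regulars are often blunt, and hostility is rare here.
print(p_hostile_given_blunt(prior_hostile=0.05, p_blunt_if_hostile=0.9, p_blunt_if_friendly=0.6))
# ~0.07 -- the blunt comment barely moves them.

# Newcomer: outside-world experience says friendly critics usually soften their wording.
print(p_hostile_given_blunt(prior_hostile=0.20, p_blunt_if_hostile=0.9, p_blunt_if_friendly=0.1))
# ~0.69 -- the same comment reads as a likely attack.
```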
If this picture is correct and we want to recruit new people to LessWrong, we need to figure out some way of ensuring that people know they’re being productively engaged with.
--------------------
Coming back to this post: here there was private information which shifted what state of affairs the cited comments were Bayesian evidence for. Most people wouldn’t know that Raemon had requested Unreal copy her comment over from FB (where he’d posted only an abridged version of the post), or that Raemon has been housemates with Qiaochu for years. In other words, Raemon has strongly established relationships with those commenters and knows them to be friendly to him, but that’s not universal knowledge. The OP’s assessment might be very reasonable if you lacked that private info (knowing it myself already, it’s hard for me to simulate not knowing it). This is also info it’s not at all reasonable to expect all readers of the site to know.
I think it’s very unfortunate if someone incorrectly thinks someone else is being attacked or disincentivized from contributing. It’s worth thinking about how one might avoid it. There are obviously bad solutions, but that doesn’t mean there aren’t better ones than just ignoring the problem.
--------------------
[1] Rational in the sense of reaching the appropriate conclusion given the data available.
[2] By significance I mean what it is Bayesian evidence for.
One thing I do want to note is that while I think you’re pointing at a real phenomenon, I don’t actually think the two examples you gave for my post are quite pointing at the right thing.
I want to note that, this fact having been pointed out, it is now incumbent upon anyone who thinks that the OP describes a real thing (whether OP himself, or you, or anyone else who agrees) to come up with new examples (see this comment for details).
I had asked Lauren specifically to crosspost her comment from Facebook (where she’d been replying to a shorter version of the post, which I’d deliberately abridged to hit the most important points). And meanwhile, Qiaochu is my roommate and we’ve had a lot of extended discussions about the overall issue.
I think that it would, generally speaking, help if these sorts of facts about a comment’s provenance were noted explicitly. (Don’t get me wrong—there’s nothing at all wrong with this sort of thing! But knowing context like this is just very useful for avoiding confusion.)
The failure mode I’m worried about is something more like “there’s a particular risk of low-effort criticism missing the point, or being wrong, or dragging the author into a conversation that isn’t worth their time.” I don’t have a good principled distinction between that failure mode and “criticism that is actually correctly noticing that the author has made some basic mistakes that invalidate the rest of their post.”
Correct me if I’m wrong, but it seems to me that the karma system is both intended to, and actually (mostly) does, solve this problem.
To wit: if I write a post, and someone posts a low-effort comment which I think is simply a misunderstanding, or borne of not reading what I wrote, etc., I am free to simply not engage with it. No one can force me to reply, after all! But now suppose that just such a comment is upvoted, and has a high karma score. Now I stop and read it again and think about it more carefully—after all, a number of my peers among the Less Wrong commentariat seem to consider it worthy of their upvotes, so perhaps I’ve reflexively sorted it into the “low-effort nonsense” bin unjustly. And, even if I still end up with the same conclusion about the comment’s value, still I might (indeed, should) post at least a short reply—even if only to say “I don’t see how this isn’t addressed in the post; could you (or someone else) elaborate on this criticism?” (or something to that effect).
My current best guess [epistemic status: likely to change my mind about this] is something like “politeness norms that are tailored for epistemic culture.”
The point of politeness norms is to provide a simple API that lets people interact reliably, without having to fully trust each other, and without spending lots of cognitive overhead on each interaction.
Default politeness norms aren’t really optimized for epistemic culture. So nerds (correctly) notice “hmm, if we limit ourselves to default politeness norms we seem epistemically screwed.”

But the naive response to that causes a bunch of problems, a la why our kind can’t cooperate and defecting by accident (hey, look, another post by Lionhearted).
I think it’s important that people be encouraged to write posts (both low and high effort) without stressing too much – as part of a long-term strategy for helping them grow and the site flourish. It’s also important that people be encouraged to point out errors in the post (without stressing too much), as part of a long-term strategy that includes “make sure the people writing posts get feedback and grow, and correct object-level errors they may have been making so readers don’t come away from the post with misinformation.”
The failure mode is “people writing low-effort criticism that turns out to be wrong, or missing the point, etc.” (And sometimes this is due to fairly deep differences in frame.)
I think something like “epistemic status tags for both posts and comments” may help here.
A post might open with “Epistemic status: exploratory, low confidence” which sets the tone for what you should expect from it.
A comment might open with “Didn’t read the post in full, but a quick first impression: this point seems wrong” or “Not sure if I quite understood your point, but this seemed incorrect” or something. [Note: not necessarily endorsing those particular phrasings.] The phrasing hedges slightly against the downside of misinterpretation, frame differences, or the critic being the one who’s wrong.
I think “accidentally suppressing criticism” is a real concern, but it seems like there should be fairly standardizable conversation tools that make it easier to write criticism without as much downside risk.