I don’t agree that my bio stating I’m autistic[1] is strong/relevant* evidence that I assume the rest of the world is like me or like LessWrong users; I’m very aware that this is not the case. I feel a lot of uncertainty about what happens inside the minds of neurotypical people (and most others), but I know they’re very different in various specific ways, and I don’t think the assumption you inferred is one I make; my shortform directly implied that neurotypicals engage in politics in a really irrational way, are influenceable by the social pressures you (and I) mentioned, etc.
*Technically, being a LessWrong user is some Bayesian evidence that one makes that assumption, if that’s all you know about them, so I added the hedge “strong/relevant”, i.e., evidence strong enough to reasonably cause one to write “I think you are making [clearly-wrong assumption x]” instead of using more uncertain phrasing.
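(As a toy illustration of “some evidence, but not strong”: with entirely made-up numbers, a likelihood ratio of 2 moves a 10% prior only to about 18%, nowhere near the confidence needed to flatly assert that someone holds the assumption. The probabilities in the sketch below are hypothetical, chosen just to show the shape of the update, not claims about actual LessWrong users.)

```python
# Toy Bayes update with made-up numbers, illustrating "some evidence"
# vs. evidence strong enough to assert "I think you are making [x]".
prior = 0.10              # hypothetical P(person makes that assumption)
p_e_given_yes = 0.02      # hypothetical P(is a LW user | makes it)
p_e_given_no = 0.01       # hypothetical P(is a LW user | doesn't)

# Bayes' rule: P(yes | LW user)
numerator = p_e_given_yes * prior
posterior = numerator / (numerator + p_e_given_no * (1 - prior))
print(f"prior {prior:.0%} -> posterior {posterior:.0%}")  # 10% -> 18%
```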
I even more strongly oppose a norm under which other users’ feeling pressured to respond has a meaningful impact on whether a comment is proper or not.
I agree that there are cases where feeling pressured to respond is acceptable. E.g., if someone writes a counterargument which one thinks misunderstands their position, they might feel some internal pressure to respond and correct this; I think that’s okay, or at least unavoidable.
I don’t know how to define a general rule for determining when making-someone-feel-pressured is okay or not, but this seemed like a case where it was not okay: in my view, the pressure was caused by an unfoundedly confident expression of belief about my mind.
If you internally believe you had enough evidence to infer what you wrote at a level of confidence warranting only an ‘I think’ preface, then perhaps it should not be against LW norms; I don’t have strong opinions on what site norms should be, or on how norms should differ when the subject is the internal mind of another user.
More on norms: the assertive writing style of your two comments here also seems possibly norm-violating.

Edit: I’m flagging this for moderator review.

[1] The “~” you quoted is just a separator from the previous words, in case you thought it meant something else.
As a moderator: I do think sunwillrise was being a bit obnoxious here. The norms they used would be fine for frontpage LW posts, but shortform is trying to do something more casual and more welcoming of early-stage ideas, and I think this kind of psychologizing has reasonably strong chilling effects on people feeling comfortable with that.
I don’t think it’s a huge deal; my best guess is I would just ask sunwillrise to comment less on quila’s stuff in particular, and, if it becomes a recurring theme, to maybe more generally try to change how they comment on shortforms.
I do think the issue here is kind of subtle. I definitely notice an immune reaction to sunwillrise’s original comment, but I can’t fully put into words why I have that reaction, and I would also have it if the comment were made on a frontpage post (though I would just be more tolerant of it there).
> I think the fact that you don’t expect this to happen is more due to you improperly generalizing from the community of LW-attracted people (including yourself), whose average psychological make-up appears to me to be importantly different from that of the broader public.
Like, I think my key issue here is that sunwillrise just started a whole new topic that quila had expressed no interest in talking about: “what are my biases on this topic, and if I am wrong, what would be the reason I am wrong?”. Which, like, IDK, is a fine topic, but it is just a very different topic that doesn’t really have anything to do with the object level. Whether quila is biased here makes no difference to the question of whether this policy-esque proposal would be a good idea, and I think quila (and most other readers) are usually more interested in discussing that than meta-level bias stuff.
There is also a separate thing, where making this argument in some sense assumes that you are right, which I think is a fine thing to do, but it does often make good discussion harder. For comments, I think it’s usually best to focus on the disagreement, and not to invoke random other inferences about what would be true about the world if you are right. There can be a place for that, especially if it helps elucidate your underlying world model, but I think in this case little of that happened.