This is an example of why I support this kind of censorship. Lesswrong just isn’t capable of thinking about such things in a sane way anyhow.
The top comment in that thread demonstrates AnnaSalamon being either completely and utterly mindkilled or blatantly lying about simple epistemic facts for the purpose of public relations. I don’t want to see the (now) Executive Director of CFAR doing either of those things. And most others are similarly mindkilled, meaning that I just don’t expect any useful or sane discussion to occur on sensitive subjects like this.
(i.e., I consider this censorship about as intrusive as forbidding peanuts to someone with a peanut allergy.)
This seems an excessively hostile and presumptuous way to state that you disagree with Anna’s conclusion.
No it isn’t: the meaning of my words is clear, and they quite simply do not mean what you say I am trying to say.
The disagreement with the claims of the linked comment is obviously implied as a premise somewhere in the background, but the real reason I support this policy is that these topics produce mindkilled responses and near-obligatory dishonesty. I don’t want to see bullshit on lesswrong. The things Eliezer plans to censor consistently encourage people to speak bullshit. Therefore, I support the censorship. Not complicated.
You may claim that it is rude or otherwise deprecated-by-fubarobfusco, but if you say that my point is different from both what I intended and what the words could possibly mean, then you’re wrong.
No it isn’t: the meaning of my words is clear, and they quite simply do not mean what you say I am trying to say.
Well, taking your words seriously, you are claiming to be a Legilimens. Since you are not, maybe you are not as clear as you think you are.
It sure looks from what you wrote that you drew an inference from “Anna does not agree with me” to “Anna is running broken or disreputable inference rules, or is lying out of self-interest” without considering alternate hypotheses.
This also seems like an excessively hostile way of disagreeing! I think there’s some illusion of transparency going on.
I think
Sorry, I think you’ve misunderstood me. I don’t want to see bullshit on lesswrong. [Elaboration] The things Eliezer plans to censor consistently encourage people to speak bullshit. Therefore, I support the censorship.
Might have worked better
This also seems like an excessively hostile way of disagreeing!
It is unfortunate that the one word in your comment that you gave emphasis to is the one word that invalidates it (rather than leaving it a mere subjective disagreement). Since I have already been quite clear that I consider fubarobfusco’s comment to be both epistemically flawed and an unacceptable violation of lesswrong’s (or at the very least my) ideals, you ought to be able to predict that this would make me dismiss you as merely supporting toxic behavior. It means that the full weight of the grandparent comment applies to you, with additional emphasis given that you are persisting despite the redundant explanation.
Sorry
Wedrifid writing ‘Sorry’ in response to fubarobfusco’s behavior—or anything else involving untenable misrepresentations of the words of another—would have been disingenuous. Moreover, anyone who is remotely familiar with wedrifid would interpret him making that particular political move in that context as passive-aggressive dissembling… and would have been entirely correct in doing so.
Part of my point was that your words are not nearly as clear as you think they are. Merely telling people your words are clear doesn’t make people understand them.
I probably won’t respond further because this conversation quickly became frustrating for me.
The disagreement with the claims of the linked comment is obviously implied as a premise somewhere in the background, but the real reason I support this policy is that these topics produce mindkilled responses and near-obligatory dishonesty. I don’t want to see bullshit on lesswrong. The things Eliezer plans to censor consistently encourage people to speak bullshit. Therefore, I support the censorship. Not complicated.
There are a lot of topics about which most people have only bullshit to say. The solution is to downvote bullshit instead of censoring potentially important topics. If not enough people can detect bullshit, that’s an entirely different (and far worse) problem.
The top comment in that thread demonstrates AnnaSalamon being either completely and utterly mindkilled or blatantly lying about simple epistemic facts for the purpose of public relations. I don’t want to see the (now) Executive Director of CFAR doing either of those things.
Yes, and if the CFAR Executive Director is either mindkilled or willing to lie for PR, I want to know about it.
I think that a discussion in which only most people are mindkilled can still be a fairly productive one on these questions in the LW format. LW is actually one of the few places where you would get some people who aren’t mindkilled, so I think it is actually good that it achieves this much.
They seem fairly ancillary to LW as a place for improving instrumental or epistemic rationality, though. If you think testing the extreme cases of your models of your own decision-making is likely to result in practical improvements in your thinking, or just want to test yourself on difficult questions, these things seem like they might be a bit helpful, but I’m comfortable with them being censored as a side effect of a policy with useful effects.
I think that a discussion in which only most people are mindkilled can still be a fairly productive one on these questions in the LW format. LW is actually one of the few places where you would get some people who aren’t mindkilled, so I think it is actually good that it achieves this much.
Unfortunately, the non-mindkilled people would also have to be comfortable simply ignoring all the mindkilled people, so that they can talk among themselves and build the conversation toward improved understanding. That isn’t something I see often. More often the efforts of the sane people are squandered trying to beat back the tide of crazy.