Would this censor posts about robbing banks and then donating the proceeds to charity?
Depends on exactly how it was written, I think. “The paradigmatic criticism of utilitarianism has always been that we shouldn’t rob banks and donate the proceeds to charity”—sure, that’s not actually going to conceptually promote the crime and thereby make it more probable, or make LW look bad. “There’s this bank in Missouri that looks really easy to rob”—no.
Uncharitable reading: As long as taking utilitarianism seriously doesn’t lead to arguments for violating formalized 21st-century Western norms too much, it is OK to argue for taking utilitarianism seriously. You are, however, free to debunk how it supposedly leads to things considered unacceptable on the Berkeley campus in 2012, since it obviously can’t.
What about arguing for robbing banks in general?
The best way would be to construct the comment in a way that makes it least likely to seem bad when quoted outside of LW. For example, we could imagine an alternative universe with intelligent bunnies and carrot-banks. Would it be good if a bunny robbed the carrot-bank and donated the carrots to charity?
If someone copied this comment to a different forum, it would seem silly, but not threatening. It is more difficult to start a wave of moral panic over carrots and bunnies.
What about discussions of flaws in security systems in general? E.g., “Banks often have this specific flaw, which can be mitigated in this cost-ineffective manner.”
Or Really Extreme Altruism?
Note to all: Alicorn is referring to something else. Robbing banks may be extreme but it is not altruism.
Edited in a link.
This does indeed seem like something that’s covered by the new policy. It’s illegal. In the alternative where it’s a bad idea, talking about it has net negative expected utility. If it were for some reason a good idea, it would still be incredibly stupid to talk about it on the &^%$ing Internet. I shall mark it for deletion if the policy passes.
So you don’t see value in discussions like these? Thought experiments that give some insight into morality? Is the (probably negligible) effect of those posts on LW’s reputation really greater than the benefit of the discussion?
I think that post had a net negative effect on reality, and that diminishing the number of people who read it is a net positive. No, the conversation isn’t worth it.
Oh come on, are you invoking your basilisk-related logic here? How does it have a negative effect? Please don’t tell me it is because you think there will be more suicides in the world if more people read the post. And further, please don’t tell me that, if you thought that, you would think this leads to a net negative effect for the world. But please do answer me.
It has a net negative effect because people then go around saying (this post will be deleted after policy implementation), “Oh, look, LW is encouraging people to commit suicide and donate the money to them.” That is what actually happens. It is the only really significant consequence.
Now it’s true that, in general, any particular post may have only a small effect in this direction, because, for example, idiots repeatedly make up crap about how SIAI’s ideas should encourage violence against AI researchers, even though none of us have ever raised it even as a hypothetical, and so they themselves become the ones who conceptually promote violence. But it would be nice to have a clear policy in place that we can point to and say, “An issue like this would not be discussable on LW because we think that talking about violence against individuals can conceptually promote such violence, even in the form of hypotheticals, and that any such individuals would justly have a right to complain. We of course assume that you will continue to discuss violence against AI researchers on your own blog, since you care more about making us look bad and posturing your concern than about the fact that you, yourself, are the one who has actually invented, introduced, talked about, and given publicity to the idea of violence against AI researchers. But everyone else should be advised that any such ‘hypothetical’ would have been deleted from LW in accordance with our anti-discussing-hypothetical-violence-against-identifiable-actual-people policy.”
Idiots make up crap. You probably can’t change this. The more significant you are, the more crap idiots will make up about you. Idiots claim that Barack Obama is a Kenyan Muslim terrorist and that George Bush is mentally subnormal. Not because they have sufficient evidence of these propositions, but because gossip about Obama and Bush is thereby juicier than gossip about my neighbor Marty whom you’ve never heard of.
Idiots make up crap about projects, too. They say NASA faked the moon landing, vaccines cause autism, and that international food aid contains sterility drugs. It turns out that scurrilous rumors about NASA and the United Nations spread farther than scurrilous rumors about that funny-looking building in the town park which is totally a secret drug lab for the mayor.
How about treating the hypothetical as the stupidity it is? “Dude, beating up AI researchers wouldn’t work and you’re a jerk for posting it. There are a half dozen obvious reasons it wouldn’t work, if you take five minutes to think about it … and you’re a jerk for posting it because it’s stirring up shit for no good reason. Seriously, quit it. This is LW, not Conspiracy Hotline.”
And yet, when attempting to list them, the only one anyone from SIAI can seem to think of is bad PR.
The important difference is that in these cases the given idiot is less famous than the person they make crap about.
Imagine an alternative universe where Barack Obama is just an unknown guy, and some idiots for whatever reason start claiming that he is a Muslim terrorist. I can imagine an anonymous phone call to the police, a police action with some misunderstanding, resulting in quite a few negative utilons for Mr. Obama.
In our universe, Mr. Obama has the advantage of being more famous than any such possible accuser. However, SIAI does not have the same advantage.
Sounds like a fine reply on LW. I think it will be useful, on forums other than LW, to have an LW policy to point to.
This is where the rubber meets the road as far as whether we really mean it when we say “that which can be destroyed by the truth, should be.” If we accept this argument, then by “mere addition” of censorship rules, you eventually end up renaming SIAI “The Institute for Puppies and Unicorn Farts” and completely lying to the public about what it is you’re actually about, in order to benefit PR.
Well, are you?
True, but you have said things that seem to imply it. Seriously, you can’t go around saying “X” and “X->Y” and then object when people start attributing position “Y” to you.
No. To prove this, I shall shortly delete the post advocating it.
Point one: We never said X->Y. We said X, and a bunch of people too stupid to understand the fallacy of appeal to consequences said ‘X->violence, look what those bad people advocate’ as an attempted counterargument. Since no actual good can possibly come of discussing this on any set of assumptions, it would be nice to have the counter-counterargument, “Unlike this bad person here, we have a policy of deleting posts which claim Q->specific-violence even if the post claims not to believe in Q because the identifiable target would have a reasonable complaint of being threatened”.
I would find this counter-counter-argument extremely uncompelling if made by an opponent. Suppose you read the blog of someone who made statements which could be interpreted as vaguely anti-Semitic, but it could go either way. Now suppose someone in the comments of that blog post replied by saying “Yeah, you’re totally right, we should kill all the Jews!”.
Which type of response from the blog owner do you think would be more likely to convince you that he was not actually an anti-Semite: 1) deleting the comment, covering up its existence, and never speaking of it, or 2) leaving the comment in place and refuting it, carefully laying out why the commenter is wrong?
I know that I for one would find the latter response much more convincing of the author’s benign intent.
Note: in order to post this comment, despite it being, IMHO, entirely on-point and important to the conversation, I had to take a 5-point karma hit… due to the LAST poorly thought out, dictatorially imposed, consensus-defying policy change.
If someone really wants to get some cheap internet points for accusing the author of antisemitism, either option can be used. In both cases, the fact that the comment was written on the blog would be interpreted as evidence of the blog somehow evoking this kind of comment. Both deleting and refuting would be interpreted as “the author pretends to disagree, for obvious PR reasons, but he cannot fool us.”
The advantage of deleting the comment is that a potential accuser has a smaller chance of noticing it (well, unless some readers make “why did this specific comment disappear?” their topic of the month), and they cannot support their attacks with hyperlinks and screenshots. Also, if someone puts specific keywords into Google, they will not get that blog among the results.
I wasn’t thinking of SIAI as the charity.
Regardless of your intentions, I know of one person who somewhat seriously considered that course of action as a result of the post in question. (The individual in question has been talked out of it in the short term, by way of ‘the negative publicity would hurt more than the money would help’, but my impression is that the chance that they’ll try something like that has still increased, probably permanently.)
[sincerity mode]So… is that a good thing, or a bad thing?[/sincerity mode]
In many circumstances, sacrificing one’s own life in order to save others is considered a good thing, and people who do it are called “heroes”. A famous example is the story of railroad engineer Casey Jones, who, after realizing that a collision with a stalled train was inevitable, chose to remain in the engine and slow his own train as much as possible, saving the rest of the passengers and crew at the cost of his own life.
“Really Extreme Altruism” (with the money going to one of GiveWell’s top charities) isn’t as dramatic as a “typical” real-life Heroic Sacrifice, but the outcome is the same: one person dies, a lot of other people live who would have otherwise died. It’s the manner of the sacrifice (and the distributed, distant nature of the benefit) that makes it far more disturbing.
There should be a warning on the donate page: “For reasons of public relations, please refrain from donating and minimize your association with us if you are or may in the future become suicidal.”
Of course, if I were, not being able to contribute would be one less reason to stick around. I could shop for some less controversial group to support (possibly one that indirectly helped SIAI/MIRI), but it wouldn’t be quite as motivating or as obviously sufficient to offset the cost of living.
This intention of yours is not transparent. Plus, they don’t care.
I edited the original post to link to GiveWell’s top charities list.
I thought I posted this comment last night, but it seems I didn’t (and now I have to pay karma to post it). Aren’t we just encouraging belief bias this way? That carries additional negative utility on top of the lost positive utility from the discussion, plus the utility lost because people see us as a heavily censored community and form yet another type of negative opinion of us.
Idiots make up crap about all kinds of things, not just violence or other illegal acts. Ideas outside societal norms often attract bad PR. If your primary goal here is to improve PR, you would have to censor posts by explicit PR criteria. The proposed criteria of discussion of violence or law-breaking is not optimized for this goal. So, what is it you really want?
Discussion of violence is something that (you claim) has no positive value, even ignoring PR. So it’s easy to decide to censor it. But have you really considered what else to censor according to your goals? Violence clearly came up because of the now-deleted post; it was an available example. But you shouldn’t just react to it and ignore other things, if your goal is not to prevent discussion of violence or crime in itself.
As far as I can tell, Really Extreme Altruism actually is legal.
What about the possibility that someone who thought it was a good idea would change their mind after talking about it?
This seems an order of magnitude less likely than somebody who wouldn’t naturally think of the dumb idea seeing the dumb idea.
Therefore censor uncommon bad ideas generally?
This is an example of why I support this kind of censorship. Lesswrong just isn’t capable of thinking about such things in a sane way anyhow.
The top comment in that thread demonstrates AnnaSalamon being either completely and utterly mindkilled or blatantly lying about simple epistemic facts for the purpose of public relations. I don’t want to see the (now) Executive Director of CFAR doing either of those things. And most others are similarly mindkilled, meaning that I just don’t expect any useful or sane discussion to occur on sensitive subjects like this.
(i.e. I consider this censorship about as intrusive as forbidding peanuts to someone with a peanut allergy.)
This seems an excessively hostile and presumptuous way to state that you disagree with Anna’s conclusion.
No it isn’t; the meaning of my words is clear, and they quite simply do not mean what you say I am trying to say.
The disagreement with the claims of the linked comment is obviously implied as a premise somewhere in the background, but the reason I support this policy really is that it produces mindkilled responses and near-obligatory dishonesty. I don’t want to see bullshit on lesswrong. The things Eliezer plans to censor consistently encourage people to speak bullshit. Therefore, I support the censorship. Not complicated.
You may claim that it is rude or otherwise deprecated-by-fubarobfusco, but if you say that my point is different from both what I intended and what the words could possibly mean, then you’re wrong.
Well, taking your words seriously, you are claiming to be a Legilimens. Since you are not, maybe you are not as clear as you think you are.
It sure looks from what you wrote that you drew an inference from “Anna does not agree with me” to “Anna is running broken or disreputable inference rules, or is lying out of self-interest” without considering alternate hypotheses.
This also seems like an excessively hostile way of disagreeing! I think there’s some illusion of transparency going on.
I think

“Sorry, I think you’ve misunderstood me. I don’t want to see bullshit on lesswrong. [Elaboration] The things Eliezer plans to censor consistently encourage people to speak bullshit. Therefore, I support the censorship.”

might have worked better.
It is unfortunate that the one word in your comment that you gave emphasis to is the one word that invalidates it (rather than making it a mere subjective disagreement). Since I have already been quite clear that I consider fubarobfusco’s comment to be both epistemically flawed and an unacceptable violation of lesswrong’s (or at the very least my) ideals, you ought to be able to predict that this would make me dismiss you as merely supporting toxic behavior. It means that the full weight of the grandparent comment applies to you, with additional emphasis given that you are persisting despite the redundant explanation.
Wedrifid writing ‘Sorry’ in response to fubarobfusco’s behavior—or anything else involving untenable misrepresentations of the words of another—would have been disingenuous. Moreover anyone who is remotely familiar with wedrifid would interpret him making that particular political move in that context as passive-aggressive dissembling… and would have been entirely correct in doing so.
Part of my point was that your words are not nearly as clear as you think they are. Merely telling people your words are clear doesn’t make people understand them.
I probably won’t respond further because this conversation quickly became frustrating for me.
There are a lot of topics about which most people have only bullshit to say. The solution is to downvote bullshit instead of censoring potentially important topics. If not enough people can detect bullshit that’s an entirely different (and far worse) problem.
Yes and if the CFAR Executive Director is either mindkilled or willing to lie for PR, I want to know about it.
I think that a discussion in which only most people are mindkilled can still be a fairly productive one on these questions in the LW format. LW is actually one of the few places where you would get some people who aren’t mindkilled, so I think it is actually good that it achieves this much.
They seem fairly ancillary to LW as a place for improving instrumental or epistemic rationality, though. If you think testing the extreme cases of your models of your own decision-making is likely to result in practical improvements in your thinking, or you just want to test yourself on difficult questions, these things seem like they might be a bit helpful, but I’m comfortable with them being censored as a side effect of a policy with useful effects.
Unfortunately, the non-mindkilled people would also have to be comfortable simply ignoring all the mindkilled people so that they can talk among themselves and build the conversation toward improved understanding. That isn’t something I see often. More often the efforts of the sane people are squandered trying to beat back the tide of crazy.