I didn’t claim that damage control is pointless. I’m saying that in certain cases, people are criticized equally no matter what they do.
If someone chooses not to engage, they are hiding something. If they engage, they are giving the inquisitor what he wants. If they jest about their mistake, they are not remorseful. If they are somber, they are taking it too seriously and making things worse.
I read your links and...yikes...this new round of responses is pretty bad. I guess part of me feels bad for EY. It was a mistake. He’s human. The internet is ruthless…
Let me chime in briefly. The way EY handles this issue tends to be bad as a rule. This is a blind spot in his otherwise brilliant, well, everything.
A recent example: a few months ago a bunch of members of the official Less Wrong group on Facebook were banished and blocked from viewing it without receiving a single warning. Several among them, myself included, had one thing in common: participation in threads about the Slate article.
I myself didn’t care much about it. Participation in that group wasn’t a huge part of my Facebook life, although admittedly it was informative. The point is just that doing things like this, and continuing to do them, accretes a bad reputation around EY.
It really amazes me he has so much difficulty calibrating for the Streisand Effect.
That was part of a brief effort on my part to ban everyone making stupid comments within the LW Facebook Group, which I hadn’t actually realized existed but which I was informed was giving people terrible impressions. I deleted multiple posts and banned all commenters who I thought had made stupid comments on them; the “hur hur basilisk mockery” crowd was only one, but I think a perfectly legitimate target for this general sweep. It’s still a pretty low-quality group, but it’s a lot better than it was before I went through and banned everyone who I saw making more than one stupid comment.
Unfortunately Facebook doesn’t seem to have an easy “delete comment and ban commenter from Group” procedure for Android, which makes it harder to repeat this procedure because Android is most of where I check Facebook.
Going around and banning people without explaining to them why you banned them is, in general, a good way to make enemies.
The fallout of the basilisk incident should have taught you that censorship has costs.
The timing of the sweeping and the discussion about the basilisk article are also awfully coincidental.
What does “stupid” refer to in this context? Does it mean the comments were unintelligent? Not quite intelligent enough? Mean? Derailing discussion? I’m asking because there are certainly some criteria where the banning and deleting would leave a worse impression than the original comments, and I suspect the equilibrium may be surprisingly far in the direction of tolerating the more obnoxious comments. Especially since the banning and deleting is being done by someone who is more identified with LW than any of the commenters likely were.
Thanks for letting us know what happened. I’m one of the Facebook members who were banned, and I’ve spent these months wondering what I might have done wrong. May I at least know what was the stupid thing I said? And is there any atonement procedure to get back in the Facebook group?
So just to be clear: If I say “I won’t give in to the basilisk because Eliezer says I shouldn’t”, will that protect me from the basilisk? If not, what should I do?
If you believe Eliezer, then you may believe him that the basilisk has ~0 probability of occurring. (I should find a citation for that, but I read it just a few minutes ago, somewhere around the discussion of this xkcd comic.) So you are already protected from it, because it does not exist (not even in ways relevant to acausal trade).
More broadly, you should decide to take this approach: never give in to blackmail from anybody who knows that you have decided to take this approach. Then they have no incentive to blackmail you, and you are safe, even if they do exist! (I think that the strategy in this paragraph has been endorsed by Eliezer, but don’t trust me on that until you get a citation. Until then, you’ll have to reason it out for yourself.)
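The incentive argument above can be sketched as a toy model. This is my own illustration, not anything endorsed in the thread, and the payoff numbers and the assumption that the blackmailer can predict the victim's policy are assumptions I'm making for the example:

```python
# Toy model of the anti-blackmail precommitment strategy.
# Assumptions (mine, not from the thread):
#   - issuing a blackmail threat costs the blackmailer cost > 0,
#   - the threat pays off only if the victim gives in,
#   - the blackmailer can predict the victim's policy before acting.

def blackmailer_acts(victim_gives_in: bool,
                     payout: float = 10.0,
                     cost: float = 1.0) -> bool:
    """Blackmail only when the expected gain exceeds the cost of threatening."""
    expected_gain = payout if victim_gives_in else 0.0
    return expected_gain > cost

# A victim known to cave invites blackmail; a victim credibly
# committed to refusing removes the incentive entirely.
print(blackmailer_acts(victim_gives_in=True))   # blackmail is profitable
print(blackmailer_acts(victim_gives_in=False))  # blackmail is pointless
```

The whole force of the strategy is in the last assumption: it only works against a blackmailer who conditions on your policy, which is why the commitment has to be genuine and known.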
How does that work if they precommit to blackmail even when there is no incentive (which benefits them by making the blackmail more effective)?
By “the basilisk”, do you mean the infohazard, or do you mean the subject matter of the infohazard? For the former, whatever causes you to not worry about it protects you from it.
Not quite true. There are more than two relevant agents in the game. The behaviour of the other humans can hurt you (and potentially make it useful for their creation to hurt you).