I applaud your thorough and even-handed wiki entry. In particular, this comment:
“One take-away is that someone in possession of a serious information hazard should exercise caution in visibly censoring or suppressing it (cf. the Streisand effect).”
Censorship, particularly of the heavy-handed variety displayed in this case, has a lower probability of success in an environment like the Internet. Many people dislike being censored or witnessing censorship, the censored poster could post someplace else, and another person might conceive the same idea in an independent venue.
And if censorship cannot succeed, then the implicit attempt to censor the line of thought will also fail. That being the case, would-be censors would be better served by either proceeding “as though no such hazard exists”, as you say, or by engaging the line of inquiry and developing a defense. I’d suggest that the latter, actually solving rather than suppressing the problem, is in general likely to prove more successful in the long run.
Examples of censorship failing are easy to see. But if censorship works, you will never hear about it. So how do we know censorship fails most of the time? Maybe it works 99% of the time, and this is just the rare 1% it doesn’t.
On reddit, comments are removed silently. The user isn’t informed that their comment has been removed, and if they view it while logged in, it still shows up normally for them. Bans are handled the same way.
This actually works fine. Most users don’t notice it and so never complain about it. But when moderation is made more visible, all hell breaks loose. You get tons of angry PMs and stuff.
LessWrong is based on reddit’s code. Presumably moderation here works the same way. If moderators had been removing all my comments about a certain subject, I would have no idea. And neither would anyone else. It’s only when big things are removed that people notice, like an entire post that lots of people had already seen.
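To make that concrete, here is a minimal sketch of how silent removal can work; the data model and names are my own illustration, not reddit’s or LessWrong’s actual code:

```python
# Minimal sketch of silent ("shadow") moderation: a removed comment stays
# visible to its own author, so they see nothing unusual, while everyone
# else simply never sees it. All names here are hypothetical.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Comment:
    author: str
    body: str
    removed: bool = False             # set silently by a moderator
    author_shadowbanned: bool = False

def visible_to(comment: Comment, viewer: Optional[str]) -> bool:
    if comment.removed or comment.author_shadowbanned:
        return viewer == comment.author
    return True

c = Comment(author="alice", body="a comment about a certain subject", removed=True)
print(visible_to(c, "alice"))  # True: the author still sees it
print(visible_to(c, "bob"))    # False: everyone else never does
print(visible_to(c, None))     # False: neither do logged-out readers
```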
“Most users don’t notice it and so never complain about it.”
I don’t believe this can be true for active (and reasonably smart) users. If, suddenly, none of your comments gets any replies at all and you know about the existence of hellbans, well… Besides, they are trivially easy to discover by making another account. Anyone with sockpuppets would notice a hellban immediately.
I think you would be surprised at how effective shadow bans are. Most users just think their comments haven’t gotten any replies by chance and eventually lose interest in the site. Or in some cases they keep making comments for months. The only way to tell is to look at your user page while signed out. And even that wouldn’t work if they started to track cookies or IP addresses instead of just the account you are signed in on.
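The “look at your user page signed out” check is easy to script. This sketch assumes old reddit’s behavior of returning a 404 for a shadowbanned account’s profile to logged-out viewers; the username is a placeholder and the behavior may differ today:

```python
# Rough shadowban check: historically, a shadowbanned reddit account's
# profile page returned 404 to anyone not logged in as that account.
# This is an assumption about old reddit behavior, not a guaranteed API.
import requests

def looks_shadowbanned(username: str) -> bool:
    url = f"https://www.reddit.com/user/{username}/about.json"
    resp = requests.get(url, headers={"User-Agent": "shadowban-check/0.1"})
    return resp.status_code == 404  # 404 for an account you know exists

print(looks_shadowbanned("some_username"))  # placeholder username
```

As the comment notes, even this would stop working if visibility were keyed to cookies or IP address rather than the signed-in account.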
But shadow bans are a pretty extreme example of silent moderation. My point was that removing individual comments almost always goes unnoticed. /r/Technology had a bot that automatically removed all posts about Tesla for over a year before anyone noticed. Moderators set up all kinds of crazy regexes on posts and comments to keep unwanted topics away. And users have no idea whatsoever.
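For illustration, here is roughly what such a topic-blocking rule amounts to, in the spirit of the /r/Technology example. AutoModerator’s real rules are a YAML config, so this Python version is only a sketch of the idea:

```python
# A regex-based topic filter of the kind moderators wire into auto-removal.
# The pattern mirrors the /r/Technology anecdote; any matching submission
# is silently dropped before ordinary users ever see it.
import re

BLOCKED = re.compile(r"\btesla\b", re.IGNORECASE)

def should_remove(title: str, body: str = "") -> bool:
    return bool(BLOCKED.search(title) or BLOCKED.search(body))

for title in ["Tesla announces new battery", "SpaceX launch thread"]:
    print(title, "->", "removed" if should_remove(title) else "kept")
```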
The Streisand effect is false.
Is there a way to demonstrate that? :-)
There’s this reddit user who didn’t realize ve was shadowbanned for three years: https://www.reddit.com/comments/351buo/tifu_by_posting_for_three_years_and_just_now/
Yeah, and there are women who don’t realize they’re pregnant until they start giving birth.
The tails are long and they don’t tell you much about what’s happening in the middle.
Note Houshalter said “most users”.
I’m new to the subject, so I’m sorry if the following is obvious or completely wrong, but the comment left by Eliezer doesn’t seem like something that would be written by a smart person who is trying to suppress information. I seriously doubt that EY didn’t know about the Streisand effect.
However the comment does seem like something that would be written by a smart person who is trying to create a meme or promote his blog.
In HPMOR, characters give each other the advice “to understand a plot, assume that what happened was the intended result, and look at who benefits.” The idea of Roko’s basilisk went viral and lesswrong.com got a lot of traffic from popular news sites (I’m assuming).
I also don’t think that there’s anything wrong with it, I’m just sayin’.
The line goes “to fathom a strange plot, one technique was to look at what ended up happening, assume it was the intended result, and ask who benefited”. But in the real world strange secret complicated Machiavellian plots are pretty rare, and successful strange secret complicated Machiavellian plots are even rarer. So I’d be wary of applying this rule to explain big once-off events outside of fiction. (Even to HPMoR’s author!)
I agree Eliezer didn’t seem to be trying very hard to suppress information. I think that’s probably just because he’s a human, and humans get angry when they see other humans defecting from a (perceived) social norm, and anger plus time pressure causes hasty dumb decisions. I don’t think this is super complicated. Though I hope he’d have acted differently if he thought the infohazard risk was really severe, as opposed to just not-vanishingly-small.
“The comment left by Eliezer doesn’t seem like something that would be written by a smart person who is trying to suppress information. I seriously doubt that EY didn’t know about the Streisand effect.”
No worries about being wrong. But I definitely think you’re overestimating Eliezer, and humanity in general. Thinking that calling someone an idiot for doing something stupid, and then deleting their post, would cause a massive blow-up of epic proportions, is something you can really only predict in hindsight.
Perhaps this did generate some traffic, but LessWrong doesn’t have ads. And any publicity this generated was bad publicity, since Roko’s argument was far too weird to be taken seriously by almost anyone.
It doesn’t look like anyone benefited. Eliezer made an ass of himself. I would guess that he was rather rushed at the time.
At worst, it’s a demonstration of how much influence LessWrong has relative to the size of its community. Many people who don’t know this site exists know about Roko’s basilisk now.
Well, there is the philosophy that “there’s no such thing as bad publicity”.