This community has a virtue of taking weird ideas seriously. Roko came up with a weird idea which, the more seriously you took it, the more horrifying it became. This was deemed an info hazard and censored in some way; I don’t know how. But the people who didn’t take it seriously in the first place weren’t horrified by the idea, and so were confused about why it should have been censored, which boosted the Streisand effect.
Eh. Up to a point. And then if you take it more seriously than that, it becomes less horrifying again.
Arguments for why it’s scary are the decision-theory equivalent of someone describing how scary knives are, and how to make your own sharp knives, while never mentioning any knife safety tips.
“Sharp knives,” in this metaphor, is the recognition that other people might try to manipulate us, and the decision theory of why they’d do it and how it would work. “Knife safety” is our own ability to use decision theory to not get manipulated.
The reason I think Roko’s basilisk is a net-harmful idea is that there are a lot of people who are far more motivated to learn and talk about “cool” topics like sharp, scary knives or ideas that sound dangerous, and not nearly as motivated to learn about “boring” topics like knife safety or mathy decision theory. For people who happen to allocate their attention this way (maybe you’re an edgy young adult, or maybe you’re a naturally anxious person, or maybe you’re an edgy anxious person), the idea might just make them more anxious or otherwise mislead them.
If you don’t know, why try to answer?
In general, your post is pretty misleading. It was not censored because the idea itself horrified people.
The idea was either wrong, in which case preventing people from reading a wrong idea is net beneficial because it steers them toward better ideas, or the idea was right, which suggests it’s dangerous. EY censored it because he believed that in neither case would it be valuable to have the post on LessWrong, and maybe out of a general precautionary principle. You don’t need to be horrified by something to apply the precautionary principle.
I recall that among the reasons given was that it had triggered severe reactions in some members of the community. “Horrified” would be a mild way to describe the reaction that was claimed at the time. There was discussion about whether the idea itself could be self-fulfilling and thus inherently dangerous (in addition to the pain caused to some people by thinking about it), but that didn’t last long.
I don’t think simply being wrong would have been enough to try to censor it.
The precautionary principle mattered.
Jessica wrote a post about how the people at MIRI at the time thought about keeping potentially dangerous information secret. It was a pretty extreme stance, and their trying to keep secret information that Jessica would have seen as more trivial drove her into schizophrenia.
The intellectual atmosphere was not one of treating ideas as dangerous simply because they horrify people. MIRI had complex ideas about secrecy that they took very seriously, and if you ignore those and instead frame the motivation as “people did something because they were horrified,” you project decision heuristics onto EY that he didn’t use.
Fair enough. I’m not part of the Bay Area rationalist community, and I suspect there was a lot of stuff going on that didn’t appear in public posts or discussion on the topic. People (including Eliezer and others) are complicated, and there are both private and public reasons for actions, as well as reasons that aren’t easily understood, even by the actors.
BTW, none of this explains why lincolnquirk’s comment was strong-downvoted. Even if it’s incorrect (though it doesn’t seem to me to be—more incomplete), it’s not harmful or wasteful.
I don’t think that someone who, by their own admission, doesn’t have a good understanding should offer up their explanation of a rumor in a case like this.