Self-censorship to protect our own mental health? Stupid.
My gloss on it is that this is at best a minor part, though it figures in.
The topic is an idea whose horrific implications are supposedly made more likely the more one thinks about it. Thinking about it in order to figure out what it may be is also a bad idea, because you may come up with something else that is just as dangerous. And if the horrific outcome is horrific enough, even a small rise in the probability of it happening would be very bad in expectation.
More explanation of why many won’t think it dangerous at all. This doesn’t directly point anything out, but any details do narrow the search-space: V fnl fhccbfrqyl orpnhfr lbh unir gb ohl va gb fbzr qrpvqrqyl aba-znvafgernz vqrnf gung ner pbzzba qbtzn urer.
I personally don’t buy this, and think the censorship is an overblown reaction. Accepting it is definitely not crazy, however, especially given the stakes, and I’m willing to self-censor to some degree, even though I hate the heavy-handed response.
Another perspective: I read the forbidden idea, understood it, but I have no sense of danger because (like the majority of humans) I don’t really live my life in a way that’s consistent with all the implications of my conscious rational beliefs. Even though it sounded like a convincing chain of reasoning to me, I find it difficult to have a personal emotional reaction or change my lifestyle based on what seem to be extremely abstract threats.
I think only people who are very committed rationalists would find that there are topics like this which could be mental health risks. Of course, that may include much of the LW population.
How about an informed consent form:
(1) I know that the SIAI mission is vitally important.
(2) If we blow it, the universe could be paved with paper clips.
(3) Or worse.
(4) I hereby certify that points 1 & 2 do not give me nightmares.
(5) I accept that if point 3 gives me nightmares that points 1 and 2 did not give me, then I probably should not be working on FAI and should instead go find a cure for AIDS or something.
I feel you should flesh out point (1) a bit more (explain what the SIAI intends to do), but I agree with the principle. Upvoted.
I like it!
Although point 5 could easily be replaced by “Go earn a lot of money in a startup, never think about FAI again, but still donate money to SIAI because you remember you have some good reason to, one that you don’t want to think about explicitly.”
I read the idea, but it seemed to have basically the same flaw as Pascal’s wager does. On that ground alone it seemed like it shouldn’t be a mental risk to anyone, but it could be that I missed some part of the argument. (Didn’t save the post.)
My analysis was that it described a real danger. Not a topic worth banning, of course—but not as worthless a danger as the one that arises in Pascal’s wager.
I think that, even if this is a minor part of the reasoning for those who (unlike me) believe in the danger, it could easily be the best, most consensus* basis for an explicit deletion policy. I’d support such a policy, and definitely think a secret policy is stupid for several reasons.
*no consensus here will be perfect.