As I read it, the policy does not address the basilisk and basilisk-type issues, which, while I don’t think they should be moderated, are. “Information Hazards” specifically says “not mental health reasons.”
My point was that it doesn’t cause mental health problems, not that it can’t trigger them. Perhaps that was a bad way to put it. If it does trigger them, there’s something beyond the information hazard going on: either an existing problem being triggered, or a multiple hazard. As I understand it, a basilisk is hazardous simply because you know the argument, without it needing to corrupt your reasoning abilities. Roko’s is alleged to be hazardous even to a rational agent. (I don’t think it is, and I think censoring it prevents an interesting debate about why. I don’t plan to say any more, given the existing censorship policies. If this is already too much, please let me know and I will edit accordingly.)
As I read it, the policy does not address the basilisk and basilisk-type issues
It does, in as much as it includes:
8) Topics we have asked people to stop discussing.
This particular entry makes all the others more or less redundant. That is perhaps better than only having the “Information Hazards” clause, because Eliezer deleting something based on “Eliezer says so” is at least coherent and unambiguous. It doesn’t matter whether a post by Roko is actually dangerous; the “says so” clause can still cover it, and we can just roll our eyes and tolerate Eliezer’s quirks.
As I read it, the policy does not address the basilisk and basilisk-type issues, which, while I don’t think they should be moderated, are. “Information Hazards” specifically says “not mental health reasons.”
A true basilisk is not a mental health risk, or at least not only that. Whether one has actually been found is a separate question (I lean toward no).
IIRC, allegedly there were a few people with OCD having nightmares after reading that post by Roko.
My point was that it doesn’t cause mental health problems, not that it can’t trigger them. Perhaps that was a bad way to put it. If it does trigger them, there’s something beyond the information hazard going on: either an existing problem being triggered, or a multiple hazard. As I understand it, a basilisk is hazardous simply because you know the argument, without it needing to corrupt your reasoning abilities. Roko’s is alleged to be hazardous even to a rational agent. (I don’t think it is, and I think censoring it prevents an interesting debate about why. I don’t plan to say any more, given the existing censorship policies. If this is already too much, please let me know and I will edit accordingly.)
Quantum roulette is a possible candidate.
Well, the “LW basilisk” just turned out to be a knife sharp enough to cut yourself with. And sometimes you need sharp knives.
It does, in as much as it includes:
This particular entry makes all the others more or less redundant. That is perhaps better than only having the “Information Hazards” clause, because Eliezer deleting something based on “Eliezer says so” is at least coherent and unambiguous. It doesn’t matter whether a post by Roko is actually dangerous; the “says so” clause can still cover it, and we can just roll our eyes and tolerate Eliezer’s quirks.
Well, his attempt here is to lay out a bit more than “Because Eliezer says so” as a reason.