The following is further thinking on the issue, not necessarily disagreement with your points:
Your comment is close to advocating compartmentalisation for mental health: the deliberate choice to have a known bad map. Compartmentalisation is an intellectual sin, because reality is all of a piece.
We can’t go to absolutes. Historically, “someone warned me off this information” has been badly counterproductive. Lying to oneself about the world is bad; a society lying to itself about the world has historically been disastrous.
How much science has exploded people’s heads as if they’d seen a very small basilisk? Quite a lot.
That said, decompartmentalising too quickly can mean decompartmentalising toxic waste, which brings problems of its own. Humans are apes with all manner of evolved holes in their thinking.
What I’m saying is that even though dangerous stuff is dangerous, a programme for learning to handle it strikes me as really not optional.
(And this is not to say anything about my opinion of Suicide Note, the fat rambling book-length PDF this post is about, which I dipped into at random and rapidly consigned to the mental circular file. I’d think anyone susceptible to this one is already on the edge and could be pushed over by anything whatsoever. I realise I’m typical-minding there, of course.)
We can’t go to absolutes. Historically, “someone warned me off this information” has been badly counterproductive.
There are lots of warnings about information that’s supposedly wrong, or confusing, but these are relatively easy information hazards to defend against. If the only danger of a text is that it’s wrong, then being told why it’s wrong is sufficient protection to read it. Highly confused/confusing text is a little more dangerous—reading lots of postmodernism would be bad for you—but the danger there is only in trying to make sense of it where there is no sense to be made, so, again, a warning should be sufficient defense.
I think warnings about information being actively harmful have been pretty rare, though. I can think of a few major categories, and some one-offs.
There’s information that would destroy faith in a religion, and information that would alter political allegiance. These seem like obvious false alarms (since speakers have a motivation for warning falsely). In fact, the presence of a warning like that is usually evidence that you should read it.
I wouldn’t call any of these classes basilisks. Information hazards, maybe, but weak ones. But then there are the rare one-offs, the ones that people have called basilisks, with confirmed deaths or psychological injuries to their credit. These are clearly not in the same league. They genuinely do require careful handling. And because they’re rare one-offs, handling them carefully won’t consume inordinate resources; and as long as you’re making an explicit risk-benefit calculation, you can factor in the expected value of whatever it is you’re blinding yourself to, so they won’t blind you to very much.
Compartmentalization is bad in general, but expected utility trumps all. Every heuristic has its exceptions, and information-is-good is only a heuristic.
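(To make that concrete, here is a minimal sketch of the calculation in Python, with entirely made-up numbers; every probability and utility in it is an assumption for illustration, not an estimate of any real hazard.)

    # Toy expected-utility comparison for reading a putative basilisk.
    # All the numbers are illustrative assumptions, not real estimates.
    p_harm = 0.01           # assumed probability the text actually harms you
    cost_of_harm = -1000.0  # assumed disutility if it does
    value_of_info = 5.0     # assumed value of whatever the text would teach you

    ev_read = p_harm * cost_of_harm + (1 - p_harm) * value_of_info
    ev_skip = 0.0           # baseline: stay blind to this one thing

    print(f"EV(read) = {ev_read:.2f} vs EV(skip) = {ev_skip:.2f}")
    # Here EV(read) = -5.05 < EV(skip) = 0.00, so you skip this one.
    # Make value_of_info large enough, or p_harm small enough, and the
    # information-is-good heuristic reapplies.

The point is only that whatever you blind yourself to sits in the same ledger as the harm, which is why a rare one-off costs you very little sight.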
What I’m saying is that even though dangerous stuff is dangerous, a programme for learning to handle it strikes me as really not optional.
It seems to me that starting with analysis at a distance is a necessary (but not necessarily sufficient) precaution in that handling.
I think warnings about information being actively harmful have been pretty rare, though.
There are very few, if any, societies without censorship.
But then there are the rare one-offs, the ones that people have called basilisks, with confirmed deaths or psychological injuries to their credit. These are clearly not in the same league.
I need examples, more than the present post (“hey, here’s a rambling crackpot 2000-page suicide note”) or, in the case of the LessWrong forbidden post, individuals with known mental disabilities (OCD) getting extremely upset. (I don’t deny that they were upset, or that this is something to consider; I do deny it’s enough reason for complete suppression.)
It seems to me that starting with analysis at a distance is a necessary (but not necessarily sufficient) precaution in that handling.
Would your criteria ban the song “Gloomy Sunday”?
No. Mainly because enforcing a ban on any song requires arranging society in a bad way. Also because I don’t consider the mood shift from a depressing song to be much of a harm, and the title is sufficient warning for anyone who wouldn’t want to listen to something gloomy. However, my criteria would imply that you should think twice before adding it to your playlist, and thrice if people subscribe to that playlist who don’t want to, or ought not to want to, listen to it.
It’s catalogue-of-citable-examples time, then.
Claims of real-life examples of the motif of harmful sensation are not rare at all. Substantiated ones are rather less common.
Also because I don’t consider the mood shift from a depressing song to be much of a harm, and the title is sufficient warning for anyone who wouldn’t want to listen to something gloomy.
Sorry, I should have given a link. I speak of the claims of it inducing suicide.
What I’m saying is that you need actual evidence before invoking claims of harmful sensation.
jimrandomh had other very apposite comments in private message which I’ve responded to. I don’t think we deeply disagree on anything much to do with the issue of the necessity of learning to stare back at basilisks.