Strongly agree with this. If you truly believe that a piece of text is harmful to those who read it, then you should also believe that it is immoral (under utilitarianism) to spread it to people who might be vulnerable.
My position is that nihilism is dangerous, but only to those who are stuck on the idea of a universal utility function, and don’t have a personal utility function (or the idea of a personal utility function) to fill in the void when that idea is shown to be unworkable. So it certainly can be read safely, but there are non-optional prerequisites for safe handling. You should be careful about posting anything more about this until you’re confident that you understand what those prerequisites are, and have written introductory text that fulfills them.
The trouble is that a rationalism that fails to deal with possible basilisks … fails.
We don’t all have an impregnable mental fortress. (And anyone claiming they do is speaking foolishly; they might do, but they can’t possibly be certain.) But an inability to deal with such things is, nevertheless, a failure of rationality.
So let’s do something useful. How do we train a rationalist to safely outstare a basilisk and turn Medusa to stone?
In general, if you want to outstare a basilisk, you need to reason about it from a distance first: gather abstract information about its severity, defenses, and prerequisites before reading it, then perform an explicit risk-benefit calculation to decide whether to read it at all. And if people who’ve encountered it say that you shouldn’t read it, accept that. There is no such thing as a reliable, fully general defense. There exist classes of information, such as communication from malignant superintelligences able to fully simulate the receiver, for which defense is believed to be impossible even in principle, and for which refusing to read or listen is the best answer.
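For concreteness, here is a minimal sketch, in Python, of what that explicit risk-benefit calculation might look like. Every name and number in it is a hypothetical illustration, not a model of any particular hazard:

```python
# A minimal sketch of the explicit risk-benefit calculation described above.
# All probabilities and utilities are invented for illustration.

def expected_value_of_reading(p_harm: float, harm: float, benefit: float) -> float:
    """Expected utility of reading: with probability p_harm the text is
    genuinely harmful (costing `harm` utility); otherwise it yields
    `benefit` utility worth of information."""
    return (1 - p_harm) * benefit - p_harm * harm

def should_read(p_harm: float, harm: float, benefit: float) -> bool:
    # Read only if reading beats abstaining, which here has utility 0.
    return expected_value_of_reading(p_harm, harm, benefit) > 0

# A text that credible people warn against, with only modest upside:
print(should_read(p_harm=0.1, harm=100.0, benefit=5.0))   # False -> don't read
# A mild warning against a text with real informational value:
print(should_read(p_harm=0.01, harm=10.0, benefit=5.0))   # True -> read
```

The hard part, of course, is estimating the inputs, which is exactly what the at-a-distance information gathering is for.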
Note that someone who believes an information hazard is genuinely dangerous will refuse to provide details about its contents, and this can severely bias discussions about whether it’s dangerous or not. Don’t analyze debates about information hazards the way you’d analyze normal debates. If one side says it’s safe, and gives reasons, and the other side says it’s dangerous but refuses to give their reasons, then you should assume it’s dangerous. (You can still judge the is-dangerous side based on their qualifications and on how accurate they are when speaking on other subjects, though.)
The following is thinking further on the issue, not necessarily disagreement with your points:
Your comment is close to advocating compartmentalisation for mental health: the deliberate choice to have a known bad map. Compartmentalisation is an intellectual sin, because reality is all of a piece.
We can’t go to absolutes. Historically, “someone warned me off this information” has been badly counterproductive. Lying to oneself about the world is bad; a society lying to itself about the world has historically been disastrous.
How much science has exploded people’s heads as if they’d seen a very small basilisk? Quite a lot.
That said, decompartmentalising too quickly can mean decompartmentalising toxic waste, and that causes problems of its own. Humans are apes with all manner of evolved holes in their thinking.
What I’m saying is that even though dangerous stuff is dangerous, a programme for learning to handle it strikes me as really not optional.
(And this is not to say anything about my opinion of Suicide Note, the fat rambling book-length PDF this post is about, which I dipped into at random and rapidly consigned to the mental circular file. I’d think anyone susceptible to this one is already on the edge and could be pushed over by anything whatsoever. I realise I’m typical-minding there, of course.)
> We can’t go to absolutes. Historically, “someone warned me off this information” has been badly counterproductive.
There are lots of warnings about information that’s supposedly wrong, or confusing, but these are relatively easy information hazards to defend against. If the only danger of a text is that it’s wrong, then being told why it’s wrong is sufficient protection to read it. Highly confused/confusing text is a little more dangerous—reading lots of postmodernism would be bad for you—but the danger there is only in trying to make sense of it where there is no sense to be made, so, again, a warning should be sufficient defense.
I think warnings about information being actively harmful have been pretty rare, though. I can think of a few major categories, and some one-offs.
There’s information that would destroy faith in a religion, and information that would alter political allegiance. These seem like obvious false alarms, since those issuing the warnings have a motive to warn falsely. In fact, the presence of a warning like that is usually evidence that you should read it.
I wouldn’t call any of these classes basilisks. Information hazards, maybe, but weak ones. But then there’re rare one-offs, the ones that people have called basilisks, with confirmed deaths or psychological injuries to their credit. These are clearly not in the same league. They genuinely do require careful handling. And because they’re rare one-offs, handling them carefully won’t consume inordinate resources. As long as you’re making an explicit risk-benefit calculation, you can factor in the expected value of whatever you’re blinding yourself to, so they won’t blind you to very much.
Compartmentalization is bad in general, but expected utility trumps all. Every heuristic has its exceptions, and information-is-good is only a heuristic.
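As a toy illustration of expected utility trumping the information-is-good heuristic, the earlier sketch can be extended so that abstaining carries its own cost. Here `cost_of_blindness`, like every other figure, is a made-up assumption standing in for the expected value of whatever you forgo by not reading:

```python
# Hypothetical extension of the earlier sketch: refusing to read is not free.
# `cost_of_blindness` stands for the expected value of whatever you forgo by
# not reading; all figures are illustrative assumptions.

def should_read_v2(p_harm: float, harm: float, benefit: float,
                   cost_of_blindness: float) -> bool:
    ev_read = (1 - p_harm) * benefit - p_harm * harm
    ev_abstain = -cost_of_blindness
    return ev_read > ev_abstain

# The same scary-sounding text from the earlier example becomes worth
# reading once blinding yourself to it is expensive enough:
print(should_read_v2(p_harm=0.1, harm=100.0, benefit=5.0,
                     cost_of_blindness=20.0))  # True -> read
```

Under these invented numbers, the warned-against text from the first example flips from not-worth-reading to worth-reading once blindness is expensive enough; that is all “expected utility trumps all” claims.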
> What I’m saying is that even though dangerous stuff is dangerous, a programme for learning to handle it strikes me as really not optional.
It seems to me that starting with analysis at a distance is a necessary (but not necessarily sufficient) precaution in that handling.
> I think warnings about information being actively harmful have been pretty rare, though.
There are very few, if any, societies without censorship.
> But then there’re rare one-offs, the ones that people have called basilisks, with confirmed deaths or psychological injuries to their credit. These are clearly not in the same league.
I need examples, more than the present post (“hey, here’s a rambling crackpot 2000-page suicide note”) or, in the case of the LessWrong forbidden post, individuals with known mental disabilities (OCD) getting extremely upset. (I don’t deny that they were upset, or that this is something to consider; I do deny it’s enough reason for complete suppression.)
> It seems to me that starting with analysis at a distance is a necessary (but not necessarily sufficient) precaution in that handling.

Would your criteria ban the song “Gloomy Sunday”?
No. Mainly because enforcing a ban on any song requires arranging society in a bad way. Also because I don’t consider the mood shift from a depressing song to be much of a harm, and the title is sufficient warning for anyone who wouldn’t want to listen to something gloomy. However, my criteria would imply that you should think twice before adding it to your playlist, and thrice if people who don’t want to (or ought not to want to) listen to it subscribe to that playlist.
It’s catalogue-of-citable-examples time, then.
Claims of real-life examples of the motif of harmful sensation are not rare at all. Substantiated ones are rather less common.
Sorry, I should have given a link. I speak of the claims of it inducing suicide.
What I’m saying is that you need actual evidence before invoking claims of harmful sensation.
jimrandomh had other very apposite comments in private message which I’ve responded to. I don’t think we deeply disagree on anything much to do with the issue of the necessity of learning to stare back at basilisks.