It depends on the meme in question.
Some are relatively harmless, like The Game: they are easy to overcome, and they cause minimal suffering to those who don't overcome them.
Some respect the use-mention distinction, like those described in BLIT and the comp.basilisk FAQ, making it possible to learn and think about them without suffering their effects.
These two don’t really fit the use of “basilisk” I’ve heard (even though the second coined the term, IIRC), because they are not “ideas, knowing about which causes great harm (in expectation)”. You are saying that there are two distinct approaches:
Inoculation: the idea is close enough to omnipresent that someone is very likely to run into it (or invent it); for basilisks of this sort, focusing on prevention and treatment is probably best.
Containment: the idea is esoteric, and/or it cannot be treated; for basilisks of this sort, the only solution is to signal-boost the possibility of their existence and to insist on the virtue of silence on any instances actually found.
If we accept the term "basilisk" to include those that should be treated by inoculation (I'm leaning against this, as it de-fangs, so to speak, the term when used to refer to the other sort), then the drowning child argument is a perfect example: it can cause great emotional stress, and you're likely to run into it if you take any philosophy class or read any EA material, but there are many ways to defuse the argument, some of which come very naturally to most people.
Obviously, even if I had an example of the latter type, I wouldn't reference it here, but I think that such things might exist, and there's value in staying wary of them.
Following on from the BLIT link, we can now do something similar to deep learning networks: adversarial inputs that reliably fool them. We can even make adversarial patches that work in realspace, a kind of machine basilisk, if you will.
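For anyone curious what "something similar" looks like in practice, here's a minimal sketch of the digital version: a fast-gradient-sign-style perturbation against an off-the-shelf classifier. This is just an illustrative toy, not the patch technique itself; the model, the `image`/`label` tensors, and the epsilon value are all assumptions for the example.

```python
# Hedged sketch: an FGSM-style adversarial example in PyTorch.
# Assumes a pretrained ResNet-18 and an input image tensor in [0, 1];
# this illustrates the general idea, not any specific paper's patch method.
import torch
import torch.nn.functional as F
import torchvision.models as models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

def fgsm_perturb(image, label, epsilon=0.03):
    """Return image nudged by epsilon * sign(grad of loss w.r.t. image)."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step in the direction that increases the loss, then clamp to valid pixels.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()

# Usage (assuming `image` is a 1x3x224x224 tensor and `label` a LongTensor):
# adv = fgsm_perturb(image, label)
# model(adv).argmax(dim=1) often differs from model(image).argmax(dim=1)
```

A printed adversarial patch is the same idea pushed further: instead of an imperceptible whole-image nudge, you optimize a small, localized pattern that keeps working when photographed in the real world.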