I think within a Bayesian framework, where in general you assume information has positive value, it's useful to have an explicit term for the cases where that is not true. Such cases are relatively rare, and as such your usual ways of dealing with information will probably backfire when you hit one.
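For concreteness, the standard result I have in mind is the non-negativity of the expected value of information for a single Bayesian agent who acts optimally on what it learns (a textbook sketch, not anything specific to the infohazard discussion):

\[
\mathbb{E}_x\!\Big[\max_a \, \mathbb{E}\big[U(a)\mid x\big]\Big] \;\ge\; \max_a \, \mathbb{E}\big[U(a)\big]
\]

That is, learning an observation x can never lower your expected utility in expectation, so long as you are the only actor and you use the information optimally. As I understand it, infohazards are exactly the cases where those assumptions fail: other (careless or adversarial) agents also get the information, or you yourself can't act on it well.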
The obvious things to do are to not learn the information in the first place (i.e. avoid dangerous research), to understand and address the reasons the information is dangerous (e.g. the fact that you can't coordinate on not building dangerous technology), or, as a last resort, to silo the information and limit its spread.
I do think it would be useful to have different words that distinguish between “infohazard to the average individual” and “societal infohazard”. The first is exceedingly rare. The second is still rare but more common, because society contains such a huge distribution of beliefs, and enough crazy people, that if information can be used dangerously, there is a non-trivial chance it will be.
I still like the term “recipe for destruction” when it's limited to things like dangerous technology.
I think a lot of my underlying instinctive opposition to this concept boils down to thinking that we can and do coordinate on this stuff quite a lot. Arguably, AI is the weird counterexample, a thought that wants to be thunk: modern Western society is very nearly tailor-made to pursue something abstract, maximizing, knowledge-systematizing, and useful, especially if it fills the hole left by the collapse of organized religion.
I think for most other infohazards, the proper approach is to set up an (often governmental) team that handles them, which requires those employees to expose themselves to the infohazard in order to manage it. And, yeah, sometimes they suffer real damage from it. There's no way to analyze ISIS beheading videos in order to stop the perpetrators without watching some beheading videos; that's the more common variety of infohazard I have in mind.