I’m pretty sure that I think “infohazard” is a conceptual dead end that embeds some really false understandings of how secrets are used by humans. It is an orphan of a concept—it doesn’t go anywhere. Ok, the information’s harmful. You need humans to touch that info anyways to do responsible risk-mitigation. So now what?
That “so now what” doesn’t sound like a dead end to me. The question of how to mitigate risk when normal risk-mitigation procedures are themselves risky seems like an important one.
I agree that it’s not terribly useful beyond identifying someone’s fears. Using almost any taxonomy to specify what the speaker is actually worried about lets you stop saying “infohazard” and start talking about “bad actor misuse of information” or “naive user tricked by partial (but true) information”. These ARE often useful, even though the aggregate term “infohazard” is limited.
See e.g. Table 1 of https://nickbostrom.com/information-hazards.pdf
Yeah, that’s a useful taxonomy to be reminded of. I think it’s interesting how the “development hazard”, item 8, with maybe a smidge of “adversary hazard”, is the driver of people’s thinking on AI. I’m pretty unconvinced that good infohazard doctrine, even for AI, can be written based on thinking mainly about that!
I suggest there is a concept distinct enough to warrant the special term, but if it’s expansive enough to include secrets (beneficial information that some people prefer others not know), that expansiveness renders the term worthless.
“Infohazard” ought to be reserved for information that harms the mind that contains it, with spoilers as the mildest examples and SCP-style horrors as the extreme fictional ones.
I think within a Bayesian framework, where in general you assume information has positive value, it’s useful to have an explicit term for when that is not the case. It’s a relatively rare occurrence, and as such your usual ways of dealing with information will probably backfire.
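To spell out the background claim, here is a minimal sketch, assuming a lone Bayesian expected-utility maximizer who can condition on an observation X before choosing an action a (the single-agent decision-theoretic setting, not the multi-agent one):

VOI(X) = E_X[ max_a E[U | a, X] ] − max_a E[U | a] ≥ 0

since the agent can always ignore X and do no worse. Calling something an infohazard amounts to claiming the assumptions behind this inequality fail: other agents react to your knowing, the information damages the mind holding it, or you cannot in practice ignore it once learned.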
The obvious things to do are to not learn the information in the first place (i.e. avoid dangerous research), to understand and address the reasons the information is dangerous (e.g. because you can’t coordinate on not building dangerous technology), or, as a last resort, to silo the information and limit its spread.
I do think that it would be useful to have different words that distinguish between “infohazard to the average individual” and “societal infohazard”. The first one is really exceedingly rare. The second one is still rare but more common, because society has a huge distribution of beliefs and enough crazy people that if information can be used dangerously, there is a non-trivial chance it will be.
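As a toy model with made-up numbers, assuming each exposed person misuses the information independently: if N people encounter it and each misuses it with small probability p, the chance of at least one misuse is 1 − (1 − p)^N. With p = 10^-6 and N = 10^7, that is roughly 1 − e^-10 ≈ 0.99995. Per-individual risk stays tiny while societal risk approaches certainty, which is the gap between the two senses of the word.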
I still like the term “recipe for destruction” when limiting it to stuff similar to dangerous technology.
I think a lot of my underlying instinctive opposition to this concept boils down to thinking that we can and do coordinate on this stuff quite a lot. Arguably, AI is the weird counterexample of a thought that wants to be thunk—I think modern Western society is very nearly tailor-made to seek a thing that is abstract, maximizing, systematizing of knowledge, and useful, especially if it fills a hole left by the collapse of organized religion.
I think for most other infohazards, the proper approach requires setting up an (often government) team that handles them, which requires those employees to expose themselves to the infohazard in order to manage it. And, yeah, sometimes they suffer real damage from it. There’s no way to analyze ISIS beheading videos to stop their perpetrators without seeing some beheading videos; I think that’s the more common variety of infohazard I’m thinking of.
Ok, the information’s harmful. You need humans to touch that info anyways to do responsible risk-mitigation. So now what?

I think one of the points is that you should now focus on selective rather than corrective or structural means to figure out who is nonetheless allowed to work on the basis of this information.
Calling something an infohazard, at least in my thinking, generally implies both that:
any attempts to devise galaxy-brained incentive structures that try to get large groups of people to nonetheless react in socially beneficial ways when they access this information are totally doomed and should be scrapped from the beginning.
you absolutely should not give this information to anyone you doubt would handle it well; musings along the lines of “but maybe I can teach/convince them later on what the best way to go about this is” are generally wrong and should also be dismissed.
So what do you do if you nonetheless require that at least some people keep track of things? Well, as I said above, you use selective methods instead. More precisely, you carefully curate a very short list of responsible people who likely also share your meta views on how dangerous truths ought to be handled, and you do your absolute best to make sure the group never expands beyond those you have already vetted as capable of handling the situation properly.
At the meta level, I very much doubt that I am responsible enough to create and curate such a list for the most dangerous hazards. For example, I am very confident that I could not 100% successfully detect a foreign government spy inside my friend group, because even the US intelligence community can’t do that… you need other mitigating controls instead.