I agree that it’s not terribly useful beyond identifying someone’s fears. Using almost any taxonomy to specify what the speaker is actually worried about lets you stop saying “infohazard” and start talking about “bad actor misuse of information” or “naive user tricked by partial (but true) information”. These ARE often useful, even though the aggregate term “infohazard” is limited.
See e.g. Table 1 of https://nickbostrom.com/information-hazards.pdf
Yeah, that’s a useful taxonomy to be reminded of. I think it’s interesting that the “development hazard” (item 8 in that table), with maybe a smidge of “adversary hazard”, is what drives most people’s thinking on AI. I’m pretty unconvinced that good infohazard doctrine, even for AI, can be written by thinking mainly about that!