See e.g. Table 1 of https://nickbostrom.com/information-hazards.pdf
Yeah, that’s a useful taxonomy to be reminded of. It’s interesting that the “development hazard” (item 8), with maybe a smidge of “adversary hazard”, is what drives most people’s thinking on AI. I’m pretty unconvinced that good infohazard doctrine, even for AI, can be written by thinking mainly about that!