I’m not personally concerned about what Bostrom called ‘risks of irrationality and error’ or ‘risks to valuable states and activities’. There are costs of rationality, though, where knowing just a little can expose you to harms you’re not yet equipped to handle (classic examples: scope sensitivity, demandingness, death). This rounds to common sense: ‘be sensitive about when/whether/how to discuss upsetting topics’.
Mostly, though, I’m inclined to keep quiet about data, idea, and attention hazards that my teenage self might have wanted to share just because they’re interesting (like the antibiotic-gradient trick), at least absent some benefit beyond having a fun discussion. Threat models for election security, yes: there’s a clear public interest in everyone understanding the tradeoffs between paper and electronic ballots, or between remote and polling-place voting. Ideas for asymmetric warfare, not so much.
Does this include extreme examples, such as pieces of information that permanently damage your mind when you’re exposed to them, or antimemes?
Have you made any changes to your personal life because of this?
Excellent question!