Interesting. I’d like to explore the distinction between “risk of converging on a dis-preferred social equilibrium” (which I’d frame as “making others aware that this equilibrium is feasible”) and other kinds of revealing information that others can use to act in ways you don’t like. I don’t see much difference.
The more obvious cases (“here are plans for a gun that I’m especially vulnerable to”) don’t get used much unless you have explicit enemies, while the more subtle ones (“I can imagine living in a world where people judge you for scratching your nose with your left hand”) require less intentional harm directed at you. But the mechanism and the info-risk are the same.
For one thing, the equilibrium might not actually be feasible, but making others aware that you have thought about it might nevertheless have harmful effects (e.g., they might mistakenly think that it is feasible, or they might correctly realize that something in the vicinity is). For another, “teach others something that can be used against you,” while technically describing the sort of thing I’m talking about, tends to conjure up a very different image in the reader’s mind, one more like your gun-plans example.
I agree there is probably not a sharp distinction between these. (I don’t know; I hadn’t thought about it.) I wrote this shortform because I thought of this as a somewhat new idea: I had thought most infohazard talk focused on other kinds of examples. Thank you for telling me otherwise!
(Oops, I now realize this probably came across wrong.) Sorry! I didn’t intend to be telling you things, nor did I mean to imply that pointing out more subtle variants of known info-hazards is useless. I really appreciate the topic, and I’m happy to have exactly as much text as we have exploring non-trivial applications of the infohazard concept and helping identify whether further categorization is helpful (I’m not convinced, but I probably don’t have to be).