Ok, the information’s harmful. You need humans to touch that info anyway to do responsible risk mitigation. So now what?
I think one of the points is that you should now focus on selective rather than corrective or structural means to figure out who is nonetheless allowed to work on the basis of this information.
Calling something an infohazard, at least in my thinking, generally implies both that:
- Any attempts to devise galaxy-brained incentive structures that try to get large groups of people to nonetheless react in socially beneficial ways when they access this information are totally doomed and should be scrapped from the beginning.
- You absolutely should not give this information to anyone you doubt would handle it well; musings along the lines of “but maybe I can teach/convince them later on what the best way to go about this is” are generally wrong and should also be dismissed.
So what do you do if you nonetheless require that at least some people keep track of things? Well, as I said above, you use selective methods instead. More precisely, you carefully curate a very short list of responsible people who likely also share your meta-level views on how dangerous truths ought to be handled, and you do your absolute best to make sure the group never expands beyond those you have already vetted as capable of handling the situation properly.
At the meta level, I very much doubt that I am responsible enough to create and curate such a list for the most dangerous hazards. For example, I am very confident that I could not detect a foreign government spy inside my friend group with 100% reliability, because even the US intelligence community can’t do that… you need other mitigating controls instead.