The Conjecture Internal Infohazard Policy seems like a good start!
As for why such standards aren’t well established within the rationalist AI safety community or publicized on LessWrong, I suspect there may be some unfortunate tension between truth-seeking as a value and censorship in the form of guarding against infohazards.
Thanks, though this is more of a process document than the kind of policy I’m looking for: a policy that helps answer the question “Should I treat this or that idea or other piece of information as infohazardous, and why?” The only exception is the five-item list of “presumptuous infohazards”. If I understand correctly, the actual policy is left to the judgement of project leads and the coordinator. Or perhaps it exists at Conjecture as a written artifact, but is itself considered private or secret.