[Question] Is there a known method to find others who came across the same potential infohazard without spoiling it to the public?
Say (in the extreme case) you are Einstein at the moment he realized that E=mc², and you think far enough ahead to imagine the nuclear bomb. You don’t know whether it is possible to build, but you decide to keep your work secret. As the field of physics advances, others will come across the same insight. How can these isolated, silent individuals find each other without going public? Is there some kind of Schelling point they can construct without knowing of each other?
Edit: Just to be clear and to avoid confusion: I don’t have any such potentially dangerous insight. I was thinking more generally about AI policy. Advocating for an agreement that no one creates an AI with property X would just be an invitation to try building an AI with property X, just as you wouldn’t read aloud a list of bad words in preschool. That got me thinking that there should be a way to check with others before publishing.