If that insight is undermined by being communicated, then communicating it to the world immunizes the world against it. If that is a mechanism by which an AI-in-a-box could escape, then it needs to be communicated to every AI researcher.
Unless such “immunity” causes people to overestimate their level of protection against all the potential insights that are still unknown...
Don’t see why it would. We’d learn there was a vulnerability none of us had spotted, and close it; that would give us all reason to assume there are likely further vulnerabilities.