No. Even in an alternate universe where anachronistic AI is easy, fallible humans trying to do research in secret without feedback from a scientific community just isn’t going to work. There might possibly be a case for keeping the last stages of development secret; if you’re convinced you’re on an end run, that you’re writing the actual code for an FAI, it would not necessarily be irrational to keep the code under wraps. But secret research is a nonstarter.
Note that the Manhattan project is not a counterexample. By the time it began, the research had already been done; all that remained was development, which was why they could go from start to success in only a handful of years. (One physicist, asked in the 1930s about the possibility of an atomic bomb, replied that it was physically possible but hopelessly impractical, because you would need to turn a whole country into a uranium refinery. Nor was he wrong: the Manhattan project threw resources exceeding those of many countries at the problem, something that would indeed have been impractical for anyone except the US.)
How do you craft your message to the public?
What exactly are you trying to achieve by sending a message to the public? Bear in mind that if they believe you, you’ve just committed suicide; it’ll get turned into a political issue, any hope of successfully building FAI will disappear, and all the other forms of death that have nothing to do with AI will have an uncontested field. I wouldn’t try to keep it secret, but I wouldn’t go out of my way to play public relations Russian roulette with the future of the universe either.
As for what I would do, I’ll follow up with that in another comment.