“Just think of nuclear espionage and of the kind of security that surrounds the development of military and intelligence hardware and software.”
The idea of the nuclear chain reaction was kept secret only because one man, Leo Szilard, realized the damage it could do and had his patent for the idea classified as a military secret. It wasn’t kept secret by default; if it weren’t for Szilard, it would probably have been published in physics journals like every other cool new idea about atoms, and the Nazis might well have gotten nukes before we did.
“If you’re building something that could overthrow all the power structures in the world, it would be surprising if nobody tried to spy on you (or worse; kill you, derail the project, steal the almost finished code, etc).”
Only if they believe you, which they almost certainly won’t. Even in the (unlikely) case that someone thought that an AI taking over the world was realistic, there’s still an additional burden of proof on top of that, because they’d also have to believe that SIAI is competent enough to have a decent shot at pulling it off, in a field where so many others have failed.
So you’re saying SIAI is deliberately appearing incompetent and far from its goal to avoid being attacked?
ETA: I realize you’re probably not saying it’s doing that already, but you certainly suggest that it’s going to be in SIAI’s best interests going forward.