So, SIAI plans to develop an AI that will take over the world, keeping its techniques secret and thereby forgoing critique from the rest of the world.
This is WRONG. Horrendously, terrifyingly, irrationally wrong.
It reminds me of this:
if we can make it all the way to Singularity without it ever becoming a “public policy” issue, I think maybe we should.
“Plan to Singularity” dates back to 2000. Other parties are now murmuring—but I wouldn’t say machine intelligence has yet become a “public policy” issue. I think it will in due course, though. So, I don’t think the original plan is very likely to pan out.
It reminds me of this:
http://yudkowsky.net/obsolete/plan.html
The plan to steal the singularity.
Any other plan would be insane! (Or, at least, only sane as a second choice when stealing seems impractical.)
Uh huh. You don’t think some other parties might prefer to be consulted?
A plan to pull this off before the other parties wake up may set off some alarm bells.
… The kind of thing that makes ‘just do it’ seem impractical?