They could spy on AGI researchers and steal the code when it is almost ready (or ready, but not yet Certified Friendly) and launch their copy first, but without all the care and understanding required.
If they’re going to have that exact wrong level of cluefulness, why wouldn’t they already have a (much better-funded, much less careful) AGI project of their own?
As Vladimir says, it’s too early to start solving this problem, and if “things start moving rapidly” anytime soon, then AFAICS we’re just screwed, government involvement or no.
If they’re going to have that exact wrong level of cluefulness, why wouldn’t they already have a (much better-funded, much less careful) AGI project of their own?
Maybe they do, maybe they don’t. I won’t try to flesh out a specific scenario, because that’s not the right way to think about this, IMO. If it happens, it probably won’t be a movie plot scenario anyway (“Spies kidnap top AI research team and torture them until they make a few changes to the program, granting our Great Leader dominion over all”)...
What I’m interested in is security of AGI research in general. It would be extremely sad to see FAI theory go very far only to be derailed by (possibly well-intentioned) people who see AGI as a great source of power and want to have it “on their side” or whatever.