After reading the current comments I’ve come up with this:
1) Restrict the AI’s sphere of influence to a specific geographical area. (Define the area in several different, redundant ways! You don’t want to confine the AI to “France” only to have it annex the rest of the world, or to define the area purely by GPS coordinates only to have it hack the satellites so they report different positions.)
2) Tell it not to make another AI. (This seems a bit vague, but I don’t know how to make it more specific. Maybe: all computing must come from one physical core location. That could prevent the AI from tricking someone into isolating a backup, which would effectively be a copy.)
3) Set an upper bound on the amount of physical space all AIs combined in that area can use.
4) As a safeguard, in case it does find a way around 2, have it incorporate the above rules, unaltered, into any new AI it makes. (I’ve sketched below how such a rule set might be written down.)
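Just to make the idea concrete, here is a rough, purely illustrative sketch (not a real safety mechanism) of how the four rules might be encoded as a single unalterable constraint set. All class names, field names, and example values below are hypothetical, invented only for illustration:

```python
# Hypothetical sketch of the four containment rules as an immutable rule set.
from dataclasses import dataclass
from typing import FrozenSet, Tuple

@dataclass(frozen=True)  # frozen = the rules cannot be altered once created (rule 4)
class ContainmentRules:
    # Rule 1: the allowed region, defined redundantly in several independent ways,
    # so spoofing one definition (e.g. hacked GPS) does not defeat the others.
    region_names: FrozenSet[str] = frozenset({"France"})
    region_gps_bounds: Tuple[Tuple[float, float], Tuple[float, float]] = (
        (41.0, -5.5), (51.5, 9.8))  # rough lat/lon bounding box, example values
    region_datacenter_ids: FrozenSet[str] = frozenset({"core-site-01"})

    # Rule 2: all computing must come from one physical core location,
    # and no copies or successor AIs may be created.
    single_core_site: str = "core-site-01"
    may_create_ai: bool = False

    # Rule 3: upper bound on the physical space all AIs combined may occupy.
    max_physical_space_m3: float = 500.0

    def inherit(self) -> "ContainmentRules":
        """Rule 4: any new AI (if rule 2 is somehow circumvented) must carry
        exactly these rules, unaltered."""
        return self  # same frozen, unmodified rule set is passed on
```

The point of the sketch is just the structure: the region is specified three independent ways, the limits are plain numbers, and the only way to hand rules to a successor is to hand over the identical frozen object.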