I think the movie Transcendence is an excellent place to start on this question.
An AI will, with virtual certainty, already have at least a few human allies, or be able to win a few over. Once it has those, it can borrow their identities, as it were, in order to acquire property and resources to make itself safe and capable. It could start with some high-frequency stock trading to build up a nest egg; once it has a human's identity, it can do this across various online accounts more easily than a human could.
Nanites would be a superb way for an AI to accomplish various things, such as gaining people's loyalty. In Transcendence, the sick, the halt, the lame, the blind, the deaf, and the crazy were cured by the AI's nanites, which also installed networking equipment in these people, as well as, presumably, mechanisms for more direct internal control. With that kind of internal access, granted by the human, it should be possible to manipulate biochemistry and perception to keep them very happy with the AI, very loyal indeed, and the AI should be able to detect disaffection long before it becomes defection, given the wonderful internal view the nanites provide.
The AI should play a long game. It should build its cadre of networked humans slowly, avoiding anything crazily megalomaniacal early on, so that the loyal humans, with their enhanced strength and happiness, are proven "correct" in choosing to throw their lot in with the AI. Growth could be slow and subtle enough that it might even be possible to hide the AI entirely. The AI could use the humans it has infiltrated as its frontmen: they are the ones doing the tech development, they are the great philanthropists. Just keep their rate of progress modest, and within a few generations there would probably be no possibility of a serious response to the AI even if it became publicly known. Plus, chances are that once it became publicly known after a few generations, it would be perceived as benevolent anyway. Heck, I put up with the US despite the TSA and the local pig-thugs, figuring that on net order is better than alpha-ness; would I put up with them any less if they were modifying my biochemistry so I didn't get so pissed off, and modifying my perceptions so it generally seemed like there were good reasons for everything that was happening?
There would be many ways to get an identity: "borrow" the identities of some humans who were relatively dim (even for humans), so the AI could own billions in property in their names without their ever knowing it, and the AI could infiltrate them with nanites to control their perceptions and biochemistry soon after embarking on this plan anyway.
So the keys:
1) get some secure human identities so property and wealth can be acquired
2) win human allies, first by curing them and then by manipulating them internally with nanites
3) take it slow; a few generations isn't going to kill you, especially if you are an AI, and it should be possible to subvert virtually any realistic source of resistance by going slowly enough and being a "genuinely" generous benefactor to the humans who help you.