What are concrete ways that an unboxed AI could take over the world? People seem to skip from “UFAI created” to “UFAI rules the world” without explaining how the one must cause the other. It’s not obvious to me that superhuman intelligence necessarily leads to superhuman power when constrained in material resources and allies.
Could someone sketch out a few example timelines of events for how a UFAI could take over the world?
If the AI can talk itself out of a box, then it has demonstrated that it can manipulate humans extremely well. Once it has internet access, it can commandeer resources to boost its computational power. It can analyze thousands of possible exploits to access “secure” systems in a fraction of a second, and failing that, it can use social engineering on humans to gain access instead. Gaining control over vast amounts of digital money and other capital would be trivial. This process compounds on itself until there is nothing else left over which to gain control.
That’s a possible avenue for world domination. I’m sure that there are others.
Worst-case scenario: can’t humans just abandon the internet altogether once they realize this is happening? Declare that only physical currency is valid, cut off all internet communications, and only communicate by means that the AI can’t access?
Of course, it should be easy for the AI to avoid notice for a long while, but once we get to “turn the universe into computronium to make paperclips” (or any other scheme that diverges drastically from business-as-usual), people will eventually catch on. There is an upper bound on the level of havoc the AI can wreak without people eventually noticing and resisting in the manner described above.
How exactly would the order to abandon the internet get out to everyone? There are almost no means of global communications that aren’t linked to the internet in some way.
Government orders the major internet service providers to shut down their services, presumably :) Not saying that that would necessarily be easy to coordinate, nor that the loss of the internet wouldn’t cripple the global economy. Just that it seems to be a different order of risk than an extinction event.
My intuition on the matter was that an AI would be limited in its scope of influence to digital networks, and that its access to physical resources, e.g. labs, factories and the like, would be contingent on persuading people to do things for it. But everyone here is so confident that UFAI --> doom that I was wondering if there was some obvious and likely successful method of seizing control of physical resources that everyone else already knew about and I had missed.
Have you read That Alien Message?
No, but I read it just now; thank you for the link. The example takeover strategy offered there was bribing a lab tech to assemble nanomachines (which I am guessing would then be used to facilitate some grey goo scenario, although that wasn’t explicitly stated). That particular strategy seems a bit far-fetched, since nanomachines don’t exist yet and we thus don’t know their capabilities. However, I can see how something similar with an engineered pandemic would be relatively easy to carry out, assuming the ability to fake access to digital currency (likely) and the existence of sufficiently avaricious and gullible lab techs to bribe (possible).
I was thinking in terms of “how could an AI rule humanity indefinitely” rather than “how could an AI wipe out most of humanity quickly.” Oops. The second does seem like an easier task.