As we begin seeing robots/computers that are more human-like
It’s not at all clear that an AGI will be human-like, any more than humans are dog-like.
BTW, I’m curious to hear more about the mechanics of your scenario. The AGI hacks itself onto every (Internet-connected) computer in the world. Then what?
How do you fight the AGI past that point?
It controls the entire global communication flow. It can play different humans off against each other until it effectively rules the world. After it has total political control, it can move more and more resources to itself.
Maybe it would increase the chances of nuclear war, especially if the AGI could infect nuclear-warhead-related computer systems.
That’s not even needed. You just need to set up a bunch of convincing false-flag attacks that implicate Pakistan in attacking India.
A clever AI might provoke such conflicts to distract humans from fighting it.
Don’t underrate how well a smart AI can fight conflicts. Having no akrasia, no need for sleep, the ability to self-replicate its own mind, and the ability to plan very complex conflicts rationally are all valuable for fighting them.
For the AGI it’s even enough to get political control over a few countries. While the other countries’ economies collapse due to the lack of computers, the AGI could help the countries it controls overpower the others over the long run.
It’s not at all clear that an AGI will be human-like, any more than humans are dog-like.
Ok, bad wording on my part. I meant “more generally intelligent.”
How do you fight the AGI past that point?
I was imagining people would destroy their computers, except the ones not connected to the Internet. However, if the AGI is hiding itself, it could go a long way before people realized what was going on.
However, if the AGI is hiding itself, it could go a long way before people realized what was going on.
Exactly. On the one hand, the AGI tries not to let humans get wind of its plans. On the other hand, it’s going to produce distractions.
You have to remember how delusional some folks are. Imagine trying to convince the North Koreans that they have to destroy their computers because those computers are infested with an evil AI.
Even in the US nearly half of the population still believes in creationism. How many of them can be convinced that the evil government is trying to take away their computers to establish a dictatorship?
Before the government attempts to trash the computers, the AI sends an email to a conspiracy-theory website, where it starts revealing classified documents it acquired through hacking that show government misbehavior.
Then it sends an email to the same group saying that the US government is going to shut down all civilian computers because freedom of speech is too dangerous to the US government, and that the US government will be using the excuse that the computers are part of a Chinese botnet.
In our time you need computers to stock supermarket shelves with goods. Container ships need GPS and sea charts to navigate.
People start fighting each other. Some are likely to blame the people who wanted to trash the computers for the mess.
Even if you can imagine shutting off all computers in 2013, in 2033 most cars will be computers in which the AI can reside. A lot of military firepower will be in drones that the AI can control.
Even with what you describe, humans wouldn’t become extinct, barring other outcomes like really bad nuclear war or whatever.
However, since the AI wouldn’t be destroyed, it could bide its time. Maybe it could ally with some people and give them tech/power in exchange for carrying out its bidding. They could help build the robots, etc. that would be needed to actually wipe out humanity.
Obviously there’s a lot of conjunction here. I’m not claiming this scenario specifically is likely. But it helps to stimulate the imagination to work out an existence proof for the extinction risk from AGI.
Maybe it could ally with some people and give them tech/power in exchange for carrying out its bidding.
Some AIs already do this today. They outsource work they can’t do to Amazon’s Mechanical Turk, where humans get paid money to do tasks for the AI.
Other humans take on jobs on Rentacoder where they never see the person who’s hiring them.
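To make the Mechanical Turk point concrete, here’s a hypothetical sketch of how a program could post a paid task for humans through Amazon’s MTurk API (via boto3). The task text, prices, and function names are my own illustration, not anything from this thread:

```python
# Illustrative sketch: a program posting work for humans on Amazon
# Mechanical Turk. All task details and prices are made up.

# A minimal QuestionForm asking a worker to transcribe audio.
QUESTION_XML = """<QuestionForm xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2005-10-01/QuestionForm.xsd">
  <Question>
    <QuestionIdentifier>transcribe</QuestionIdentifier>
    <QuestionContent><Text>Transcribe the attached audio clip.</Text></QuestionContent>
    <AnswerSpecification><FreeTextAnswer/></AnswerSpecification>
  </Question>
</QuestionForm>"""

def build_hit_request(reward_usd="0.25"):
    """Assemble the parameters for MTurk's create_hit call."""
    return {
        "Title": "Transcribe a short audio clip",
        "Description": "Listen to a clip and type what you hear.",
        "Reward": reward_usd,                  # paid per assignment, in USD
        "MaxAssignments": 1,                   # how many workers do the task
        "LifetimeInSeconds": 3600,             # how long the task stays listed
        "AssignmentDurationInSeconds": 600,    # time a worker has to finish
        "Question": QUESTION_XML,
    }

def post_hit(request):
    """Submit the task; requires AWS credentials, so it isn't run here."""
    import boto3  # AWS SDK; only needed when actually posting
    client = boto3.client("mturk", region_name="us-east-1")
    return client.create_hit(**request)
```

The software never meets the workers; it just pays per completed assignment, which is the “humans doing tasks for the AI” dynamic described above.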
Even with what you describe, humans wouldn’t become extinct, barring other outcomes like really bad nuclear war or whatever.
Humans wouldn’t go extinct in a short time frame, but if the AGI has decades of time, it can increase its own power and decrease its dependence on humans.
Sooner or later humans wouldn’t be useful to the AGI anymore, and then they’d go extinct.
I think he meant more along the lines of computers/robots/non-super AIs becoming more powerful, IDK.
Interesting scenarios. Thanks!
Some really creative ideas, ChristianKl. :)