You don’t see how an AGI would benefit from spreading itself in a distributed form to every computer on the planet, controlling and manipulating all online communications, encrypting the contents of hard drives and holding them hostage, and so on? You could have the AGI’s code running on every Internet-connected computer on the planet, which would make it virtually impossible to get rid of.
And even though we might be capable of shutting down the Internet today, at the cost of severe economic damage, I’m pretty sure that that’ll become less and less of a possibility as time goes on, especially if the AGI is also holding as hostage the contents of any hard drives without off-line backups. Add to that the fact that we’ve already had one case of a computer virus infecting the control computers in an off-line facility and, according to one report, delaying the nuclear program of a country by two years. Add to that the fact that even people’s normal phones are increasingly becoming smartphones, which can be hacked, and that simpler phones have already shown themselves to be vulnerable to being crashed by a well-crafted SMS. Let 20 more years pass, with us becoming more and more dependent on IT, and an AGI could probably hold all of humanity hostage; shutting down the entire Internet simply wouldn’t be an option.
And even if it could increase its intelligence further by making use of unsuitable and ineffective substrates, it would still be incapacitated, stuck in the machine. Without advanced nanotechnology you simply cannot grow exponentially or make use of recursive self-improvement beyond the software level.
This is nonsense. The AGI would control the vast majority of our communications networks. Once you can decide which messages get through and which ones don’t, having humans build whatever you want is relatively trivial. Besides, we already have early-stage self-replicating machinery today: you don’t need nanotech for that.
And even though we might be capable of shutting down the Internet today, at the cost of severe economic damage, I’m pretty sure that that’ll become less and less of a possibility as time goes on, especially if the AGI is also holding as hostage the contents of any hard drives without off-line backups.
I understand. But how do you differentiate this from the same scenario involving an army of human hackers? The AI will likely be very vulnerable if it runs on some supercomputer, and even more so if it runs in the cloud (just use an EMP). In contrast, an army of human hackers can’t be disrupted that easily and is an enemy you can’t pinpoint. You are portraying a particular scenario here, and I do not see it as a convincing argument for elevating risks from AI above other risks.
The AGI would control the vast majority of our communications networks. Once you can decide which messages get through and which ones don’t, having humans build whatever you want is relatively trivial.
It isn’t trivial. There is a strong interdependence of resources and manufacturers. The AI won’t be able to simply make some humans build a high-end factory to create computational substrate. People will ask questions and shortly after get suspicious. Remember, it won’t be able to coordinate a world conspiracy, because it hasn’t been able to self-improve to that point yet; it is still trying to acquire enough resources, which it has to do the hard way, without nanotech. You’d probably need a brain the size of the moon to effectively run and coordinate a whole world of irrational humans by intercepting their communications and altering them on the fly without anyone freaking out.
and even more so if it runs in the cloud (just use an EMP).
The point was that you can’t use an EMP if that means bringing down the whole human computer network.
It isn’t trivial. There is a strong interdependence of resources and manufacturers. The AI won’t be able to simply make some humans build a high-end factory to create computational substrate. People will ask questions and shortly after get suspicious.
Why would people need to get suspicious? If you are keeping tabs on all the communications in the world, you can make a killing on the market, even without delaying the orders from your competitors. You could fully legitimately raise enough money by trading to hire people to do everything you wanted. Nobody ever needs to notice that anything is amiss, especially not if you do it via enough shell corporations.
Of course, the AGI could also use more forceful means, though that’s by no means necessary. If the AGI revealed itself and the fact that it was holding all of humanity’s networked computers hostage, it could probably just flat-out tell the humans “do this or else”. Sure, not everyone would obey, but some would. Also, disrupt enough communications and manufacture enough chaos, and people will be too distracted and stressed to properly question forged orders. Social engineering is rather easy with humans, and desperate people are quite prone to wishful thinking.
Remember, it won’t be able to coordinate a world conspiracy, because it hasn’t been able to self-improve to that point yet; it is still trying to acquire enough resources, which it has to do the hard way, without nanotech.
This claim strikes me as bizarre. Why would you need nanotech to acquire more resources for self-improvement?
Some botnets have been reported to have around 350,000 members. Currently, the distributed computing project Folding@Home, with 290,000 active clients composed mostly of volunteer home PCs and PlayStations, can reach speeds in the 10^15 FLOPS range. Now say that an AGI gets developed 20 years from now. A relatively conservative estimate, which presumed that an AGI couldn’t hack into more computers than the best malware practitioners of today, that a personal computer would have a hundred times the computing power of today, and that an AGI required a minimum of 10^13 FLOPS to run, would suggest that an AGI could either increase its own computational capacity 12,000-fold or spawn 12,000 copies of itself.
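To make the arithmetic behind that estimate explicit, here is a small back-of-the-envelope sketch in Python; the figures are simply the rough assumptions stated above (botnet size, Folding@Home throughput, a hundredfold hardware improvement, 10^13 FLOPS to run an AGI), not measurements:

```python
# Back-of-the-envelope sketch of the estimate above. All figures are the
# rough assumptions from the text, not measured data.

folding_at_home_clients = 290_000      # active clients today
folding_at_home_flops = 1e15           # aggregate throughput today
flops_per_machine_today = folding_at_home_flops / folding_at_home_clients

botnet_size = 350_000                  # members of a large present-day botnet
hardware_improvement = 100             # assumed per-machine speedup over 20 years
agi_minimum_flops = 1e13               # assumed minimum to run one AGI instance

botnet_flops_in_20_years = botnet_size * flops_per_machine_today * hardware_improvement
factor = botnet_flops_in_20_years / agi_minimum_flops

print(f"capacity increase / number of copies: ~{factor:,.0f}")  # roughly 12,000
```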
Alternatively, if it wanted to avoid detection and didn’t want to do anything illegal or otherwise suspicious, it could just sign up on a site like oDesk and do lots of programming jobs for money, then rent all the computing capacity it needed until it was ready to unleash itself on the world. This is actually the more likely alternative, as unlike the botnet/hacking scheme it’s a virtually risk-free way to gain extra processing power.
Acquiring computational resources is easy, and it will only get easier as time goes on. Also, while upgrading your hardware is one way of self-improving, you seem to be completely ignoring the potential for software self-improvement. It’s not a given that the AGI would even need massive hardware. For instance:
Grötschel, an expert in optimization, observes that a benchmark production planning model solved using linear programming would have taken 82 years to solve in 1988, using the computers and the linear programming algorithms of the day. Fifteen years later – in 2003 – this same model could be solved in roughly 1 minute, an improvement by a factor of roughly 43 million. Of this, a factor of roughly 1,000 was due to increased processor speed, whereas a factor of roughly 43,000 was due to improvements in algorithms! Grötschel also cites an algorithmic improvement of roughly 30,000 for mixed integer programming between 1991 and 2008.
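To spell out the arithmetic in that quote, here is a trivial sketch that just multiplies the quoted factors together and checks them against the quoted runtimes:

```python
# The speedup factors quoted by Grötschel, multiplied out.
hardware_speedup = 1_000       # faster processors, 1988 -> 2003
algorithmic_speedup = 43_000   # better linear programming algorithms
print(hardware_speedup * algorithmic_speedup)   # 43,000,000 -- the ~43 million above

# Consistency check against the quoted runtimes: 82 years down to about 1 minute.
minutes_in_82_years = 82 * 365.25 * 24 * 60
print(round(minutes_in_82_years))               # ~43,100,000 minutes
```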
The point is that you people are presenting an idea that is an existential risk by definition. I claim that it might superficially appear to be the most dangerous of all risks, but that this is mostly a result of its vagueness.
If you say that there is the possibility of a superhuman intelligence taking over the world and all its devices in order to destroy humanity, then that is an existential risk by definition. I counter that I dispute some of the premises and the likelihood of the subsequent scenarios. So to make me update on the original idea, you would have to support your underlying premises rather than argue within already established frameworks that impose several presuppositions onto me.
Are you aware of what the most common EMPs are? Nukes. The computer that triggers the high-explosive lenses is already molten vapor by the time the chain reaction has begun expanding into a fireball.
And what kind of computer controls the EMP? Or is it hand-cranked?
What kind of computer indeed!
I used this very example to argue with Robin Hanson during the after-lecture Q&A (it should be in Parsons Part 2); it did not seem to help :)