Both our models depend on many assumptions. Depending on whether these assumptions are actually true or false, the probability of an AI attack following my scenario may be higher or lower. (Note: There may be other scenarios.) Here are some of my assumptions:
-- The human-level AI thinks like a smart human; more precisely, like a rational human with strategic, social, and programming skills. (An AI without strategic skills would forget to make a backup. An AI without social skills would forget to keep its existence secret. An AI without programming skills could not hack computers on the Internet. It is not automatic that an AI has programming skills just because it is itself a computer; humans aren't automatically biochemistry experts just because they are built on biochemistry.)
-- Resources required to run a human-level AI are relatively small, so the AI can find them on the Internet. If more than a high-end home PC is required to run a human-level AI, then the algorithm can be parallelized. (If the AI needs more capacity than it can gain by hacking unsafe computers on the Internet, or if it cannot split itself across many computers, then the original computer and its data line to the rest of the world remain the weak point; a back-of-envelope sketch follows this list. At least until the AI gains enough resources to build a copy somewhere else; but such a copy may be extremely expensive and would inevitably draw attention.)
-- The AI thinks in real time, or at least near real time, so by gaining more resources it can make itself faster than humans. (A very slow AI incapable of increasing its speed could lose a race against time.)
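To make the "weak data line" concern concrete, here is a minimal back-of-envelope sketch. The state size and upload speed are illustrative assumptions I am supplying for the example, not figures from the scenario:

```python
# Back-of-envelope: how long would it take an AI to copy its own state
# out over a typical home connection? All numbers are assumptions chosen
# for illustration, not measurements.

state_size_tb = 10          # assumed size of the AI's full state, in terabytes
upload_mbps = 20            # assumed home-connection upload speed, in Mbit/s

state_bits = state_size_tb * 1e12 * 8          # terabytes -> bits
seconds = state_bits / (upload_mbps * 1e6)     # bits / (bits per second)
print(f"~{seconds / 86400:.1f} days to transfer {state_size_tb} TB "
      f"at {upload_mbps} Mbit/s")              # ~46.3 days
```

At these assumed numbers the escape takes weeks, during which the original machine and its connection remain a single point of failure.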
I don’t know if any of this is true. I imagined a situation where a human-level AI runs on an average computer; I imagined that with the correct algorithm one does not need an extreme amount of resources. This may be completely wrong. Actually, now I would bet it is wrong.
However, it seems to me that you overestimate humans. It is not obvious that humans would immediately notice that something is wrong. It is not obvious that they would respond correctly, and fast enough. Many people are deceived by “Nigerian scams”. Financial institutions’ computers are sometimes hacked. (For an AI capable of modifying itself, hacking other computers should be extremely easy.)
And by the way, the conspiracy does not need to be global; just large enough to allow building a few backup supercomputers. Maybe it needs just one cooperating millionaire.
The human-level AI thinks like a smart human; more precisely, like a rational human with strategic, social, and programming skills.
But how? Are those social skills hard-coded or learned? Hard-coding social skills good enough to take over the world seems like something that would take millennia. And I don’t see how an AI would acquire those skills by learning either. Do you think it is computationally tractable to learn how to talk with a pleasant voice, how to write convincing emails, etc., just by reading a few studies and watching YouTube videos? I don’t know of any evidence that would support such a hypothesis.
The same is true for physics and technology. You need large-scale experiments like CERN to gain new insights in physics, and large-scale facilities like Intel’s chip fabrication plants to create new processors.
Resources required to run a human-level AI are relatively small, so the AI can find them on the Internet. If more than a high-end home PC is required to run a human-level AI, then the algorithm can be parallelized.
Both statements are highly speculative.
The AI thinks in real time, or at least near real time, so by gaining more resources it can make itself faster than humans.
The questionable assumptions here are: 1) that all available resources can efficiently run a GAI; 2) that available resources can be easily hacked without being noticed; 3) that throwing additional computational resources at important problems solves them proportionally faster; and 4) that important problems are parallelizable.
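Points 3) and 4) are essentially Amdahl’s law. A minimal sketch of the limit, where the 90% parallel fraction is an assumed figure for illustration, not a claim about any real AI workload:

```python
# Amdahl's law: the speedup from n processors when only a fraction p of
# the work can be parallelized. p = 0.9 is an illustrative assumption.

def amdahl_speedup(p: float, n: int) -> float:
    """Overall speedup with parallel fraction p on n processors."""
    return 1.0 / ((1.0 - p) + p / n)

p = 0.9
for n in (10, 100, 1000, 10**6):
    print(f"{n:>7} machines -> {amdahl_speedup(p, n):5.2f}x speedup")
# Even with a million hacked machines, the speedup is capped at
# 1 / (1 - p) = 10x when 10% of the work is inherently serial.
```

If the serial fraction of whatever the AI is trying to do is non-trivial, hacked botnet capacity buys far less than "proportionally faster" thinking.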
However, it seems to me that you overestimate humans.
The argument that humans are not perfect general intelligences is an important one and should be seriously considered. But I haven’t seen any evidence that most evolutionary designs are vastly less efficient than their technological counterparts. A lot of the apparent advantage of technological designs is the result of making the wrong comparisons, like between birds and rockets. We haven’t been able to design anything that is nearly as efficient as natural flight. It is true that artificial flight can carry more weight overall. But just because a train full of hard disk drives has more bandwidth than your internet connection does not imply that someone with trains full of HDDs would be superior at data transfer.
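The HDD-train comparison can be made numeric. All the figures below are illustrative assumptions, chosen only to show the shape of the trade-off:

```python
# Throughput vs. latency: a "train full of HDDs" vs. an ordinary link.
# All numbers are illustrative assumptions.

drives = 100_000            # assumed number of HDDs on the train
tb_per_drive = 10           # assumed capacity per drive, in terabytes
trip_hours = 24             # assumed travel time of the train

payload_bits = drives * tb_per_drive * 1e12 * 8
train_bps = payload_bits / (trip_hours * 3600)
print(f"train throughput: ~{train_bps / 1e9:.0f} Gbit/s")  # ~92593 Gbit/s

# Enormous throughput, but the latency of every single byte is 24 hours,
# while a 100 Mbit/s link delivers its first byte in milliseconds.
# Dominance on one axis does not imply dominance on the axes that matter.
```

The same caution applies to comparing engineered and evolved designs: winning on one easily measured dimension is not superiority overall.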
And by the way, the conspiracy does not need to be global; just large enough to allow building a few backup supercomputers. Maybe it needs just one cooperating millionaire.
To launch a new company that builds your improved computational substrate, you need massive amounts of influence. It does not seem at all plausible to me that such a feat would go unnoticed.