Here’s one from a friend of mine. It’s not exactly an argument against AI risk, but it is an argument that the problem may be less urgent than it is traditionally presented to be.
There’s plenty of reason to believe that Moore’s Law will slow down in the near future.
Progress on AI algorithms has historically been rather slow.
AI programming is an extremely high-level cognitive task, and will likely be among the hardest things to get an AI to do.
These three things together suggest that there will be a ‘grace period’ between the development of general agents and the creation of a FOOM-capable AI.
Our best guess for the duration of this grace period is on the order of multiple decades.
During this time, general-but-dumb agents will be widely used for economic purposes.
These agents will have exactly the same perverse instantiation problems as a FOOM-capable AI, but on a much smaller scale. When they start trying to turn people into paperclips, the fallout will be limited by their lack of intelligence.
This will ensure that the problem is taken seriously, and these dumb agents will make it much easier to solve FAI-related problems by giving us an actual test bed for our ideas, one where they can’t go too badly wrong.
This is a plausible-but-not-guaranteed scenario for the future, which feels much less grim than the standard AI-risk narrative. You might be able to extend it into something more robust.
A dumb agent could also cause human extinction. “Kill all humans” is a computationally simpler task than “create a superintelligence,” and it may be simpler by many orders of magnitude.
I seriously doubt that. Plenty of humans want to kill everyone (or, at least, large groups of people). Very few succeed. These agents would be a good deal less capable.
Just imagine a Stuxnet-style computer virus that finds DNA synthesizers and prints a different virus on each of them, calculating the exact DNA mutations for hundreds of different flu strains.
You can’t manufacture new flu strains just by hacking a DNA synthesizer. And anyway, most non-intelligently designed flu strains would be non-viable or non-lethal.
I mean that the virus would be as intelligent as a human biologist, maybe an EM. That is enough for virus synthesis, but not for personal self-improvement.
There are parts that are different, but it seems worth mentioning that this is quite similar to certain forms of Bostrom’s second-guessing arguments, as discussed in Chapter 14 of Superintelligence and in Technological Revolutions: Ethics and Policy in the Dark:

A related type of argument is that we ought—rather callously—to welcome small and medium-scale catastrophes on grounds that they make us aware of our vulnerabilities and spur us into taking precautions that reduce the probability of an existential catastrophe. The idea is that a small or medium-scale catastrophe acts like an inoculation, challenging civilization with a relatively survivable form of a threat and stimulating an immune response that readies the world to deal with the existential variety of the threat.
I should mention that he does seem to be generally against attempting to manipulate people into doing the best thing.
Well that’s actually quite refreshing.