Bruce Schneier beats people over the head with the notion: DON’T DEFEND AGAINST MOVIE PLOTS! The “AI takes over the world” plot is influencing a lot of people’s thinking. An unfriendly AGI, despite its potential power, may well have huge blind spots; mind design space is big!
I have not yet watched a movie where humans are casually obliterated by a superior force, be it an AGI or a technologically superior alien species. At least some of the humans always seem to have a fighting chance. The odds are overwhelming, of course, but the enemy always has a blind spot that can be exploited. You list some of them here. They are just the kind of thing McKay deploys successfully against advanced nanotechnology. Different shows naturally give the AI different exploitable weaknesses. For the sake of the story, such AIs are almost always completely blind to most of the obvious weaknesses of humanity.
The whole ‘overcome a superior enemy by playing to your strengths and exploiting their weakness’ makes for great viewing, but outside of the movies it is far less likely to play a part. The chance of creating a uFAI that is powerful enough to be a threat and launch some kind of attack, and yet not be able to wipe out humans, is negligible outside of fiction. Chimpanzees do not prevail over a civilisation with nuclear weapons. And no, the fact that they can beat us in unarmed close combat does not matter. They just die.
Yes, this is movie-plot-ish-thinking in the sense that I’m proposing that superintelligences can be both dangerous and defeatable/controllable/mitigatable. I’m as prone to falling into the standard human fallacies as the next person.
However, the notion that “avoid strength, attack weakness” is primarily a movie-plot-ism seems dubious to me.
Here is a more concrete prophecy, which I hope will help us communicate better:
Humans will perform software experiments trying to harness badly-understood technologies (ecosystems of self-modifying software agents, say). There will be some (epsilon) danger of paperclipping in this process. Humans will take precautions (lots of people have ideas for precautions that we could take). It is rational for them to take precautions, AND the precautions do not completely eliminate the chance of paperclipping, AND it is rational for them to forge ahead with the experiments despite the danger. During these experiments, people will gradually learn how the badly-understood technologies work, and transform them into much safer (and often much more effective) technologies.
However, the notion that “avoid strength, attack weakness” is primarily a movie-plot-ism seems dubious to me.
That certainly would be dubious. ‘Avoid strength, attack weakness’ is right behind ‘be a whole heap stronger’ as far as obvious universal strategies go.
Humans will perform software experiments trying to harness badly-understood technologies (ecosystems of self-modifying software agents, say). There will be some (epsilon) danger of paperclipping in this process. Humans will take precautions (lots of people have ideas for precautions that we could take). It is rational for them to take precautions, AND the precautions do not completely eliminate the chance of paperclipping, AND it is rational for them to forge ahead with the experiments despite the danger. During these experiments, people will gradually learn how the badly-understood technologies work, and transform them into much safer (and often much more effective) technologies.
If there are ways to make it possible to experiment, make small mistakes, and minimise the risk of catastrophe, then I am all in favour of using them. Working out which experiments are good ones to do, so that people can learn from them, and which ones will make everything dead, is a non-trivial task that I’m quite glad to leave to someone else. Given that I suspect both caution and courage lead to an unfortunately high probability of extinction, I don’t envy them the responsibility.
AND it is rational for them to forge ahead with the experiments despite the danger.
Possibly. You can’t reach that conclusion without knowing the epsilon in question and the alternatives to such experimentation. But there are times when it is rational to go ahead despite the danger.
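The dependence on epsilon can be made concrete with a toy expected-value comparison. This is only an illustrative sketch: all of the numbers (payoffs, probabilities, and the zero-value baseline for abstaining) are made-up assumptions, not estimates anyone in the thread has offered.

```python
# Toy sketch: when is it rational to forge ahead despite an epsilon
# chance of catastrophe? All payoff numbers below are illustrative
# assumptions, not real estimates.

def expected_value(p_catastrophe: float,
                   value_success: float,
                   value_catastrophe: float,
                   value_abstain: float) -> dict:
    """Compare forging ahead (success with prob 1 - epsilon,
    catastrophe with prob epsilon) against abstaining entirely."""
    ev_proceed = ((1 - p_catastrophe) * value_success
                  + p_catastrophe * value_catastrophe)
    return {
        "proceed": ev_proceed,
        "abstain": value_abstain,
        "rational_to_proceed": ev_proceed > value_abstain,
    }

# Tiny epsilon: proceeding can dominate even with a huge downside.
print(expected_value(p_catastrophe=1e-6, value_success=100.0,
                     value_catastrophe=-1e6, value_abstain=0.0))

# Larger epsilon: the same downside flips the decision.
print(expected_value(p_catastrophe=1e-2, value_success=100.0,
                     value_catastrophe=-1e6, value_abstain=0.0))
```

The point of the sketch is just that the same payoffs yield opposite conclusions at different epsilons, which is why the conclusion can’t be made without knowing the epsilon in question.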
The fate of most species is extinction. As the first intelligent agents, people can’t seriously expect our species to last for very long. Now that we have unleashed user-modifiable genetic materials on the planet, DNA’s days are surely numbered. Surely that’s a good thing. Today’s primitive and backwards biotechnology is a useless tangle of unmaintainable spaghetti code that leaves a trail of slime wherever it goes—who would want to preserve that?
You didn’t see the Hitchhiker’s Guide to the Galaxy film? ;)
:) Well, maybe substitute ‘some’ for ‘one’ in the next sentence.
http://en.wikipedia.org/wiki/Invasion_of_the_Body_Snatchers_(1978_film)
...apparently has everyone getting it fairly quickly at the hands of aliens.