I. Introduction
I watched a 40-minute video by Eric Drexler called Reframing Superintelligence. I take it that the content somewhat overlaps with his PDF book of the same name.
In it he expresses the opinion that, under the Comprehensive Artificial Intelligence Services (CAIS) model, high-level actors will not try to take over the world, because there is a serious risk of being stopped, with a bad outcome for the aggressor.
Dr. Drexler made the point that a subgoal of ‘take over the world’ is ‘overthrow the government of China’, and that the Chinese government would come after you if you made a “credible” attempt.
II. Eric Drexler’s CAIS scheme may fail to solve at least 4 distinct problems.
1. Development of AI may be asymmetric, undermining deterrence.
I think this is somewhat naive. Picture the world as a body of water, with a temperature that represents the degree of advancement in AI capabilities. Dr. Drexler’s AI services idea would raise the temperature of the whole world until it exceeds the boiling point (i.e., becomes superintelligent), while avoiding nucleation (an attempt to dominate the world). The premise seems to be that anyone with high-level access to CAIS could make such an attempt, but will be circumspect about doing so because others are just as advanced, and could defeat and punish them.
What this misses is what I’ll call symmetry breaking. Suppose that the United States makes a non-trivial advance in the hardware used to run neural networks, allowing most neural networks to be run more efficiently. This in turn allows research and development of neural nets much closer to human-brain scale.
Suppose the U.S. classifies this advance, retaining the fruits of the ensuing research for itself.
The symmetry between the United States and other countries is thus broken, first by hardware and then by software. The U.S. can then use CAIS with relative impunity as soon as its services attain superintelligence. Even if the CAIS software leaked across borders, the U.S. alone would have the hardware to run it.
Of course, the impunity is only relative: using CAIS to dominate the world is still risky for the dominator. But if the U.S. is feeling lucky, China won’t be able to stop it. Hence Dr. Drexler’s argument for the stability of CAIS fails in the presence of symmetry breakers.
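To make the worry concrete, here is a toy model of the window a symmetry breaker opens. Everything in it is my own assumption for illustration’s sake (the growth rates, the hardware multiplier, the threshold), not anything Dr. Drexler has proposed:

```python
# Toy model: two actors developing AI capability under CAIS.
# Both start equal; one gets a classified hardware advance that
# multiplies its effective rate of progress. All numbers are invented.

def years_to_threshold(yearly_multiplier, threshold=100.0):
    """Years of compounding progress until capability crosses the threshold."""
    capability, years = 1.0, 0
    while capability < threshold:
        capability *= yearly_multiplier
        years += 1
    return years

BASE_RATE = 1.5        # assumed yearly capability multiplier for everyone
HARDWARE_BOOST = 2.0   # assumed extra multiplier from the classified advance

us = years_to_threshold(BASE_RATE * HARDWARE_BOOST)
others = years_to_threshold(BASE_RATE)
print(f"U.S. crosses the threshold in year {us}; others in year {others}.")
print(f"Window of relative impunity: roughly {others - us} years.")
```

The numbers mean nothing; the shape of the result is the point. Any persistent multiplier, however obtained, opens a window in which the deterrence Dr. Drexler relies on does not bind.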
2. Even if development is symmetric, there may be a strong first-mover incentive to take over the world.
To reiterate, Dr. Drexler’s hope seems to be that if every major power has access to CAIS, mutual deterrence will prevent hostile use. If the symmetric state can be reached, this might be the case. But it might not. What if CAIS reported to whoever used it that there was a certainty of success in world domination for whoever acted first? The only way out might be the domination of the world by a hegemon (human or artificial) tasked with preventing the domination of the world by anyone else!
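To spell out why certainty of first-mover success breaks deterrence, here is a throwaway payoff sketch. The numbers are entirely my assumptions, chosen only to encode “whoever strikes first wins”:

```python
# Two symmetric CAIS powers each choose to STRIKE (attempt world
# domination) or WAIT. Row player's payoffs, assuming a first strike
# succeeds with certainty. All values are illustrative.

STRIKE, WAIT = "strike", "wait"

PAYOFF = {
    (STRIKE, STRIKE): -10,   # simultaneous attempts: mutual ruin
    (STRIKE, WAIT):   100,   # you dominate the world
    (WAIT,   STRIKE): -100,  # the other side dominates you
    (WAIT,   WAIT):     0,   # uneasy peace
}

for theirs in (STRIKE, WAIT):
    best = max((STRIKE, WAIT), key=lambda mine: PAYOFF[(mine, theirs)])
    print(f"If they {theirs}, your best reply is to {best}.")
# Striking is the best reply to either move, so the symmetric state
# is not an equilibrium once first-mover success is assured.
```

Under these (assumed) payoffs the peaceful cell is unstable, which is exactly what drives the hegemon-to-prevent-hegemons conclusion above.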
An alternative scenario is semi-stability, where a world-domination attempt by a first-mover may be thwarted by a second-mover, but the second-mover must at least temporarily dominate the world in order to do so (e.g., by scouring the world of the first-mover’s nanobots with the second-mover’s nanobots).
3. Even if there isn’t such an incentive, aggression may be hard to define, and thus deterrence may be difficult to implement.
What if what must be deterred is not only attempts to dominate the world, but also lesser disruptive goals? Disruptive goals can be classed by degree of disruption (from destruction down to minimal disturbance) and by scope of disruption (from universal down to personal). It is not clear which lines to draw, or where.
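A small sketch of the two axes, to show where the line-drawing problem bites. The intermediate category names (LOCAL, NATIONAL) and the example threshold are hypothetical, mine rather than Dr. Drexler’s:

```python
# Disruptive goals classified by degree and scope, as described above.
# A deterrence policy has to pick a bright line somewhere in this grid.

from enum import IntEnum

class Degree(IntEnum):
    MINIMAL_DISTURBANCE = 1
    MAJOR_DISRUPTION = 2
    DESTRUCTION = 3

class Scope(IntEnum):
    PERSONAL = 1
    LOCAL = 2       # hypothetical intermediate category
    NATIONAL = 3    # hypothetical intermediate category
    UNIVERSAL = 4

def must_deter(degree: Degree, scope: Scope) -> bool:
    # One arbitrary bright line: deter anything at least nationally
    # destructive. Why here and not one step lower? The model is silent.
    return degree >= Degree.DESTRUCTION and scope >= Scope.NATIONAL

print(must_deter(Degree.DESTRUCTION, Scope.UNIVERSAL))       # True: world takeover
print(must_deter(Degree.MAJOR_DISRUPTION, Scope.UNIVERSAL))  # False: global nuisance?
print(must_deter(Degree.DESTRUCTION, Scope.LOCAL))           # False: local ruin?
```

Each False above is defensible or indefensible depending on taste, which is the point: the grid offers no natural threshold.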
4. Even if deterrence can be implemented, the system may usher in a techno-oligarchy.
There is another argument against CAIS. It is not simply that it can be misused (Dr. Drexler has already acknowledged that, and the alternative AGI model can be misused too). Rather, CAIS subjects the world utterly to human will, and not just any human will, but the will of a select set, with higher permissions held by fewer people.
If full CAIS permissions are limited to a few, then it seems to me that people will be ruled by immortal potentates whom they will have no chance to overthrow, unless the potentates voluntarily give them that chance. I don’t know whether A.I.-U.S.A. would be livable, but I wouldn’t want to live in the A.I.-People’s Republic of China.
III. Conclusion
1. The alternative to techno-oligarchy may be a singleton, which is what was supposed to be avoided in the first place.
Because CAIS are comprehensive, a person holding full permissions will be able to attempt anything, including trying to turn the world into paperclips. Unless, that is, they are under constant surveillance by CAIS not their own, or their own CAIS refuses to obey, which implies an agent behind the disobedience.
Either most people won’t have full CAIS permissions, or the entire reachable universe will have to be wired with protective systems, lest al Qaeda send a von Neumann probe to Jupiter disguised as an innocent science project. At some point, this would imply deploying systems capable of making decisions on their own, or, in other words, AGI.
2. Not developing strong AI at all may be the only good option.