I think that there could be other ways to escape this alternative. In fact, I wrote a list of possible “global solutions” (e.g. ban AI, take over the world, create many AIs) here.
Some possible ideas (not necessarily good ones) are:
Use the first human upload as an effective AI police force that prevents the creation of any other AI.
Use other forms of narrow AI to take over the world and create an effective AI police force capable of finding and stopping unauthorised AI research.
Drexler’s CAIS (Comprehensive AI Services).
Something like Christiano’s approach: a group of people augmented by narrow AI forms a “human-AI-Oracle” and solves philosophy.
Active AI boxing as a commercial service.
Human augmentation.
Most of these ideas center on gaining high-level real-world capabilities by combining limited AI with something powerful in the outside world (humans, data, nuclear power, market forces, an active box), and then using those combined capabilities to prevent the creation of really dangerous AI.
None of these ideas seem especially promising even for achieving temporary power over the world (sufficient for preventing the creation of other AIs).
It seems even harder to achieve a long-term stable and safe world environment, in which we can take our time to solve the remaining philosophical and AI safety problems and eventually realize the full potential value of the universe.
Some of them (using other forms of narrow AI to take over the world, Christiano’s approach) seem to require solving something like decision theory or metaphilosophy anyway to ensure safety.