Probably not: some nerdy superintelligent AI systems will emerge, but humans will try their utmost to shut them off early enough. Humankind will become very creative in socializing AGI. The highest risk is that a well-funded intelligence agency (e.g. the NSA) will be first. Its AI system could make use of TAO knowledge to kill all competing projects. Being nerdily intelligent, it could even manipulate competing AI projects so that their AIs develop “mental” illnesses. This AI will need quite a long time of learning and trust-building before it could attain world dominion.
Bostrom's book is a wake-up call. In his Authors@Google book presentation (the final six minutes of Q&A), he claimed that only half a dozen scientists worldwide are working full time on the control problem. That is by far not enough to cope with future risks; more, and more effective, funding is needed.
Bostrom does not want to undermine his straightforward FOOM-doom scenario. He discusses only organizational means of monitoring, not technical ones.
I fully agree with Bostrom that too few people are working on the control problem. In 2007, Stephen Omohundro called for synergistic research on the control problem between psychologists, sociologists, and computer systems engineers. Today we have to conclude that progress is limited.
We have many technical options at hand to prevent an AI project from obtaining a decisive strategic advantage:
Prevent a content overhang of powerful insider knowledge (taboos, secrecy, fighting organized internet crime).
Prevent control access by keeping life-supporting infrastructures independent from the internet.
Prevent hardware overhang by improving immunity to cyberattacks (hard crypto).
Laws against backdoors in any system.
Transparency (AI development, cyberattack monitoring).
Develop weak AI with a superintelligent capability for monitoring other AIs (“thought police”); a minimal sketch follows this list.
Develop a fixed social-conscience utility function.
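To make the last two options a bit more concrete, here is a minimal, purely illustrative Python sketch (all names such as ProposedAction, conscience_score, and monitor are hypothetical, not from Bostrom or any existing system) of a weak monitoring AI that vetoes a stronger system's proposed actions against a fixed, read-only social-conscience utility function:

```python
# Illustrative sketch only: a "weak AI" gatekeeper screens another agent's
# proposed actions against a fixed, immutable conscience function.
from dataclasses import dataclass
from types import MappingProxyType


@dataclass(frozen=True)
class ProposedAction:
    description: str
    affects_infrastructure: bool   # e.g. power grid, hospitals
    self_modification: bool        # attempts to change its own goals/code


# Fixed "social conscience" weights, wrapped read-only so the monitored AI
# cannot renegotiate them at run time.
CONSCIENCE = MappingProxyType({
    "base_utility": 1.0,
    "affects_infrastructure": -100.0,
    "self_modification": -100.0,
})


def conscience_score(action: ProposedAction) -> float:
    """Return a utility score with large penalties for forbidden categories."""
    score = CONSCIENCE["base_utility"]
    if action.affects_infrastructure:
        score += CONSCIENCE["affects_infrastructure"]
    if action.self_modification:
        score += CONSCIENCE["self_modification"]
    return score


def monitor(action: ProposedAction) -> bool:
    """The weak monitoring AI: veto any action whose score falls below zero."""
    return conscience_score(action) >= 0.0


if __name__ == "__main__":
    ok = ProposedAction("summarize research papers", False, False)
    bad = ProposedAction("patch own reward function", False, True)
    print(monitor(ok))   # True  -> allowed
    print(monitor(bad))  # False -> vetoed
```

The only point of the sketch is that the conscience weights are frozen and sit outside the monitored system, so the stronger AI cannot renegotiate them; in practice the hard part is, of course, specifying such a function at all.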
“What is the best way to push it [the risk of doom] down?” was Bostrom's closing sentence at his book presentation. This should be a further point of our discussion.