Do you think an AI project will obtain a decisive strategic advantage? Why/Why not?
Probably not: some nerdy superintelligent AI systems will emerge, but humans will try their utmost to shut them off early enough. Humankind will become very creative at socializing AGI. The highest risk is that a well-funded intelligence agency (e.g. the NSA) will be first. Its AI system could use TAO knowledge to kill off all competing projects. Being nerdily intelligent, it could even manipulate competing AI projects so that their AIs develop “mental” illnesses. Such an AI would need quite a long period of learning and trust-building before it could take over world dominion.
Bostrom's book is a wake-up call. In his Authors@Google book presentation (the final six minutes of Q&A) he claimed that only half a dozen scientists worldwide are working full time on the control problem. That is by far not enough to cope with future risks; more, and more effective, funding is needed.
Bostrom does not want to spoil his straightforward FOOM-doom scenario: he discusses only organizational means of monitoring, not technical ones.
I fully agree with Bostrom that too few people are working on the control problem. In 2007 Stephen Omohundro called for synergistic research on the control problem between psychologists, sociologists, and computer systems engineers. Today we have to conclude that progress has been limited.
We have many technical options at hand to prevent an AI project from obtaining a decisive strategic advantage:
Prevent a content overhang of powerful inside knowledge (taboos, secrecy, fighting organized internet crime).
Prevent control access by keeping life-supporting infrastructure independent of the internet.
Prevent hardware overhang by improving immunity to cyberattacks (strong cryptography).
Laws against backdoors in any system.
Transparency (of AI development and cyberattack monitoring).
Develop weak AI with a superintelligent capability for monitoring other AIs (“thought police”).
Develop a fixed social-conscience utility function.
“What is the best way to push it [the risk of doom] down?” was Bostrom's last sentence at his book presentation. This should be a further point of our discussion.
Well, no life form has achieved what Bostrom calls a decisive strategic advantage. Instead, life forms live their separate lives in various environmental niches.
One way to look at it is this: Suppose a dominant AGI emerged which was largely running the planet and expanding into the galaxy.
Would it then be impossible to engineer another AGI which survived modestly in some niche or flew off at near the speed of light in a new direction? No.
For the first AGI to be the only AGI, all other AGI development would have to cease without such “niche AGIs” ever being created.
An AGI could be extremely unobtrusive for tens of thousands of years at a time, and even be engaging in some form of self-improvement or replication.
“Sterilizing” matter of all the “niche AGIs” it contains could be quite an involved process. And if you are trying to sterilize, the level you have to target is not only AGI but also replicators, which can mobilize a lot of resources within a niche.
That AGI does not need to stay the only one to solidly stay in power. Since it has been playing the game for longer, it would be reasonable for it to be able to keep tabs on other intelligent entities, and only interfere with their development if they became too powerful. You can still have other entities doing their own thing, there just has to be a predictable ceiling on how much power they can acquire—indeed, that is the idea behind FAI programming: Have the FAI solve some fundamental problems of society, but still leave a society composed of plenty of other intelligences.
This would be made easier if reality is virtualized (i.e. if the singleton AI handles building and maintaining computronium infrastructure, and the rest of society runs as programs using some of the resources it provides); you don’t need to monitor every piece of matter for what computations it might carry out, if you’ve limited how much computation power you give to specific entities, and prevented them from direct write access to physical reality, to begin with.
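To make the resource-capping idea a bit more concrete, here is a minimal toy sketch in Python. Everything in it (the Singleton and Entity classes, their methods, the numbers) is my own invented illustration, not anything proposed in the book or the comment above: entities receive a fixed share of the singleton's compute budget, requests beyond that ceiling are simply capped, and the interface offers no handle for writing to physical reality.

```python
# Toy illustration only: a "singleton" hands out bounded compute budgets
# to virtualized entities and never exposes the physical substrate.
# All names and structure are invented for this sketch.

class Entity:
    def __init__(self, name, compute_budget):
        self.name = name
        self.budget = compute_budget  # total computation steps this entity may ever use

    def request_steps(self, steps):
        """Consume part of the budget; anything beyond the ceiling is refused."""
        granted = min(steps, self.budget)
        self.budget -= granted
        return granted


class Singleton:
    def __init__(self, total_compute):
        self.total_compute = total_compute
        self.entities = []

    def admit(self, name, share):
        """Grant an entity a fixed fraction of total compute, never the hardware itself."""
        entity = Entity(name, int(self.total_compute * share))
        self.entities.append(entity)
        return entity

    def run(self, entity, program, steps):
        granted = entity.request_steps(steps)
        # The program only ever receives an abstract step count; there is
        # no API here for touching physical infrastructure directly.
        return program(granted)


if __name__ == "__main__":
    s = Singleton(total_compute=10**9)
    alice = s.admit("alice", share=0.001)
    result = s.run(alice, lambda steps: f"ran {steps} steps", steps=2 * 10**6)
    print(result)  # capped at alice's budget, not at what she asked for
```

The point of the sketch is only the design choice from the comment: if power is bounded at allocation time, the singleton does not need to monitor every piece of matter for what it might be computing.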
In the end, I think eventual decisive strategic advantage for a single AI is extremely likely; it's certainly a stable solution, it might happen due to initial timing, and even if it doesn't happen right then, it can still happen later. It's far from clear that any other arrangement would be similarly stable over the extremely long time horizons of relevance here (which are the same as those for the continued existence of intelligences derived from our civilization; in the presence of superintelligent AGIs, likely billions of years). In fact, the most likely alternative to me is that humanity falls into some other existential catastrophe that prevents us from developing AGI at all.
Ants are probably a good example of how organizational intelligence (?) can be an advantage.
According to Wikipedia, “Ants thrive in most ecosystems and may form 15–25% of the terrestrial animal biomass.” See also the Google answer, the Wikipedia table, or Stack Exchange.
Although we have to think carefully: apex predators do not usually form a large biomass, so defining the success of a life form could be more complicated.
The problem for humanity is not only a global replacer, something that erases all other life forms. It would be enough for something to replace us in our niche, something which globally (from life's viewpoint) means nothing.
And we don't need to be totally erased to suffer a huge disaster. A decline of the population to a few million or thousand… (pets or AI)… is also unwanted.
We are afraid not of a decisive strategic advantage over ants, but over humans.