Well, no life form has achieved what Bostrom calls a decisive strategic advantage. Instead, they live their separate lives in various environmental niches.
One way to look at it is this: Suppose a dominant AGI emerged which was largely running the planet and expanding into the galaxy.
Would it then be impossible to engineer another AGI which survived modestly in some niche or flew off at near the speed of light in a new direction? No.
For the first AGI to be the only AGI, all other AGI development would have to cease without such “niche AGIs” ever being created.
An AGI could be extremely unobtrusive for tens of thousands of years at a time, and even be engaging in some form of self-improvement or replication.
“Sterilizing” matter of all of the “niche AGIs” it contains could be quite an involved process.
If you are trying to sterilize, then the capability level you have to screen for is not only AGI but also replicators, which can mobilize lots of resources within a niche.
For the first AGI to be the only AGI, all other AGI development would have to cease without such “niche AGIs” ever being created.
That AGI does not need to stay the only one to solidly stay in power. Since it has been playing the game for longer, it would be reasonable for it to be able to keep tabs on other intelligent entities, and only interfere with their development if they became too powerful. You can still have other entities doing their own thing; there just has to be a predictable ceiling on how much power they can acquire. Indeed, that is the idea behind FAI programming: have the FAI solve some fundamental problems of society, but still leave a society composed of plenty of other intelligences.
This would be made easier if reality is virtualized (i.e. if the singleton AI handles building and maintaining computronium infrastructure, and the rest of society runs as programs using some of the resources it provides); you don’t need to monitor every piece of matter for what computations it might carry out, if you’ve limited how much computation power you give to specific entities, and prevented them from direct write access to physical reality, to begin with.
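To make that concrete, here is a minimal toy sketch in Python (every class and name is hypothetical, invented purely to illustrate the quota idea, not any real system): the singleton owns the substrate and hands out bounded compute budgets, and the only interface an entity gets is “run a step”, so there is no call that touches physical reality at all.

```python
class Entity:
    """A sandboxed intelligence: it only sees the compute it is granted."""
    def __init__(self, name, budget):
        self.name = name
        self.budget = budget      # total compute units the singleton allows
        self.used = 0

    def step(self, cost=1):
        """Attempt one unit of computation; fails once the ceiling is hit."""
        if self.used + cost > self.budget:
            return False          # predictable ceiling on acquirable power
        self.used += cost
        return True


class Singleton:
    """Owns the computronium; entities never get direct write access to it."""
    def __init__(self, total_compute):
        self.total_compute = total_compute
        self.entities = []

    def admit(self, name, budget):
        # Grant at most what remains of the substrate.
        allocated = sum(e.budget for e in self.entities)
        entity = Entity(name, min(budget, self.total_compute - allocated))
        self.entities.append(entity)
        return entity

    def run_round(self):
        # The singleton schedules everyone; it never inspects *what*
        # they compute, only *how much*.
        return {e.name: e.step() for e in self.entities}


if __name__ == "__main__":
    world = Singleton(total_compute=100)
    world.admit("society A", budget=3)
    world.admit("society B", budget=1)
    for _ in range(3):
        print(world.run_round())
    # society B hits its ceiling after one step; A runs until its budget is spent.
```

The point of the sketch is only that the ceiling is enforced by which interfaces exist, not by monitoring what the entities compute.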
In the end, I think eventual decisive strategic advantage for a single AI is extremely likely; it's certainly a stable solution, it might happen due to initial timing, and even if it doesn't happen right then, it can still happen later. It's far from clear any other arrangement would be similarly stable over the extremely long time horizons of relevance here (which are the same as those for continued existence of intelligences derived from our civilization; in the presence of superintelligent AGIs, likely billions of years).
In fact, the most likely alternative to me is that humanity falls into some other existential catastrophe that prevents us from developing AGI at all.
Well, no life form has achieved what Bostrom calls a decisive strategic advantage. Instead, they live their separate lives in various environmental niches.
Ants are probably a good example of how organisational intelligence (?) can be an advantage.
According to the wiki, “Ants thrive in most ecosystems and may form 15–25% of the terrestrial animal biomass.” See also the google answer, wiki table, or stackexchange.
Although we have to think carefully: apex predators do not usually form a large biomass, so it could be more complicated to define the success of a life form.
The problem for humanity is not only a global replacer, something which erases all other lifeforms. It could be enough for something to replace us in our niche, something which globally (from life's viewpoint) means nothing.
And we don't need to be totally erased to meet a huge disaster. A decline of the population to several millions or thousands… (pets or AI)… is also unwanted.
We are afraid not of a decisive strategic advantage over ants, but over humans.