For the first AGI to be the only AGI, all other AGI development would have to cease without such “niche AGIs” ever being created.
That AGI does not need to stay the only one to stay solidly in power. Since it has been playing the game for longer, it could plausibly keep tabs on other intelligent entities and interfere with their development only if they became too powerful. You can still have other entities doing their own thing; there just has to be a predictable ceiling on how much power they can acquire. Indeed, that is the idea behind FAI programming: have the FAI solve some fundamental problems of society, but still leave a society composed of plenty of other intelligences.
This would be made easier if reality is virtualized (i.e. if the singleton AI handles building and maintaining the computronium infrastructure, and the rest of society runs as programs using some of the resources it provides). You don't need to monitor every piece of matter for what computations it might carry out if you've limited how much computing power you grant to specific entities, and denied them direct write access to physical reality, to begin with.
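To make that concrete, here is a toy sketch in Python of the arrangement described above: entities get a hard compute quota from the singleton, and any effect on the physical substrate has to go through a request the singleton can veto. The class names, quota numbers, and veto-by-default behaviour are my own illustrative assumptions, not anything specified in the discussion; this is a cartoon of the idea, not a proposal.

```python
# Toy illustration: a singleton-managed sandbox where each entity has a hard
# compute ceiling and can only touch physical reality via a mediated request.

class SandboxedEntity:
    def __init__(self, name, compute_quota):
        self.name = name
        self.compute_quota = compute_quota   # hard ceiling on granted cycles
        self.compute_used = 0

    def run(self, cycles):
        """Consume cycles against the quota; refuse anything over the ceiling."""
        if self.compute_used + cycles > self.compute_quota:
            return False                     # predictable ceiling on acquired power
        self.compute_used += cycles
        return True


class SingletonHost:
    """Owns the physical substrate; entities never write to it directly."""

    def __init__(self, total_compute):
        self.total_compute = total_compute
        self.entities = []                   # SandboxedEntity instances

    def admit(self, name, quota):
        # Grant only what the substrate can spare; no entity holds raw hardware.
        granted = min(quota, self.total_compute)
        self.total_compute -= granted
        entity = SandboxedEntity(name, granted)
        self.entities.append(entity)
        return entity

    def request_physical_action(self, entity, action):
        # All effects on physical reality pass through the host, which can veto them.
        print(f"{entity.name} requested {action!r}: vetoed by default")
        return False


host = SingletonHost(total_compute=1_000_000)
alice = host.admit("alice", quota=10_000)
print(alice.run(9_000))                              # True: within quota
print(alice.run(2_000))                              # False: would exceed the ceiling
host.request_physical_action(alice, "build nanofactory")
```

The point of the sketch is only that, under this kind of arrangement, "monitoring" reduces to accounting at the allocation layer rather than inspecting every piece of matter.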
In the end, I think eventual decisive strategic advantage for a single AI is extremely likely: it's certainly a stable solution, it might happen due to initial timing, and even if it doesn't happen right then, it can still happen later. It's far from clear that any other arrangement would be similarly stable over the extremely long time horizons of relevance here (which are the same as those for the continued existence of intelligences derived from our civilization; in the presence of superintelligent AGIs, likely billions of years).
In fact, the most likely alternative, to my mind, is that humanity falls into some other existential catastrophe that prevents us from developing AGI at all.