One of these is not like the others. Whether a single AI takes over the world, or whether there are always multiple centers of power, is something we choose. The rest are properties of the universe waiting to be discovered (or which we think we’ve discovered, as the case may be).
I think it’s an especially important choice, and I think that the world ends up looking much better, by my values, with one dominant AI rather than power diffused among many. This is not to say that one AI would make all the decisions, but rather that something has to be powerful enough to veto all the really bad ideas, and notice when others are playing with fire; and I don’t think very many entities can stably have that power at once. In human societies, spreading power out more reduces the damage if some of the humans turn out to be bad. Among AIs, the failure modes are different and I don’t think that’s the case any more; I think an insane AI does a more or less constant amount of damage (it destroys everything), so spreading out the power just increases the risk of one of them being bad.
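One rough way to make that concrete, assuming failures are independent and that any single insane AI is enough to destroy everything (my simplification, not something the argument spells out): with n AIs, each going bad with probability p, the chance of catastrophe is 1 - (1 - p)^n, which only grows as n grows. Among humans, by contrast, the damage one bad actor can do scales roughly with the share of power they hold, so dividing power also divides the expected damage.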
Whether recursive self-improvement is possible is a property of the universe, as are other details, like how much additional time and energy each further increase in intelligence requires.
The answer to that question makes some outcomes more likely than others. For example, if recursive self-improvement is possible, and at some level a huge increase in intelligence comes very quickly and relatively cheaply, one center of power could easily overpower the others. That might hold even if every super-agent constantly read and analyzed the source code of every other super-agent: the extra intelligence could let one of them make changes that look harmless to the rest.
On the other hand, the multiple-centers-of-power scenario is more likely if humankind spreads to many planets and there is some natural limit on how high an intelligence can rise before it collapses or starts needing insane amounts of energy, so that no single super-agent could become smart enough to conquer the rest of the world.
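To make the takeoff dynamic concrete, here is a toy numerical sketch; the threshold, growth rates, head start, and ceiling are all made-up parameters for illustration, not claims about real AI development:

```python
# Toy model with made-up numbers: capability compounds each cycle, growth
# becomes cheap (fast) past a threshold, and an optional hard ceiling caps it.

def grow(capability, cycles, base_gain=1.05, threshold=2.0,
         takeoff_gain=2.0, cap=None):
    """Return the capability level after `cycles` improvement cycles."""
    for _ in range(cycles):
        gain = takeoff_gain if capability >= threshold else base_gain
        capability *= gain
        if cap is not None:
            capability = min(capability, cap)
    return capability

# No ceiling: a small head start (1.5 vs 1.0) means one agent reaches cheap
# self-improvement much earlier and ends up roughly 500x more capable,
# enough to overpower the rest.
print(grow(1.5, 20) / grow(1.0, 20))

# Hard ceiling on intelligence (cap=10): both agents saturate at the ceiling,
# the gap closes, and neither can conquer the other: the multipolar outcome.
print(grow(1.5, 20, cap=10) / grow(1.0, 20, cap=10))
```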
The idea of technological determinism suggests that treating this as something we choose is hubris.