Unless somebody specifically pushes for a multipolar scenario, it's unlikely to arise spontaneously. Given our military-oriented psychology, any SI will first be considered for military purposes, including preventing others from achieving SI.
However, a smart group of people or organizations might purposefully multiply instances of near-ready SI in order to create competition, which could increase our chances of survival. Creating a social structure of SIs might make them socially aware and tolerant, which might include tolerance toward people.
Note that multipolar scenarios can arise well before we have the capability to implement an SI.
The standard Hansonian scenario starts with human-level “ems” (emulations). If from-scratch AI development turns out to be difficult, we may develop partial-uploading technology first, and a highly multipolar em scenario would be likely at that point. Of course, AI research would still be on the table in such a scenario, so it wouldn’t necessarily be multipolar for very long.
Why would military purposes preclude multiple parties having artificial intelligence? You seem to be assuming that whoever first achieves superintelligent machines will have a decisive enough advantage to prevent anyone else from obtaining the technology. But if such machines are achieved incrementally, that need not be so.
Do you think a multipolar outcome is more or less likely than a singleton scenario?