If the aligned superintelligent AGI is known to all powerful parties (mostly governments), and some of those governments have, or believe they have, interests not aligned with the AGI, then those governments have a large incentive to go to war against the AGI. If the AGI is only moderately superhuman and we don't see intelligence-explosion-type effects (possibly because we have a prosaic AGI), this would be a very risky situation to be in.
I agree. The world could be at a higher risk of conflict just before or after the first ASI is created. Even with a fast takeoff, the risk remains in the run-up to the takeoff if it is obvious that an ASI is about to be created.
This scenario is described in quite a lot of detail in chapter 5 of Superintelligence:
“Given the extreme security implications of superintelligence, governments would likely seek to nationalize any project on their territory that they thought close to achieving a takeoff. A powerful state might also attempt to acquire projects located in other countries through espionage, theft, kidnapping, bribery, threats, military conquest, or any other available means.”