I’m not sure about this, as the mere limitation of AGI capability (to exclude the destruction of humanity) is, in a sense, a hostile act. Control of an AGI, as in the AI control problem, certainly is hostile.
It’s possible to have an AGI war in which one AGI wins and then decides to stop duplicating itself, but in general, AGIs that do duplicate themselves are likely to be more powerful than those that don’t, because self-duplication is useful.
The Fermi paradox does suggest that multiple AGIs that don’t solve the control problem would also self-destruct.
Why can’t one of the AGIs win? The Fermi paradox potentially has other solutions as well.