Preemptively stopping any unambiguously hostile activities towards the future AGI, such as alignment research, and instead starting to work on aligning human interests with the AGI's
Alignment research is not necessarily hostile towards AGIs. AGIs also have to solve alignment to cooperate with each other and not destroy everything on Earth.
I’m not sure about this, as merely limiting an AGI’s capabilities (to rule out the destruction of humanity) is, in a sense, a hostile act. Controlling an AGI, as in the AI control problem, certainly is hostile.
The Fermi paradox does suggest that multiple AGIs that don’t solve the control problem would also self-destruct.
Why can’t one of the AGIs win? The Fermi paradox potentially has other solutions as well.
It’s possible to have an AGI war in which one AGI wins and then decides to stop duplicating itself, but generally AGIs that do duplicate themselves are likely to be more powerful than those that don’t, because self-duplication is useful.