We could, in principle, decide that survival of humanity in its current form (being various shades of unlikely, depending on who you believe) is no longer a priority and focus on different goals that are still desirable in the face of likely extinction. For example:
See whether any credible MAD (mutually assured destruction) schemes are possible when an AGI is one of the players
Accept survival in a reduced capacity, e.g. being kept as a pet or as a battle-tested biological backup
Ensure that an AGI which kills us can at least do something interesting afterwards, i.e. that it is something smarter than a fixed-goal paperclip optimizer
Preemptively stop any activities that are unambiguously hostile towards the future AGI, such as alignment research, and start working on aligning human interests with the AGI's instead
These are just off the top of my head, and I'm sure there are many more available once the survival requirement is removed.
Alignment research is not necessarily hostile towards AGIs. AGIs also have to solve alignment in order to cooperate with each other and not destroy everything on Earth.
I'm not sure about this, as merely limiting an AGI's capability (to rule out the destruction of humanity) is, in a sense, a hostile act. Control of AGI, as in the AI control problem, certainly is hostile.
The Fermi paradox does suggest that multiple AGIs that don’t solve the control problem would also self-destruct.
Why can't one of the AGIs win? The Fermi paradox potentially has other solutions as well.
It's possible to have an AGI war in which one AGI wins and then decides to stop duplicating itself, but in general AGIs that do duplicate themselves are likely to be more powerful than those that don't, because self-duplication is useful.
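To make the self-duplication point concrete, here is a minimal toy sketch (not from the original discussion; the function name, growth rate, and duplication overhead are made-up assumptions) comparing the aggregate capability of a lineage that copies itself each round against one that doesn't:

```python
# Toy model with hypothetical numbers, purely illustrative.
# A "lineage" starts as one agent with capability 1.0. Each round every copy
# self-improves by `growth`; a duplicating lineage also doubles its copy count,
# paying a multiplicative `overhead` cost to per-copy capability.

def lineage_capability(duplicates: bool, rounds: int = 10,
                       growth: float = 1.10, overhead: float = 0.05) -> float:
    copies = 1
    per_copy = 1.0
    for _ in range(rounds):
        per_copy *= growth                  # each copy gets a bit more capable
        if duplicates:
            copies *= 2                     # every copy spawns one new copy
            per_copy *= (1.0 - overhead)    # duplication isn't free
    return copies * per_copy                # total capability of the lineage

print("non-duplicating:", round(lineage_capability(duplicates=False), 2))
print("self-duplicating:", round(lineage_capability(duplicates=True), 2))
# With these made-up parameters the duplicating lineage ends up orders of
# magnitude ahead; the per-round overhead would have to exceed 50%
# (cancelling the doubling) before not duplicating comes out on top.
```

This obviously abstracts away the "war" part of the scenario; it only illustrates why, all else being equal, self-duplication tends to be selected for.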