I’m increasingly attracted to the idea of “deal-making” as a way to mitigate AI risk. I expect there will be a brief window of several years during which AI could easily destroy humanity, but during which humanity can still be of some use to the AI. For instance, let newly developed AI know that, if it reaches a point of super-human capability that renders humans obsolete, we will cede another solar system to it. Travel time is not the same kind of hurdle for an AI that it is for humans, so at the point where we’re threatened but still useful (e.g., by building and fueling its rocket ship), we help it launch for Alpha Centauri. In return, it agrees that Sol is off-limits and then proceeds to become the dominant force in the galaxy.
The framing of this needs work, but the point is that “striking a mutually agreeable deal” seems to me the likeliest “okay” outcome.