Sam Altman: “multiple AGIs in the world I think is better than one”. Strongly disagree. If there is a finite probability that an AGI decides to capriciously/whimsically/carelessly end humanity (and there are many technological modalities by which it could), then each additional independent instance compounds that probability toward near certainty.
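A quick way to see the compounding (a sketch, assuming $n$ independent AGIs, each with some fixed probability $p > 0$ per period of taking a civilization-ending action): the probability that none of them does so is $(1-p)^n$, so

$$P(\text{at least one does}) \;=\; 1 - (1-p)^n \;\longrightarrow\; 1 \quad \text{as } n \to \infty.$$

Under the independence assumption, adding instances only drives this toward certainty; it never lowers it.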
One of the main counterarguments here is that the existence of multiple AGIs allows them to compete with one another in ways that could benefit humanity. For example, they could police one another to keep the AGI community aligned with human interests. Of course, whether this would actually outweigh your concern in practice is highly uncertain and depends on a lot of implementation details.