One of the unstated assumptions here is that an AGI has the power to kill us. I think it’s at least feasible that the first AGI that tries to eradicate humanity will lack the capacity to do so, and any discussion about what an omnipotent AGI would or would not do should take place in a universe where a non-omnipotent AGI has already tried and failed to eradicate humanity.
> any discussion about what an omnipotent AGI would or would not do should take place in a universe where a non-omnipotent AGI has already tried and failed to eradicate humanity
That is, many of the worlds with an omnipotent AGI already had a non-omnipotent AGI that tried and failed to eradicate humanity. Therefore, when discussing worlds with an omnipotent AGI, it’s relevant to bring up the possibility that there was a near-miss in those worlds in the past.
(But the discussion itself can take place in a world without any near-misses, or in a world without any AGIs, with the referents of that discussion being other worlds, or possible futures of that world.)