It would be evidence against AGI being an existential risk, but not strong evidence. The strength would depend upon lots of other factors of the scenario, such as:
1. How long have we coexisted with AGI?
2. Has AGI improved to superintelligence?
3. Have we proven that the AGIs can’t feasibly improve to superintelligence?
4. Were there any near misses before coexistence?
5. Were the AGIs all developed via planned alignment techniques?
A long timescale for (1) implies that coexistence is less likely to be merely a temporary state of affairs between AGI development and some AGI wrecking us.
If (2) doesn’t hold, then we still don’t know whether a superintelligent AGI will wreck us as soon as it develops. In conjunction with a long coexistence time, it might show that spontaneous progression to ASI is less likely than we thought, which would be a point against x-risk. If there is a superintelligence that we are coexisting with, substantial risk may remain unless it is legible to us; it might be legible by design, or perhaps via a (post-)human intelligence explosion.
If (3) held, that would be good evidence against one major branch of x-risk. It might be due to diminishing returns on computational power, other physical limits, pivotal acts, or aligned design. The first two would be much stronger evidence against x-risk in general, but even the latter two would still be evidence that limiting the risk is plausible.
In (4), zero near misses would be only weak evidence against x-risk, since we may have just got lucky. One or two near misses would be evidence in favour of x-risk, while many near misses that never escalated could be evidence against it, or could simply reflect anthropic selection: we can only observe histories in which none of them actually killed us (the toy simulation below illustrates this).
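To make the anthropic point concrete, here is a toy simulation in which every parameter is an invented assumption rather than an estimate: each world suffers some number of near misses, each near miss has a fixed chance of escalating to catastrophe, and we only ever get to observe the surviving worlds.

```python
import random

# Toy model of anthropic selection. P_ESCALATION is an arbitrary
# illustrative assumption, not an estimate of any real-world risk.
P_ESCALATION = 0.5   # assumed chance that a near miss turns fatal
N_WORLDS = 100_000   # simulated worlds per scenario

def world_survives(n_near_misses: int) -> bool:
    """Return True if a world survives all of its near misses."""
    return all(random.random() > P_ESCALATION for _ in range(n_near_misses))

for near_misses in [0, 1, 2, 5, 10]:
    survivors = sum(world_survives(near_misses) for _ in range(N_WORLDS))
    print(f"{near_misses:2d} near misses -> "
          f"{survivors / N_WORLDS:.4f} of worlds survive")
```

Under this model, worlds with many near misses almost never survive, so observers in a surviving world with a long history of near misses can conclude little about how dangerous each incident really was: their observation was conditioned on the lucky outcome.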
If the AGIs were all developed via planned alignment techniques in (5), that at least tells us alignment is possible, which reduces one branch of x-risk. It says little about what happens if an AGI is developed without those techniques, though, and risks from misaligned AGI might still lie in such a world’s future.
These are just a few “think about it for 5 minutes” complications of evaluating evidence for and against existential risk from artificial general intelligence. A lot of it depends upon what we learn from our experience with AGI in this hypothetical future, and we can’t deduce very much of it in advance.
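For what it’s worth, the mechanics of combining several weak observations can be sketched as Bayes factors. Every likelihood ratio below is a placeholder I’ve made up purely for illustration; the only point is that individually modest ratios multiply into a substantial overall update.

```python
# Minimal Bayes-factor sketch: odds form of Bayes' theorem.
# All likelihood ratios are invented placeholders, not estimates.
PRIOR_ODDS = 1.0  # assumed 1:1 prior odds on "AGI poses an x-risk"

# Ratios P(observation | x-risk) / P(observation | no x-risk);
# values below 1 count as evidence against x-risk.
observations = {
    "decades of peaceful coexistence": 0.5,
    "no spontaneous jump to superintelligence": 0.7,
    "proof that improvement to ASI is infeasible": 0.3,
    "zero near misses": 0.8,
    "all AGIs built via planned alignment": 0.9,
}

posterior_odds = PRIOR_ODDS
for name, ratio in observations.items():
    posterior_odds *= ratio
    print(f"after {name!r}: odds = {posterior_odds:.3f}")

print(f"posterior P(x-risk) = {posterior_odds / (1 + posterior_odds):.2f}")
```

Of course, the hard part is exactly the one this sketch assumes away: the considerations above are what make the individual likelihood ratios so difficult to pin down in advance.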