Yes, but then how does this risk differ from asteroid impacts or solar flares?
Asteroid impacts and solar flares are relatively ‘dumb’ risks, in that they can be defended against once you know how. They don’t constantly try to outsmart you.
Or from bioweapons or nanotechnology?
This question is a bit like asking “yes, I know bioweapons can be dangerous, but how does the risk of genetically engineered E. coli differ from the risk of bioweapons?”
Bioweapons and nanotechnology are special cases of “dangerous technologies that humans might come up with”. An AGI could potentially employ any and all of the dangerous technologies that humans, or other AGIs, might come up with.
Your comment assumes premises that I actually dispute. It doesn’t follow that an AGI which could employ all other existential risks is therefore the most dangerous of them: if such an AGI is no more likely than the other risks, then it doesn’t matter whether we are wiped out by one of those risks directly or by an AGI making use of one of them.