What current defenses do you think we have against nukes or pandemics?
For instance, the lesson from Covid seems to be that a small group of humans is already enough to trigger a pandemic. If one intended to develop an especially lethal pathogen via gain-of-function research, the task already doesn’t seem particularly hard for researchers with time and resources, so we’d expect a superintelligence to have a much easier job.
If getting access to nukes via hacking seems too implausible, then maybe it’s easier to imagine triggering nuclear war by tricking one nuclear power into thinking it’s under attack by another. We’ve had close calls in the past merely due to bad sensors!
More generally, given all the various x-risks we already think about, I just don’t consider humanity in its current position to be particularly secure. And that’s our current position before adding an adversary who could optimize the situation towards our extinction.
Regarding the AGI’s own safety, you’d expect it not to do things that get it noticed until acting is sufficiently safe for it. So if it does get noticed, that’s presumably because it believes it can get away with it. I also think our civilization clearly lacks the ability to coordinate on, e.g., turning off the Internet, if that were necessary to stop an AGI once its computation was distributed across many machines.
Personal protective equipment and isolation can protect against infectious disease, at the very least. A more deadly and infectious virus than COVID would be taken far more seriously.
I think nuclear war is unlikely to wipe out humanity, since there are enough countries that are unlikely targets, and I don’t think all of the US would be wiped out anyway. I’m less sure about nuclear winter, but those in the community who’ve done research on it seem skeptical that it would wipe us out. Maybe it reduces the population enough for an AGI to target the rest of us or prevent us from rebuilding, though.
Some posts here:
https://forum.effectivealtruism.org/topics/nuclear-warfare-1
https://forum.effectivealtruism.org/topics/nuclear-winter
Yeah, I’m familiar with the arguments that neither pandemics nor nuclear war seem likely to be existential risks, i.e. ones that could cause human extinction; but I’d nonetheless expect such events to be damaging enough from the perspective of a nefarious actor trying to prevent resistance.
Ultimately this whole line of reasoning seems superfluous to me—it just seems so obvious that with sufficient cognitive power one can do ridiculous things—but for those who trip up on the suggested nanotech stuff, maybe a more palatable argument is: You know those other x-risks you’re already worrying about? A sufficiently intelligent antagonist can exacerbate those nigh-arbitrarily.