This (whether and how far AI can advance offensive tech before humans learn to defend) shouldn’t be a huge crux, IMO. The offensive technology to make the planet unsuitable for current human civilization ALREADY exists—the defense so far has consisted of convincing people not to use it.
We just can’t learn much from human-human conflict, where at almost any scale, the victor hopes to have a hospitable environment remaining afterward. We might be able to extrapolate from human-rainforest or human-buffalo or human-rat conflict, which doesn’t really fit an offense/defense framing so much as resource competition vs. adaptability in shared environments.
The offensive technology to make the planet unsuitable for current human civilization ALREADY exists—the defense so far has consisted of convincing people not to use it.
Thanks! I think this is true in the limit (assuming you’re referring primarily to nukes). But I think offense-defense reasoning is still very relevant here: for example, to know when and how much to worry about AIs using nuclear technology to cause human extinction, you would want to ask under what circumstances humans can defend command and control of nuclear weapons from AIs that want to seize them.
We just can’t learn much from human-human conflict, where at almost any scale, the victor hopes to have a hospitable environment remaining afterward.
I agree that the calculus changes dramatically if you assume that the AI does not need or want the earth to remain habitable for humans. I also agree that, in the limit, interspecies interactions are plausibly a better model than human-human conflicts. But I don’t agree that either of these implies that offense-defense logic is totally irrelevant.
Humans, as incumbents, inherently occupy the position of defenders against misaligned AIs in these scenarios, at least if we’re aware of the conflict (which I grant we might not be). The worry is that AIs will try to gain control in certain ways. Offense-defense thinking is important if we ask questions like:
Can we predict how AIs might try to seize control? That is, what does control consist in from their perspective, and how might they achieve it given the parties’ starting positions?
If we have some model of how AIs try to seize control, what does that imply about humanity’s ability to defend itself?