I think you raise a good point about offense-defense balance predictions. There is an equilibrium in the effort spent on defense, and it reacts when offense along some particular dimension becomes easier. So long as there is free energy the defender could spend to bolster their defense while remaining energy-positive or neutral, and the defender has the affordances (time, intelligence, optimization pressure, etc.) to make the change, you should predict that the defender will rebalance the equation.
That’s one way things work out, and it happens more often than naive extrapolation predicts, because the defense is often novel in some way, changing the game along a new dimension.
On the other hand, there are different equilibria that adversarial dynamics can fall into besides rebalancing. Let’s look at some examples.
GAN example
A very clean example is Generative Adversarial Networks (GANs). Using GANs lets us strip away many of the details and look at the patterns that emerge from the math. GANs are inherently unstable because they have three equilibrium states: a dynamic fluctuating balance (the goal of the developer, and the situation described in your post), attacker dominates (the generator, in GAN terms), and defender dominates (the discriminator). Any time the system gets too far from the central equilibrium of dynamic balance, it falls toward the nearer of the other two states, and then stays there, without hope of recovery.
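To make the instability concrete, here is a minimal sketch in Python. It is not a real GAN; it is simultaneous gradient play on the simplest minimax objective, f(x, y) = xy, whose only balanced point is the origin, and all parameter values are illustrative. Start the players anywhere off the balance point and the simultaneous updates spiral steadily away from it.

```python
import numpy as np

# Toy adversarial dynamics: attacker x tries to minimize f(x, y) = x * y,
# defender y tries to maximize it. The only balanced equilibrium is x = y = 0.
# Simultaneous gradient steps orbit outward from that balance point, which
# is the miniature version of why GAN training is unstable.

lr = 0.1          # step size (illustrative)
x, y = 0.5, 0.5   # start near, but not at, the equilibrium

for step in range(101):
    if step % 25 == 0:
        print(f"step {step:3d}: distance from equilibrium = {np.hypot(x, y):.3f}")
    grad_x, grad_y = y, x                    # df/dx = y, df/dy = x
    x, y = x - lr * grad_x, y + lr * grad_y  # both players move at once

# The printed distance grows monotonically: once perturbed, the dynamic
# balance does not restore itself.
```

In a full GAN, the same departure from balance shows up as the generator collapsing or the discriminator winning outright, i.e. landing in one of the two dominance states and staying there.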
Ecosystem example
Another example I like to use for this situation is predator-prey relationships in ecosystems. The downside of this example is a disanalogy to competition between humans: the predators and prey involved have relatively little intelligence, so most of the optimization comes from evolutionary selection pressure. On the plus side, we have a very long history with lots and lots of examples, and these examples occur in complex multi-actor dynamic systems with existential stakes. So let’s take a look at one.
Ecosystem example: foxes and rabbits. Why do the foxes not simply eat all the rabbits? Well, there are multiple reasons. One is that the foxes depend on a supply of rabbits to have enough spare energy to successfully reproduce and raise young. As the rabbit supply dwindles, foxes starve or choose not to reproduce, and the fox population dwindles too. See: https://en.wikipedia.org/wiki/Lotka%E2%80%93Volterra_equations
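For reference, the standard Lotka-Volterra model from that link, with x the rabbit (prey) population, y the fox (predator) population, and all parameters positive:

```latex
\begin{aligned}
\frac{dx}{dt} &= \alpha x - \beta x y  && \text{(rabbits breed; foxes eat rabbits)} \\
\frac{dy}{dt} &= \delta x y - \gamma y && \text{(eaten rabbits fuel fox reproduction; foxes otherwise starve)}
\end{aligned}
```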
In practice, this isn’t a perfect model of the dynamics, because more complicated factors are almost always in play. Other agents are involved, interacting in weaker but still significant ways. Mice can also be a food source for the foxes, and to some extent compete with the rabbits for the same food. Viruses in the rabbit population spread more easily, and get more opportunity to adversarially optimize against rabbit immune systems, as the rabbit population increases, as sick rabbits are culled less, and as rabbit health declines from competition with other rabbits for food and territory. Nevertheless, you do see Red Queen races occur in long-standing predator-prey relationships, where the two species gradually one-up each other (via evolutionary selection pressure) on offense, then defense, then offense.
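As a sketch of the resulting dynamics, a crude forward-Euler integration of the idealized two-species model above (parameter values picked arbitrarily for illustration, not fit to any real ecosystem) shows the populations booming and crashing in offset cycles while neither is driven to zero; each crash relieves the pressure that caused it.

```python
# Crude forward-Euler integration of the Lotka-Volterra equations above.
# All parameter values are illustrative, not fit to any real ecosystem.
alpha, beta, delta, gamma = 1.0, 0.4, 0.1, 0.8
x, y = 10.0, 5.0            # initial rabbit and fox populations
dt, steps = 0.001, 100_000  # integrate out to t = 100

min_x = min_y = float("inf")
for _ in range(steps):
    dx = (alpha * x - beta * x * y) * dt
    dy = (delta * x * y - gamma * y) * dt
    x, y = x + dx, y + dy
    min_x, min_y = min(min_x, x), min(min_y, y)

print(f"final: rabbits={x:.2f}, foxes={y:.2f}")
print(f"cycle lows: rabbits={min_x:.2f}, foxes={min_y:.2f}")
# Both lows stay above zero: in the idealized model the populations
# oscillate indefinitely rather than collapsing to extinction.
```

The extinction-free cycling is a property of the idealized model; real ecosystems have finite, noisy populations, which is where the outcomes below come in.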
The other thing you see happen is that the two species stop interacting: one is locally extinguished so that their territories no longer overlap, or one or both species go completely extinct.
Military example
A closer analogy to the dynamics of future human conflict is… past human conflict. Before there were militaries, there were inter-tribal conflicts. Often there were similar armaments and fighting strengths on both sides, and the conflicts had no decisive winner. Instead they would drag on across many generations, waxing and waning with the pressures of resources, populations, and territory.
Things changed with the advent of agriculture and standing armies. War brought new dynamics, with winners sometimes exacting thorough elimination of losers. War has its own set of equations. When one human group launches a coordinated attack against a weaker one with the intent of exterminating it, we call this genocide. Relatively rarely do we see one ethnic group entirely exterminate another, because the dominant group usually keeps at least a few of the defeated as slaves and breeds with them. Humanity’s history is pretty grim when you dig into the details of the many conflicts we’ve recorded.
Current concern: AI
The concern currently at hand is AI. AI is accelerating, and promises to also accelerate other technologies, such as biotech, which offer offensive potential. I am concerned about the offense-defense balance the trends suggest, since the affordances of modern humanity far exceed those of ancient humanity. I expect it to become possible in the next few years for an AI to produce a plan for a small group of humans to follow, instructing them in covertly gathering supplies, building equipment, and carrying out lab procedures to produce a potent bioweapon. This could happen because the humans were omnicidal, or because the AI deceived them about what the results of the plan would be. Humanity may get no chance to adapt defensively. We may just go the way of the dodo and the passenger pigeon. The new enemy may simply render us extinct.
Conclusion
We should expect the pattern of dynamic balance between adversaries to be maintained when the offense-defense balance changes slowly relative to the population cycles and adaptation rates of the groups involved. When you anticipate a large, rapid shift in the offense-defense balance, you should expect the fragile equilibrium to break and one group or the other to dominate. The anticipated trends of AI power are exactly the sort of rapid shift that suggests a risk of extinction.
Notably, in the ecosystem example, while the populations are constantly fluctuating, it’s actually pretty difficult to generate a result that ends in one species’s total extinction/genocide, so there is a global stability/equilibrium in the offense/defense balance, even if it’s locally unstable.
Yes, the environments / habitats tend to be relatively large / complex / inaccessible to the agents involved. This allows for hiding, and for isolated niches. If the environment were smaller, or the agents had greater affordances / powers relative to their environments, then we’d expect outcomes to be less intermediate, more extreme.
You can see this in the microhabitats of sealed jars with plants and insects inside. I find it fascinating to watch timelapses of such mini-biospheres play out. Local extinction events are common in such small, closed systems.