I disagree, for two reasons.

AI in conflict is still only an optimization process; it remains constrained by the physical realities of the problem.
Defense is a fundamentally harder problem than offense.
The simple illustration is geometry: defending a territory requires 360 degrees * 90 degrees of coverage (a full hemisphere), whereas the attacker gets to choose their vector.
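To put rough numbers on that geometric asymmetry, here is a minimal back-of-the-envelope sketch; the 5-degree attack corridor is an arbitrary assumption for illustration:

```python
import math

# Toy model of the coverage asymmetry: the defender must watch the entire
# hemisphere above its territory, while the attacker commits to a single
# narrow approach corridor (half-angle chosen arbitrarily here).

def cone_solid_angle(half_angle_rad: float) -> float:
    """Solid angle of a cone in steradians: 2*pi*(1 - cos(theta))."""
    return 2 * math.pi * (1 - math.cos(half_angle_rad))

hemisphere = 2 * math.pi                        # defender's burden: ~6.28 sr
corridor = cone_solid_angle(math.radians(5.0))  # attacker's 5-degree corridor

print(f"defender must cover  {hemisphere:.2f} sr")
print(f"attacker commits to  {corridor:.4f} sr")
print(f"coverage asymmetry: ~{hemisphere / corridor:.0f}x")
```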
That asymmetry drives a scenario where the security dilemma rules out non-deployment of military AI, and the fundamental difficulty of defense means the AIs will privilege offensive solutions to security problems. The customary response is to develop resilient offensive capability, such as second-strike forces... which leaves us with a huge surplus of distributed offensive power.
My confidence is low that catastrophic conflict can be averted in such a case.
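The deployment half of that scenario can be made concrete with a toy two-player game; the payoffs below are illustrative assumptions chosen to give the standard dilemma structure, not numbers from any source:

```python
# Minimal sketch of the deployment dilemma described above: two rival states
# each choose whether to deploy military AI. Payoffs (state_a, state_b) are
# illustrative assumptions with a prisoner's-dilemma structure.

PAYOFFS = {
    ("hold", "hold"):     ( 3,  3),  # mutual restraint: best joint outcome
    ("hold", "deploy"):   (-5,  5),  # unilateral restraint gets exploited
    ("deploy", "hold"):   ( 5, -5),
    ("deploy", "deploy"): ( 0,  0),  # arms race: worse than mutual restraint
}

# Whatever the rival does, "deploy" is the better reply for state A:
for rival_move in ("hold", "deploy"):
    best = max(("hold", "deploy"), key=lambda a: PAYOFFS[(a, rival_move)][0])
    print(f"rival plays {rival_move!r}: best reply is {best!r}")
# -> deployment dominates for both sides, so mutual non-deployment is
#    unstable even though (hold, hold) leaves everyone better off.
```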
The simple illustration is geometry: defending a territory requires 360 degrees * 90 degrees of coverage (a full hemisphere), whereas the attacker gets to choose their vector.
But attacking a territory requires long supply lines, whereas defenders are on their home turf.
But defending a territory requires constant readiness, whereas attackers can make a single focused effort on a surprise attack.
But attacking a territory requires mobility for every single weapons system, whereas defenders can plug their weapons straight into huge power plants or incorporate mountains into their armor.
But defending against violence requires you to keep targets in good repair, whereas attackers have entropy on their side.
But attackers have to break a Schelling point, thereby risking retribution from otherwise neutral third parties, whereas defenders are less likely to face a coalition.
But defenders have to make enough of their military capacity public for the public knowledge to serve as a deterrent, whereas attackers can keep much of their capabilities a secret until the attack begins.
But attackers have to leave their targets in an economically useful state and/or in an immediately-militarily-crippled state for a first strike to be profitable, whereas defenders can credibly precommit to purely destructive retaliation (see the payoff sketch below).
I could probably go on for a long time in this vein.
Overall I’d still say you’re more likely to be right than wrong, but I have no confidence in the accuracy of that.
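To make the retaliation point above concrete, here is a toy expected-value sketch of the attacker's first-strike decision; the payoff numbers are illustrative assumptions only, not drawn from any doctrine:

```python
# Toy model of the precommitment point: the attacker's expected payoff from
# a first strike under different defender postures. All numbers are
# illustrative assumptions.

SPOILS = 10          # value of a successful first strike to the attacker
RETALIATION = -100   # cost to the attacker if the defender strikes back

def attacker_payoff(strike: bool, p_second_strike: float) -> float:
    """Expected payoff of striking, given the probability the defender's
    retaliatory capability survives and is actually used."""
    if not strike:
        return 0.0
    return SPOILS + p_second_strike * RETALIATION

for p in (0.0, 0.05, 0.2, 0.9):
    ev = attacker_payoff(True, p)
    choice = "strike" if ev > 0 else "hold"
    print(f"P(second strike)={p:.2f}: EV={ev:+.1f} -> {choice}")
# -> even a modest chance of assured retaliation flips the decision to hold,
#    which is why a credible precommitment to retaliation deters.
```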
None of these are hypotheticals, you realize. The prior has been established through a long and brutal process of trial and error.
Any given popular military authority can be read, but if you’d like a specialist in defense try Vauban. Since we are talking about AI, the most relevant (and quantitative) information is found in the work done on nuclear conflict; von Neumann did quite a bit of work aside from the bomb, including foundational analysis of what became Mutually Assured Destruction. Also of note would be Herman Kahn.
Defense is a fundamentally harder problem than offense.
What matters is not whether defense is “harder” than offense, but what AI is most effective at improving. One of the things AIs are expected to be good at is monitoring those “360 * 90 degrees” for early signs of impending attacks, and thus enabling appropriate responses. You can view this as an “offensive” solution since it might very well require some sort of “second strike” reaction in order to neuter the attack, but most people would nonetheless regard such a response as part of “defense”. And “a huge surplus of distributed offensive power” is of little or no consequence if the equilibrium is such that the “offensive” power can be easily countered.
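As a rough illustration of that monitoring claim, here is a toy detection model; the number of looks and the per-look probabilities are arbitrary assumptions:

```python
# Toy model of the monitoring argument above: the attacker picks one
# corridor, but the defender gets several independent "looks" at it during
# approach. Better sensing/fusion (the thing AI plausibly improves) raises
# the per-look detection probability. All numbers are arbitrary assumptions.

def p_detect(n_looks: int, p_per_look: float) -> float:
    """P(at least one of n independent looks flags the attacker)."""
    return 1.0 - (1.0 - p_per_look) ** n_looks

N_LOOKS = 10  # opportunities to observe the chosen attack vector
for p in (0.05, 0.20, 0.50):
    print(f"per-look p={p:.2f}: P(detect over {N_LOOKS} looks) = "
          f"{p_detect(N_LOOKS, p):.3f}")
# -> modest per-look gains compound sharply, which is why improved
#    surveillance can favor the defender despite the coverage asymmetry.
```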