Mutually assured destruction is essentially a precommitment strategy: if you use nuclear weapons on me, I commit to destroying you and your allies, a downside larger than any gain achievable by striking first.
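To make the payoff structure concrete, here's a minimal sketch in Python with made-up numbers; none of the values are estimates of anything real, just placeholders for "some gain" and "a much larger loss":

```python
# Toy payoff model of MAD as precommitment; all numbers are illustrative.

FIRST_STRIKE_GAIN = 10    # hypothetical gain from a successful first strike
RETALIATION_COST = -1000  # cost of assured destruction if the victim retaliates

def first_strike_payoff(p_retaliate: float) -> float:
    """Expected payoff of striking first, given the probability that the
    victim actually follows through on its retaliation commitment."""
    return FIRST_STRIKE_GAIN + p_retaliate * RETALIATION_COST

# With a fully credible commitment, striking first is strictly worse than not:
assert first_strike_payoff(1.0) < 0   # 10 - 1000 = -990
```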
With this in mind, it’s not clear to me that it’d be wrong (in the decision-theoretic sense, not the moral one) to launch on a known-good missile warning. Timeless decision theory (TDT) says we shouldn’t treat actions differently in an actual world than in a simulated or abstracted one: if we don’t make that distinction, following through on a launch-on-warning commitment screens off counterfactual one-sided nuclear attacks, and that should ripple back through the causal graph to screen off all nuclear attacks (a world without a nuclear war in it is better along most dimensions than the alternative). It’s not a decision I’d enjoy making, but every increment of doubt about whether retaliation would actually happen increases the expected value of the unilateral option, and that’s something we really, really don’t want. Revenge needn’t enter into it.
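That last point falls out of the same toy model: the deterrent only binds while the attacker's credence in retaliation stays above a break-even threshold. Again, the numbers are purely illustrative:

```python
# Same illustrative numbers as above; repeated so this snippet runs on its own.
FIRST_STRIKE_GAIN = 10
RETALIATION_COST = -1000

# Striking first has positive expected payoff once the attacker's credence
# in retaliation drops below gain / |cost|:
break_even = FIRST_STRIKE_GAIN / -RETALIATION_COST   # 0.01 here

for p in (1.0, 0.5, 0.02, 0.005):
    ev = FIRST_STRIKE_GAIN + p * RETALIATION_COST
    print(f"P(retaliation) = {p:>5}: EV(first strike) = {ev:+.1f}")

# Every downward increment in P(retaliation) moves the first strike closer to
# positive expected value, which is exactly what the commitment exists to prevent.
```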
(This assumes a no-first-use strategy, which the USSR in Petrov’s time claimed to follow; the US maintained a more ambiguous policy that left open tactical nuclear options in response to conventional aggression, which can be modeled as a somewhat weaker deterrent against that lesser but still pretty nasty possibility.)
Of course, that all assumes that the parties involved are making a rational cost-benefit analysis with good information. I’m not sure offhand how the various less ideal scenarios would change the weighting, except that they seem to make pure MAD a less safe strategy than it’d otherwise be.