If I had been one of the people facing that missile warning and the red button, I wouldn’t have pressed it even if I had known the warning was real. What use would it be to launch a barrage of nuclear weapons against ordinary citizens simply because their foolish leaders did the same to you? It would only make things worse, and it certainly wouldn’t save anyone. A primitive need for revenge can be extremely dangerous with today’s technology.
Mutually assured destruction is essentially a precommitment strategy: if you use nuclear weapons on me, I commit to destroying you and your allies, a downside larger than any gain achievable from first use of nuclear weapons.
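The payoff structure behind that precommitment can be sketched with a toy expected-value calculation. The utility numbers below are hypothetical, chosen only to illustrate the shape of the argument:

```python
# Hypothetical, illustrative payoffs (higher is better for the attacker).
PAYOFF = {
    "status_quo": 0,
    "first_strike_unanswered": 10,   # attacker wins outright
    "mutual_destruction": -1000,     # both sides annihilated
}

def first_strike_value(p_retaliation: float) -> float:
    """Attacker's expected payoff from striking first, given the
    probability the victim follows through on retaliation."""
    return (p_retaliation * PAYOFF["mutual_destruction"]
            + (1 - p_retaliation) * PAYOFF["first_strike_unanswered"])

# With a fully credible precommitment (p = 1), striking first is
# strictly worse than the status quo, so a rational first strike
# never happens.
assert first_strike_value(1.0) < PAYOFF["status_quo"]

# If the victim is known never to retaliate, striking first looks
# attractive, which is exactly why the precommitment must be believed.
assert first_strike_value(0.0) > PAYOFF["status_quo"]
```

The sign flip between the two cases is the whole mechanism: the deterrent comes from the believed probability of retaliation, not from the retaliation itself.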
With this in mind, it’s not clear to me that it’d be wrong (in the decision-theoretic sense, not the moral) to launch on a known-good missile warning. TDT states that we shouldn’t differentiate between actions in an actual world and in a simulated or abstracted one: if we don’t make this distinction, following through with a launch on warning functions to screen off counterfactual one-sided nuclear attacks, and ought to ripple back through the causal graph to screen off all nuclear attacks (a world without a nuclear war in it is better along most dimensions than the alternative). It’s not a decision I’d enjoy making, but every increment of uncertainty increases the weighting of the unilateral option, and that’s something we really, really don’t want. Revenge needn’t enter into it.
(This assumes a no-first-use strategy, which the USSR at Petrov’s time claimed to follow; the US claimed a more ambiguous policy leaving open tactical nuclear options following conventional aggression, which can be modeled as a somewhat weaker deterrent against that lesser but still pretty nasty possibility.)
Of course, that all assumes that the parties involved are making a rational cost-benefit analysis with good information. I’m not sure offhand how the various less ideal scenarios would change the weighting, except that they seem to make pure MAD a less safe strategy than it’d otherwise be.
From a game-theoretic perspective, if the other side knew you thought that way then they should launch on your watch.
MAD only works if both sides believe the other is willing to retaliate. If one side is willing to push the button and the other is not willing to retaliate, then the side willing to push the button nukes the other and takes over the world.
If you can be absolutely certain the other side never finds out you aren’t willing to retaliate, then yours is the optimal policy.
MAD only works if both sides believe the other is willing to retaliate.
“Willing” can be unpacked.
Having the other party believe you are operating under a mixed strategy would be optimal, so long as: a) each side prefers the other side winning to mutual destruction, which as humans they probably do, and b) accidental or irrational launches are possible, but not significantly more likely when facing a perceived mixed strategy.
If, say, the USSR and the USA were each willing to strike first to win, but not willing to incur a 95% risk of mutual destruction for a 5% chance of total victory, the optimal retaliatory strategy is to (have the other believe you will) retaliate based on a roll of 1d20: on a natural one, you refrain from retaliating. That way, an accidental launch has a 5% chance of not destroying the world.
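The 1d20 arithmetic can be checked in a few lines; the utility numbers here are invented for illustration, with only the 19-in-20 retaliation probability taken from the comment above:

```python
# Retaliate on any d20 roll except a natural 1.
P_RETALIATE = 19 / 20

# Hypothetical utilities for the attacker (higher is better).
MUTUAL_DESTRUCTION = -1000
TOTAL_VICTORY = 10
STATUS_QUO = 0

# Deliberate first strike: expected value for the attacker.
ev_first_strike = (P_RETALIATE * MUTUAL_DESTRUCTION
                   + (1 - P_RETALIATE) * TOTAL_VICTORY)
assert ev_first_strike < STATUS_QUO   # deterrence still holds at 95%

# Accidental launch: probability the world survives it.
p_world_survives_accident = 1 - P_RETALIATE
assert abs(p_world_survives_accident - 0.05) < 1e-12
```

So long as the attacker’s disutility for mutual destruction dwarfs the utility of victory, shaving the retaliation probability from 100% down to 95% leaves the deterrent intact while buying a real chance of surviving an accident.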
In practice, declaring a mixed strategy will probably be seen as setting up an excuse to adjust one’s actions based on the expected payoff given the circumstances that have actually occurred, i.e. to use CDT rather than TDT. Declaring an updateless strategy is a good way to convey that one is operating under a mixed one.
This is why you would not have been hired to sit in front of the button, even given the Soviets’ dubious hiring techniques. Also, if you had been raised in Soviet Russia, your thoughts on the topic might have been different.
I wouldn’t say that. Someone who cares about the issues is likely to lie for signalling purposes and do what he or she can to get the role.
I could indeed simply lie and play the role of an obedient soldier to get the position I was looking for. However, it is of course true that if I had been born and raised in a country where people are continuously fed nationalist propaganda, I would be less likely to disobey the rules or to think it wrong to retaliate.
Followup question: if someone were about to be placed in front of that red button, would you rather it be someone who had previously expressed the same opinion, or someone who had credibly committed to retaliate in case of a nuclear strike (however useless or foolish such retaliation might be)?
Conversely, if someone were to be placed in front of the corresponding red button of a country your leaders were about to launch a barrage of nuclear weapons against, which category would you prefer they be in?
Not that I disagree with your conclusion, but there was a significant selection pressure in the process of qualifying to get into the chair in front of the button. Political leaders don’t like to give power to subordinates who are not likely to implement leadership’s desires.
Having gone through the process and its accompanying ideological training makes Petrov’s refusal to risk nuclear armageddon even more impressive. Even though moral courage was not a criterion in selecting him, Petrov showed more of it than anyone could reasonably have expected.
But someone who cares enough to lie their way into the role is less likely to have had the foresight to get onto the right job track at age 15.
A primitive need for revenge can be even more vital with today’s technology: it is the only thing holding the most powerful players in check.