But we are not in a game theory situation. We are in an imperfect world with imperfect information. There are malfunctioning warning systems and liars. And we are humans, not programs that get to read each other’s source code. There are no perfect commitments, and if there were, there would be no way of verifying them.
So I think the lesson is that, whatever your public stance, and whether or not you think there are counterfactual situations where you should nuke, in practice you should not nuke.
Do you see what I’m getting at?
Game theory was pioneered by Schelling, with the central and most important application being the handling of nuclear-armed conflict. To say that game theory doesn’t apply to nuclear conflict because we live in an imperfect world is just not accurate. Game theory doesn’t require a perfect world, nor does it require that actors know each other’s source code. It is designed to guide decisions made in the real world.
I know that it is designed to guide decisions made in the real world. That does not force me to agree with its conclusions in all circumstances. Lots of models are not up to the task they were designed for.
But I should have said “not in that game theory situation”, because there is probably a way to construct some game-theoretic model that applies here. That was my bad.
However, I stand by the claim that the full-information game is too far from reality to be a good guide in this case. With stakes this high, even small uncertainties become important.
Game theory is very much applicable to the real world. Imperfect information is just a different game. You are correct that assuming perfect information is a simplification. But assuming imperfect information, what does that change?
You want to lie to the Enemy: convince them that you will always push the button if they cross the line, then never actually do it. And the Enemy knows this!
Sometimes all available options are risky. Betting your life on a coin flip is not generally a good idea, but if the only alternative is a lottery ticket, the coin flip looks pretty good. If the Enemy knows there’s a significant chance that you won’t press the button, in a sufficiently desperate situation, the Enemy might bet on that and strike first. But if the Enemy knows self-destruction is assured, then striking first looks like a bad option.
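To make that concrete, here is a minimal sketch in Python. Every payoff number (10, -100, -20) is a made-up assumption for illustration, not a figure from this discussion; the point is only that the Enemy’s choice flips once the believed probability of retaliation drops below some threshold.

```python
# A minimal sketch of the deterrence argument above. Every payoff number
# here is an illustrative assumption.

def first_strike_ev(p_retaliate: float) -> float:
    """Enemy's expected payoff from striking first, given the believed
    probability that the defender pushes the button in response."""
    victory = 10.0        # assumed payoff if the defender does not retaliate
    destruction = -100.0  # assumed payoff under mutual destruction
    return p_retaliate * destruction + (1 - p_retaliate) * victory

desperate_status_quo = -20.0  # assumed payoff of holding back in a desperate spot

for p in (0.0, 0.1, 0.3, 0.5, 1.0):
    ev = first_strike_ev(p)
    choice = "strike first" if ev > desperate_status_quo else "hold"
    print(f"p(retaliation) = {p:.1f}: strike EV = {ev:6.1f} -> {choice}")
```

With these particular numbers the flip happens around p ≈ 0.27. The threshold itself is an artifact of the assumed payoffs, but some threshold always exists, which is why assured retaliation deters where merely probable retaliation may not.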
What possible reason could Petrov or those in similar situations have had for not pushing the button? Maybe he believed that the US would retaliate and kill his family at home, and that deterred him. In other words, he believed his enemy would push the button.
Applied to the real world, game theory is not just about how to play the games. It’s also about the effects of changing the rules.
Or maybe he just did not want to kill millions of people?
In Petrov’s case in particular, the new satellite-based early warning system was unproven, so he didn’t completely trust it, and he didn’t believe a US first strike would use only one missile, or later, only four more, instead of hundreds. Furthermore, ground radar didn’t confirm the launch. And, of course, attacking on a false alarm would be suicidal, because he believed the Enemy would push the button, so striking first “just in case” failed his cost-benefit analysis.
It was not “just” a commitment to pacifism.
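A rough sketch of that cost-benefit analysis, in Python with made-up numbers (every probability and payoff below is an illustrative assumption, not a figure from the historical record): once you believe retaliation is assured and the alarm is probably false, striking “just in case” loses to waiting.

```python
# A rough sketch of the cost-benefit reasoning described above. Every
# probability and payoff is an illustrative assumption.

def strike_ev() -> float:
    """Striking on the warning: he believed the Enemy would push the
    button, so mutual destruction follows whether the alarm was real or not."""
    return -100.0

def wait_ev(p_real: float) -> float:
    """Waiting: absorb the strike if the attack is real; lose nothing
    if the warning was a glitch."""
    absorb_first_strike = -80.0
    false_alarm_passes = 0.0
    return p_real * absorb_first_strike + (1 - p_real) * false_alarm_passes

# One missile, an unproven satellite system, and no radar confirmation
# all point to a low probability that the attack is real.
for p_real in (0.01, 0.1, 0.5):
    print(f"p(real) = {p_real}: strike EV = {strike_ev():.1f}, "
          f"wait EV = {wait_ev(p_real):.1f}")
```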
I should probably have said “we are not in that game theory situation”.
(Though I do think that the real world is more complex than current game theory can handle. E.g. I don’t think current game theory can fully handle unknown unknowns, but I could be wrong on this point.)
The game of mutually assured destruction is very different even when just including known unknowns.
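One way to see how a single known unknown reshapes the game, sketched with made-up numbers (the false-alarm rates and the 40-year horizon are assumptions): a hard “always retaliate on warning” commitment, which looks optimal in the full-information game, becomes a near-certain path to accidental war once warning systems can malfunction.

```python
# A minimal sketch of one known unknown changing the game: false alarms.
# The alarm rates and the horizon are made-up numbers for illustration.

def p_no_accidental_war(p_false_alarm_per_year: float, years: int) -> float:
    """Probability of never launching on a false alarm under a hard
    'always retaliate on warning' commitment, assuming independent years."""
    return (1 - p_false_alarm_per_year) ** years

for rate in (0.01, 0.05):
    p = p_no_accidental_war(rate, 40)
    print(f"false-alarm rate {rate:.0%}/year -> "
          f"P(no accidental war over 40 years) = {p:.2f}")
```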