Game theory is very much applicable to the real world. Imperfect information just makes it a different game. You are correct that assuming perfect information is a simplification, but if we assume imperfect information instead, what does that actually change?
You want to lie to the Enemy: convince them that you will always push the button if they cross the line, while never intending to actually do it. The trouble is, the Enemy knows this!
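To make that concrete, here is a minimal backward-induction sketch (the payoff numbers are invented for illustration, not a model of any real scenario): the Enemy predicts your best response after a crossing, and a threat you would never carry out deters nothing.

```python
# Minimal backward-induction sketch of the deterrence game.
# All payoffs are hypothetical: (your_payoff, enemy_payoff).
PAYOFFS = {
    ("cross", "push"):      (-100, -100),  # mutual destruction
    ("cross", "dont_push"): (-10, 10),     # the Enemy gains, you back down
    ("stay", None):         (0, 0),        # status quo
}

def your_best_response():
    """After the Enemy crosses the line, you pick whatever maximizes YOUR payoff."""
    push = PAYOFFS[("cross", "push")][0]
    dont = PAYOFFS[("cross", "dont_push")][0]
    return "push" if push > dont else "dont_push"

def enemy_choice():
    """The Enemy anticipates your best response before deciding whether to cross."""
    response = your_best_response()
    cross_payoff = PAYOFFS[("cross", response)][1]
    stay_payoff = PAYOFFS[("stay", None)][1]
    return "cross" if cross_payoff > stay_payoff else "stay"

print(your_best_response())  # dont_push: the threat is not credible
print(enemy_choice())        # cross: deterrence fails
```

Since backing down (−10) beats mutual destruction (−100), the Enemy predicts you won't push, and crosses the line.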
Sometimes all available options are risky. Betting your life on a coin flip is not generally a good idea, but if the only alternative is a lottery ticket, the coin flip looks pretty good. If the Enemy knows there’s a significant chance that you won’t press the button, then in a sufficiently desperate situation the Enemy might bet on that and strike first. But if the Enemy knows self-destruction is assured, then striking first looks like a bad option.
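As a toy expected-value calculation (all numbers invented): if the Enemy assigns probability p to you not retaliating, a sufficiently bad status quo can make a first strike look like the better gamble, but at p = 0 it never does.

```python
# Toy expected-value comparison for a desperate Enemy (all numbers invented).
def strike_ev(p_no_push, win=10, destruction=-100):
    """Expected payoff of striking first, given probability the button ISN'T pushed."""
    return p_no_push * win + (1 - p_no_push) * destruction

status_quo = -20  # "sufficiently desperate": doing nothing is already costly

for p in (0.0, 0.5, 0.9):
    ev = strike_ev(p)
    choice = "strike first" if ev > status_quo else "hold"
    print(f"P(no retaliation)={p:.1f}: strike EV={ev:+.0f} vs status quo={status_quo} -> {choice}")
```

With assured retaliation (p = 0), striking first is the worst option no matter how desperate the Enemy is; with a significant chance of no retaliation, desperation can tip the calculation toward striking.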
What possible reason could Petrov or those in similar situations have had for not pushing the button? Maybe he believed that the US would retaliate and kill his family at home, and that deterred him. In other words, he believed his enemy would push the button.
Applied to the real world, game theory is not just about how to play the games. It’s also about the effects of changing the rules.
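Continuing the toy game above (same invented payoffs): one "rule change" is a commitment device that deletes your option to back down, and the Enemy's backward induction flips.

```python
# Same hypothetical game, but a commitment device removes the "dont_push" option.
PAYOFFS = {
    ("cross", "push"): (-100, -100),  # retaliation is now automatic
    ("stay", None):    (0, 0),        # status quo
}

def enemy_choice():
    """With backing down impossible, crossing guarantees mutual destruction."""
    cross_payoff = PAYOFFS[("cross", "push")][1]  # -100
    stay_payoff = PAYOFFS[("stay", None)][1]      # 0
    return "cross" if cross_payoff > stay_payoff else "stay"

print(enemy_choice())  # stay: deterrence works once the commitment is believed
```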
Or maybe he just did not want to kill millions of people?
In Petrov’s case in particular, the new satellite-based early warning system was unproven, so he didn’t completely trust it, and he didn’t believe a US first strike would use only one missile, or later only four more, instead of hundreds. Furthermore, ground radar didn’t confirm the launch. And, of course, attacking on a false alarm would be suicidal, because he believed the Enemy would push the button; so striking first “just in case” failed his cost-benefit analysis.
It was not “just” a commitment to pacifism.
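One way to frame that reasoning is as a Bayesian update (a sketch with invented probabilities, not Petrov’s actual arithmetic): a five-missile report is far more likely to come from an unproven satellite system than from a real US first strike, which he expected would involve hundreds of missiles.

```python
# Hedged Bayesian sketch of the false-alarm reasoning (all probabilities invented).
prior_attack = 0.001             # prior that a US first strike happens on a given day
p_report_if_attack = 0.01        # a real first strike showing up as only ~5 missiles
p_report_if_false_alarm = 0.05   # an unproven satellite system producing this report

posterior = (p_report_if_attack * prior_attack) / (
    p_report_if_attack * prior_attack
    + p_report_if_false_alarm * (1 - prior_attack)
)
print(f"P(real attack | five-missile report) = {posterior:.6f}")  # about 0.0002
```

The lack of ground-radar confirmation would push the posterior down further, and the asymmetry of mistakes (retaliating on a false alarm being suicidal) finishes the cost-benefit case for waiting.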
I should probably have said “we are not in that game theory situation”.
(Though I do think that the real world is more complex than current game theory can handle. E.g. I don’t think current game theory can fully handle unknown unknowns, but I could be wrong on this point.)
The game of mutually assured destruction is very different even when just including known unknowns.