An interesting strategy, which seems related to FDT’s prescription to ignore threats, and which seems to have worked (from Kissinger’s On China, chapter 4, loc 173.9):
From the very beginning, the People’s Republic of China had to maneuver in a triangular relationship with the two nuclear powers, each of which was individually capable of posing a great threat and, together, were in a position to overwhelm China. Mao dealt with this endemic state of affairs by pretending it did not exist. He claimed to be impervious to nuclear threats; indeed, he developed a public posture of being willing to accept hundreds of millions of casualties, even welcoming it as a guarantee for the more rapid victory of Communist ideology. Whether Mao believed his own pronouncements on nuclear war it is impossible to say. But he clearly succeeded in making much of the rest of the world believe that he meant it—an ultimate test of credibility.
FDT doesn’t unconditionally prescribe ignoring threats. The idea of ignoring threats has merit, but FDT specifically only points out that ignoring a threat sometimes has the effect of the threat (or other threats) not getting made (even if only counterfactually), which is not always the case.
Consider a ThreatBot that always makes threats (and follows through on them), regardless of whether you ignore them. If you ignore ThreatBot’s threats, you are worse off. On the other hand, there might be a prior ThreatBotMaker that decides whether to make a ThreatBot depending on whether you ignore ThreatBot’s threats. What FDT prescribes in this case is not directly ignoring ThreatBot’s threats, but rather taking notice of ThreatBotMaker’s behavior, namely that it won’t make a ThreatBot if you ignore ThreatBot’s threats. This argument only goes through when there is/was a ThreatBotMaker; it doesn’t work if there is only a ThreatBot.
If a ThreatBot appears through some process that doesn’t respond to your decision to respond to ThreatBot’s threats, then FDT prescribes responding to ThreatBot’s threats. But also, if something else makes threats depending on your reputation for responding to threats, then responding even to an unconditionally manifesting ThreatBot’s threats is not recommended by FDT. This is not directly a recommendation to ignore something, but rather a consequence of taking notice of the process that responds to your having a reputation of not responding to any threats. Similarly with stances where you merely claim that you won’t respond to threats.
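A minimal sketch of the ThreatBot/ThreatBotMaker distinction, with hypothetical payoff numbers (the specific costs below are illustrative assumptions, not from the comments above):

```python
# Hypothetical payoffs (illustrative assumptions): suffering a carried-out
# threat costs you 10, giving in to a threat costs you 5, no threat costs 0.

def outcome_threatbot_only(policy):
    """A ThreatBot exists unconditionally: it always threatens, and it
    follows through if you ignore the threat."""
    return -5 if policy == "give_in" else -10

def outcome_threatbotmaker(policy):
    """A ThreatBotMaker builds a ThreatBot only if your policy is to give in;
    if your policy is to ignore threats, no ThreatBot (and no threat) exists."""
    return -5 if policy == "give_in" else 0

for world_name, outcome in [("ThreatBot only", outcome_threatbot_only),
                            ("ThreatBotMaker", outcome_threatbotmaker)]:
    for policy in ("give_in", "ignore"):
        print(f"{world_name:15} policy={policy:8} payoff={outcome(policy)}")

# In the ThreatBot-only world, ignoring is worse (-10 vs -5); with a
# ThreatBotMaker upstream, ignoring is better (0 vs -5).
```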
China under Mao definitely seemed to do more than just say it wouldn’t respond to threats. Hence the Korean War, in which, notably, no nuclear threats were made, showing that conventional war was still possible in a post-nuclear world.
For practical decisions, I don’t think ThreatBots actually exist if you’re a state, in any form other than natural disasters. Mao’s China was not good at handling natural disasters, but probably because Mao was a Marxist and a Legalist, not because he conspicuously ignored them. When his subordinates made mistakes that let him know something was going wrong in their province, I think he would punish the subordinate and try to fix it.
I don’t think FDT has anything to do with purely causal interactions. Insofar as threats were actually deterred here, this can be understood in standard causal game theory terms. (I.e., you claim in a convincing manner that you won’t give in → people assign high probability to you being serious → a standard EV calculation says not to commit to a threat against you.) Also see this post.
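A minimal sketch of that causal EV reading, under assumed payoff numbers (the gain and cost parameters are hypothetical, not from the linked post):

```python
# The would-be threatener commits to a threat only if its expected value is
# positive, given their credence that you will give in.

def threat_is_worth_making(p_give_in, gain_if_you_give_in=3.0,
                           cost_of_following_through=10.0):
    """Threatener's expected value of committing to the threat."""
    ev = (p_give_in * gain_if_you_give_in
          - (1 - p_give_in) * cost_of_following_through)
    return ev > 0

# Convincingly claiming you won't give in lowers p_give_in, which can flip
# the threatener's decision; no acausal machinery is needed for this story.
print(threat_is_worth_making(0.9))  # True: the threat looks profitable
print(threat_is_worth_making(0.1))  # False: the threat is deterred
```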
That’s why I said related. Nobody was doing any mind-reading, of course, but the principles still apply, since people are often actually quite good at reading each other.
What principles? It doesn’t seem like there’s anything more at work here than “Humans sometimes become more confident that other humans will follow through on their commitments if they, e.g., repeatedly say they’ll follow through”. I don’t see what that has to do with FDT any more than with any other decision theory.
If the idea is that Mao’s forming the intention is supposed to have logically caused his adversaries to update on his intention, that just seems wrong (see this section of the mentioned post).
(Separately, I’m not sure what this has to do with not giving in to threats in particular, as opposed to preemptive commitment in general. Why were Mao’s adversaries not able to coerce him by committing to nuclear threats, using the same principles? See this section of the mentioned post.)
Far more interesting, and probably more effective, than the boring classical game-theory doctrine of MAD, and even than Schelling’s doctrine of strategic irrationality!
The book says this strategy worked for reasons similar to those behind the strategy in the novel The Romance of the Three Kingdoms:
One of the classic tales of the Chinese strategic tradition was that of Zhuge Liang’s “Empty City Stratagem” from The Romance of the Three Kingdoms. In it, a commander notices an approaching army far superior to his own. Since resistance guarantees destruction, and surrender would bring about loss of control over the future, the commander opts for a stratagem. He opens the gates of his city, places himself there in a posture of repose, playing a lute, and behind him shows normal life without any sign of panic or concern. The general of the invading army interprets this sangfroid as a sign of the existence of hidden reserves, stops his advance, and withdraws.
But Mao obviously wasn’t fooling anyone about China’s military might!