There are a few ways we might not be doomed. The first and less likely is that people will just decide not to go to their doom, even though it’s the Nash equilibrium. To give a totally crazy example, suppose there were two countries playing a game where the first one to launch missiles had a huge advantage. And neither country trusts the other, and there are multiple false alarms—thus pushing the situation to the stable Nash equilibrium of both countries trying to launch first. Except imagine that somehow, through some heroic spasm of insanity, these two countries just decided not to nuke each other. That’s the sort of thing it would take.
This would be true if the game being played were, say, Prisoner’s Dilemma, but in the actual nuclear arms race the game was that if either side launched their weapons the other side would retaliate, resulting in large losses on both sides. (In general, I think LW overuses PD in game theory discussions, when games like Chicken or Stag Hunt would be better.) Wikipedia on mutually assured destruction:
The strategy is a form of Nash equilibrium in which neither side, once armed, has any incentive to initiate a conflict … neither side will dare to launch a first strike because the other side will launch on warning (also called fail-deadly) or with secondary forces (a second strike), resulting in unacceptable losses for both parties. The payoff of the MAD doctrine is expected to be a tense but stable global peace.
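To make the contrast concrete, here is a minimal sketch in Python with made-up payoff numbers (the `pure_nash_equilibria` helper and all the specific payoffs are my own illustration, not anything from the discussion above): in a one-shot Prisoner's Dilemma, defect-defect is the only pure-strategy equilibrium, while under assured retaliation a first strike no longer pays and mutual restraint becomes the equilibrium.

```python
from itertools import product

def pure_nash_equilibria(payoffs):
    """Return all pure-strategy Nash equilibria of a two-player game.

    `payoffs` maps (row_strategy, col_strategy) -> (row_payoff, col_payoff).
    """
    rows = {r for r, _ in payoffs}
    cols = {c for _, c in payoffs}
    equilibria = []
    for r, c in product(rows, cols):
        u_row, u_col = payoffs[(r, c)]
        # A profile is an equilibrium if neither player gains by deviating unilaterally.
        row_best = all(payoffs[(r2, c)][0] <= u_row for r2 in rows)
        col_best = all(payoffs[(r, c2)][1] <= u_col for c2 in cols)
        if row_best and col_best:
            equilibria.append((r, c))
    return equilibria

# Classic Prisoner's Dilemma: defecting is dominant, so (Defect, Defect) is the
# unique equilibrium even though (Cooperate, Cooperate) is better for both.
pd = {
    ("Cooperate", "Cooperate"): (3, 3),
    ("Cooperate", "Defect"):    (0, 5),
    ("Defect",    "Cooperate"): (5, 0),
    ("Defect",    "Defect"):    (1, 1),
}

# Toy MAD-style game (illustrative numbers only): launch-on-warning and second-strike
# forces guarantee retaliation, so a first strike no longer pays.
mad = {
    ("Hold",   "Hold"):   (0, 0),        # tense but stable peace
    ("Hold",   "Launch"): (-100, -80),   # absorb a first strike / strike first but still suffer retaliation
    ("Launch", "Hold"):   (-80, -100),
    ("Launch", "Launch"): (-110, -110),  # both arsenals fly
}

print(pure_nash_equilibria(pd))   # -> [('Defect', 'Defect')]
print(pure_nash_equilibria(mad))  # -> [('Hold', 'Hold')]
```

With these numbers the MAD game has mutual restraint as its only pure-strategy equilibrium, which is the "tense but stable global peace" the Wikipedia passage describes.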
That said, the US didn’t pre-emptively obliterate the USSR, or later all the other countries, and this does support your point, since obliterating the USSR would have been positive-expected-utility for the US, for some dubious values of utility. Which may or may not have been the prudent thing to do—we can’t judge decisions only on their actual outcomes. Maybe there was a 90% chance of us dying, and we just got lucky, or maybe there are very few people around to discuss this in worlds that weren’t lucky. (This is anthropics though, which is confusing.) Incidentally, von Neumann advocated for a pre-emptive strike on the USSR, and later the USSR advocated for a pre-emptive strike on China.
Re your actual point, that cooperation often arises not in spite of but because of self-interest, and that we might be able to cooperate to preserve our value: I agree with you, and Scott seems to agree that it’s possible though he considers it unlikely. Our society already does this to some extent—witness the social sanctions imposed on people who achieve enough that their peers look bad by comparison. Essentially we just need to impose enough social costs to make it negative-expected-utility to sacrifice your children to Moloch. (And impose social costs on people who don’t impose social costs in accordance with this rule and the preceding rule.) But this might be too hard, especially when there are instabilities, e.g. when the first brain emulations arrive.
PD is not a suitable model for MAD. It would be if a pre-emptive attack on an opponent were guaranteed to destroy him utterly and eliminate the threat. But that's not the case: even a carefully orchestrated first strike leaves a high chance of retaliation.
Since the military advantage of a pre-emptive attack is not preferred to the absence of war, this game doesn't necessarily lead to the defect-defect outcome.
This could probably be better modeled by some form of iterated PD in which the number of iterations and the values of the outcomes depend on the decisions made along the way, which I guess would make it non-linear.
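For what it's worth, here is one way to read that suggestion as a toy model (a sketch only; the payoffs, continuation probability, and strategy names are all made up): holding yields a small per-round peace dividend, while any launch triggers retaliation, imposes a large loss on both sides, and ends the game, so both the number of rounds and the total payoff depend on the decisions made along the way.

```python
import random

def play(strategy_a, strategy_b, continue_prob=0.99,
         peace_payoff=1, war_payoff=-1000, rng=random):
    """Run one play-through; strategies map the round number to 'hold' or 'launch'."""
    total_a = total_b = 0
    t = 0
    while True:
        a, b = strategy_a(t), strategy_b(t)
        if a == "launch" or b == "launch":
            # Any launch brings retaliation: a large loss for both, and the game ends.
            return total_a + war_payoff, total_b + war_payoff
        total_a += peace_payoff
        total_b += peace_payoff
        if rng.random() > continue_prob:   # exogenous end of the game
            return total_a, total_b
        t += 1

always_hold  = lambda t: "hold"
strike_early = lambda t: "launch" if t == 5 else "hold"

print(play(always_hold, always_hold))    # both accumulate the peace dividend
print(play(always_hold, strike_early))   # a first strike ruins both sides
```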
Yeah, I was modeling nuclear war as a Stag Hunt. Peace is a Nash equilibrium because starting a nuclear war is bad even for the aggressor, but it's not as stable under violations of trust, worries about a trembling hand on the big red button, etc.
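To spell that out with a toy Stag Hunt payoff matrix (the numbers below are purely illustrative, not from any source): both mutual restraint and mutual launch are pure-strategy equilibria, peace is the better of the two, but once you assign enough probability to the other side launching anyway, striking first starts to look better in expectation.

```python
payoffs = {   # (my move, their move) -> my payoff
    ("Hold",   "Hold"):   0,     # stable peace
    ("Hold",   "Launch"): -100,  # absorb a first strike
    ("Launch", "Hold"):   -60,   # strike first, still suffer retaliation
    ("Launch", "Launch"): -80,
}

def expected_payoff(my_move, p_they_launch):
    """Expected payoff of my_move against an opponent who launches with probability p."""
    return ((1 - p_they_launch) * payoffs[(my_move, "Hold")]
            + p_they_launch * payoffs[(my_move, "Launch")])

for p in (0.0, 0.1, 0.5, 0.9):
    print(p, expected_payoff("Hold", p), expected_payoff("Launch", p))

# With these numbers, once you assign more than roughly 75% probability to the
# other side launching, launching first has the higher expected payoff; that is
# why the peace equilibrium is fragile under eroding trust and false alarms.
```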