While I'm not sure I fully agree with the panel, one implication of their arguments is that, starting from an equilibrium of treaty compliance, maintaining the ability to race can disincentivize the other side from violating the treaty: it raises the cost to them of gaining an advantage, which can be especially decisive if your side has an economic advantage.
This is an idea/argument I hadn't encountered before, and it seems plausible, so I'm glad you shared it.
But it seems to me that there’s probably an effect pushing in the opposite direction:
Even from an equilibrium of treaty compliance, one state's having the ability to race might incentivize the other side to develop that ability as well. That wouldn't necessarily require treaty violation.
One or (especially) both sides having the ability to race can increase risks if:
- they could race covertly until they've gained an advantage,
- they could race so quickly that they gain an advantage before the other side can properly get started,
- the states don't always act as rational, cohesive entities (e.g., if leaders are more focused on preventing regime change than on preventing millions of deaths in their own country),
- or probably under other conditions.
I think the term “arms race stability” captures the sort of thing I’m referring to, though I haven’t yet looked into the relevant theoretical work much.
In contrast, if we could reach a situation where neither side currently had the ability to race, that might be fairly stable. This could be true if building up that ability would take some time and would be detectable early enough to be responded to (by sanctions, a targeted strike, the other side building up its own ability, or whatever).
Does this seem accurate to you?
I guess an analogy could be: would you rather be part of a pair of cowboys who both have guns but haven't drawn them (capability but not yet racing), or a pair who don't have guns but could go buy them? It seems like there'd be more opportunities for de-escalation, less risk from nerves and hair triggers, etc., in the latter scenario than in the former.
I think this overlaps with some of Schelling’s points in The Strategy of Conflict (see also my notes on that), but I can’t remember for sure.
Interesting post.
You or other readers might also find the idea of epistemic security interesting, as discussed in the report “Tackling threats to informed decisionmaking in democratic societies: Promoting epistemic security in a technologically-advanced world”. The report is by researchers at CSER and some other institutions. I’ve only read the executive summary myself.
There's also a BBC Future article on the topic by some of the same authors.