If someone convinces themself that a full nuclear exchange would prevent the development of superhuman AI
I think the problem here is “convinces themself”. If you are capable of triggering a nuclear war, you are probably capable of doing something else which is not that, if you put your mind to it.
Does the “something else which is not that but is in the same difficulty class” also accomplish the goal of “ensure that nobody has access to what you think is enough compute to build an ASI”? If not, I think that implies that the “anything that probably kills less than a billion people is fair game” policy is a bad one.