A strategic nuclear arsenal is there to deter other nuclear powers from destroying you... There are very strong incentives against escalation from tactical nuclear exchange to strategic nuclear exchange.
You seem to assume a very rational evaluation of the situation on both sides, even in a highly escalated situation like the destruction of the opposing army by means of nuclear strikes. If the people who run Pakistan and India are that rational, doesn't that mean that people who are able to design AGIs capable of undergoing recursive self-improvement should be at least as rational, and therefore take the risks associated with their work seriously? And even if the problem is simply that they don't know about those risks, given that people in general take great care that they and their country are not destroyed, wouldn't it be most effective to tell them about the risks, since that would provide a strong incentive to make their AI friendly?
My personal opinion is that people are not as rational and calculating as you seem to believe. Take, for example, the German attack on Russia. The Soviets were surprised because their intelligence had no information that the German army was trying to acquire winterproof machinery, and they therefore concluded that the Germans wouldn't attack them, since attacking without it would be crazy. Or take the Yom Kippur War. Israel's intelligence knew that nobody in the region could win a war against Israel at that time; since starting one would be crazy, they concluded that war was unlikely.
An even more important factor is that the disparate set of people in a state apparatus does not even combine into something as (in)coherent as a single human being.
Everyone forgets that people are batshit crazy.