Seems like unilateralism and coordination failure are a good way of summing up humanity’s general plight re nuclear weapons, which makes it relevant to a day called “Petrov Day” in a high-level way. Putting the emphasis here makes the holiday feel more like “a holiday about x-risk and a thanksgiving for our not having died to nuclear war”, and less like “a holiday about the virtues of Stanislav Petrov and emulating his conduct”.
If Petrov’s decision was correct, or incorrect-but-reflecting-good-virtues, the relevant virtue is something like “heroic responsibility”, not “refusal to be a unilateralist”. I could imagine a holiday that focuses instead on heroic responsibility, or that has a dual focus. (‘Lord, grant me the humility to cooperate in good equilibria, the audacity to defect from bad ones, and the wisdom to know the difference.’) I’m not sure which of these options is most useful.
Well, that’s one of the questions I’m raising. I’m not sure we want to encourage more “heroic responsibility” with AI technologies. Do we want someone like Stanislav Petrov to decide, “No, the warnings are false, and the AI is safe after all,” and release a potentially unfriendly general AI? I would much rather not have AI at all than have it in the hands of someone who decides without consultation that their instruments are lying to them and that they know the correct thing to do based upon their judgment and intuition alone.
Petrov did consult with the satellite radar operators, who said they detected nothing.

Do you have a source on Petrov consulting the radar operators? The Wikipedia article on the 1983 incident seems to imply that he did not.
Shortly after midnight, the bunker’s computers reported that one intercontinental ballistic missile was heading toward the Soviet Union from the United States. Petrov considered the detection a computer error, since a first-strike nuclear attack by the United States was likely to involve hundreds of simultaneous missile launches in order to disable any Soviet means of a counterattack. Furthermore, the satellite system’s reliability had been questioned in the past. Petrov dismissed the warning as a false alarm, though accounts of the event differ as to whether he notified his superiors or not after he concluded that the computer detections were false and that no missile had been launched. Petrov’s suspicion that the warning system was malfunctioning was confirmed when no missile in fact arrived. Later, the computers identified four additional missiles in the air, all directed towards the Soviet Union. Petrov suspected that the computer system was malfunctioning again, despite having no direct means to confirm this. The Soviet Union’s land radar was incapable of detecting missiles beyond the horizon.
From the passage above, it seems like, at the time of the decision, Petrov had no way of confirming whether the missile launches were real or not. He decided that the missile launch warnings were the result of equipment malfunction, and then followed up with land-based radar operators later to confirm that his decision had been correct.
Petrov’s choice was not about dismissing warnings; it was about picking which side to err on. Wrongfully alerting his superiors could cause a nuclear war, and wrongfully not alerting them would disadvantage his country in the nuclear war that had just started. I’m not saying he ran all the numbers, used Bayes’s law to figure out the probability that an actual nuclear attack was underway, assigned utilities to all four cases, and performed the final decision theory calculations, but his reasoning did take into account the possibility of error in both directions. Though… it does seem like his intuition gave much more weight to the utilities than to the probabilities.
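To make that four-cases framing concrete, here is a minimal sketch in Python with entirely made-up probabilities and utilities. It illustrates the structure of the comparison, not Petrov’s actual reasoning or the real stakes:

```python
# A toy expected-utility version of the four cases described above.
# Every probability and utility here is invented purely for illustration;
# this is not a claim about Petrov's actual numbers or reasoning.

p_attack_real = 0.1  # assumed credence that the warning reflects a real launch

# Utilities in arbitrary units (higher is better), one per (action, state) pair.
utilities = {
    ("report",      "real_attack"): -1_000,    # war happens; retaliation is timely
    ("report",      "false_alarm"): -100_000,  # report risks triggering an accidental war
    ("stay_silent", "real_attack"): -2_000,    # war happens; the country responds late
    ("stay_silent", "false_alarm"): 0,         # nothing happens
}

def expected_utility(action: str) -> float:
    return (p_attack_real * utilities[(action, "real_attack")]
            + (1 - p_attack_real) * utilities[(action, "false_alarm")])

for action in ("report", "stay_silent"):
    print(action, expected_utility(action))
# With these numbers, staying silent wins until p_attack_real rises above
# roughly 0.99: the catastrophic false-positive utility dominates, which is
# the sense in which the utilities outweigh the probabilities here.
```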
So, if we take that rule for deciding what to do with an AGI, it won’t be just “ignore everything the instruments are saying” but “weigh the dangers of UFAI against the missed opportunities from not releasing it”.
Which means the UFAI only needs to convince such a gatekeeper that releasing it is the only way to prevent a catastrophe, without having to convince the gatekeeper that the probability of the catastrophe is high or that the probability of the AI being unfriendly is low.
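As a companion to the sketch above, here is a second toy illustration of that last step, again with made-up numbers and a deliberately caricatured decision rule: once a catastrophe sits on the “keep it boxed” branch, a rule that lets worst-case utilities swamp the probabilities stops favoring caution, without the AI ever having to argue about how likely anything is.

```python
# A second invented-numbers sketch: how a "compare worst cases, let the
# utilities do the work" heuristic behaves before and after a boxed AI
# convinces its gatekeeper that only its release prevents some catastrophe.
# Again, nothing here describes real stakes; it only mirrors the structure
# of the argument above.

def least_bad_worst_case(worst_cases: dict[str, float]) -> str:
    """Caricature of a utility-dominated rule: ignore probabilities and
    pick the action whose worst case is least bad."""
    return max(worst_cases, key=worst_cases.get)

# Petrov's situation as framed above: the worst cases are asymmetric,
# so the heuristic clearly favors staying silent.
petrov = {"report": -100_000, "stay_silent": -2_000}
print(least_bad_worst_case(petrov))  # -> stay_silent

# The gatekeeper's situation once "catastrophe unless I'm released" is
# accepted: a catastrophe now sits on both branches, so the asymmetry that
# made caution obvious is gone, and the AI never had to argue that either
# probability was high or low to get here.
gatekeeper = {"release": -1_000_000, "keep_boxed": -1_000_000}
print(least_bad_worst_case(gatekeeper))  # -> release (ties broken by dict order); caution no longer wins by default
```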