If one nation is confident that a rival nation will not retaliate in a nuclear conflict, then the selfish choice is to strike. By refusing orders, Petrov was being the type of agent who would not retaliate in a conflict. Therefore, in a certain sense, by being that type of agent, he arguably raised the risk of a nuclear strike. [Note: I still think his decision to not retaliate was the correct choice]
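The deterrence logic above can be sketched as a toy payoff table. All numbers here are purely illustrative; the point is only that if the rival's type is known to be non-retaliating, striking becomes the selfish best response:

```python
# Toy payoff matrix for the deterrence argument (numbers purely illustrative).
# Keys: (striker's action, rival's known type); values: (striker payoff, rival payoff).
payoffs = {
    ("strike", "retaliate"):       (-100, -100),  # mutual destruction
    ("strike", "no_retaliate"):    (10, -100),    # striker "wins"
    ("no_strike", "retaliate"):    (0, 0),        # status quo
    ("no_strike", "no_retaliate"): (0, 0),        # status quo
}

def best_response(rival_type):
    """Striker's selfish best response, given the rival's known type."""
    return max(["strike", "no_strike"],
               key=lambda action: payoffs[(action, rival_type)][0])

print(best_response("no_retaliate"))  # -> strike
print(best_response("retaliate"))     # -> no_strike
```

Being visibly the "retaliate" type is what removes "strike" as the rival's best response; visibly being the "no_retaliate" type restores it.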
Petrov’s choice was obviously the correct one in hindsight. What I’m questioning is whether Petrov’s choice was obviously correct in foresight. The rationality community takes as a given Petrov’s assertion that it was obviously silly for the United States to attack the Soviet Union with a single ICBM. Was that actually as silly as Petrov suggested? There were scenarios where small numbers of ICBMs were launched in a surprise attack against an unsuspecting adversary in order to kill leadership and disrupt command-and-control systems. How confident was Petrov that this was not one of those scenarios?
Another assumption that the community makes is that Petrov choosing to report the detection would have immediately resulted in a nuclear “counterattack” by the Soviet Union. But Petrov was not a launch authority. The decision to launch or not was not up to him, it was up to the Politburo of the Soviet Union. We have to remember that when he chose to lie about the detection, by calling it a computer glitch when he didn’t know for certain that it was one, Petrov was defecting against the system. He was deliberately feeding false data to his superiors, betting that his model of the world was more accurate than his commanders’. Is that the sort of behavior we really want to lionize?
But Petrov was not a launch authority. The decision to launch or not was not up to him, it was up to the Politburo of the Soviet Union.
This is obviously true in terms of Soviet policy, but it sounds like you’re making a moral claim. That the Politburo was morally entitled to decide whether or not to launch, and that no one else had that right. This is extremely questionable, to put it mildly.
We have to remember that when he chose to lie about the detection, by calling it a computer glitch when he didn’t know for certain that it was one, Petrov was defecting against the system.
Indeed. But we do not cooperate in prisoners’ dilemmas “just because”; we cooperate because doing so leads to higher utility. Petrov’s defection led to a better outcome for every single person on the planet; assuming this was wrong because it was defection is an example of the non-central fallacy.
Is that the sort of behavior we really want to lionize?
If you will not honor literally saving the world, what will you honor? If we wanted to make a case against Petrov, we could say that by demonstrably not retaliating, he weakened deterrence (but deterrence would have helped no one if he had launched), or that the Soviets might have preferred destroying the world to dying alone, and thus might be upset with a missileer unwilling to strike. But it’s hard to condemn him for a decision that predictably saved the West, and had a significant chance (which did in fact occur) of saving the Soviet Union.
If you will not honor literally saving the world, what will you honor?
I find it extremely troubling that we’re honoring someone defecting against their side in a matter as serious as global nuclear war, merely because in this case, the outcome happened to be good.
(but deterrence would have helped no one if he had launched)
That is exactly the crux of my disagreement. We act as if there were a direct lever between Petrov and the keys and buttons that launch a retaliatory counterstrike. But there wasn’t. There were other people in the chain of command. There were other sensors. Do we really find it that difficult to believe that the Soviets would have attempted to verify Petrov’s claim before retaliating? That there would have been practiced procedures to carry out this verification? From what I’ve read of the Soviet Union, their systems of positive control were far ahead of the United States’ as a result of the much lower level of trust the Soviet Politburo had in their military. I find it exceedingly unlikely that the Soviets would have launched without conducting at least some kind of verification with a secondary system. They knew the consequences of nuclear attack just as well as we did.
In that context, Petrov’s actions are far less justifiable. He threw away all of the procedures and training that he had… for a hunch. While everything did turn out okay in this instance, it’s certainly not a mode of behavior I’d want to see established as a precedent. As I said above: Petrov’s actions were just as unilateralist as the people releasing the GPT-2 models, and I find it discomfiting that a holiday opposing that sort of unilateral action is named after someone who, arguably, was maximally unilateralist in his thinking.
I’m not entirely sure we can ever have a correct choice in foresight.
With regard to Petrov, he did seem to make a good, reasoned call: the US launching a first strike with 5 missiles just does not make much sense without some very serious assumptions that don’t seem to be merited.
I do like the observation that Petrov was being just as unilateralist as what is feared in this thread.
Do we want to lionize such behavior? Perhaps. Your argument seems to lend itself to the lens of an AI problem—with Petrov’s behavior then serving as a control on that AI.
I also think it’s weird that The Sequences, Thinking Fast and Slow, and other rationalist works such as Good and Real all emphasize gathering data and trusting data over intuition, because human intuition is fallible, subject to bias, taken in by narratives, etc… and then we’re celebrating someone who did the opposite of all that and got away with it.
The steelman interpretation is that Petrov made a Bayesian assessment, starting with a prior that a nuclear attack (and especially a nuclear attack with five missiles) was an extremely unlikely scenario, and appropriately discounted the evidence being given to him by the satellite detection system because the detection system was new and therefore prone to false alarms, and found that the posterior probability of attack did not justify his passing the attack warning on. However, this seems to me like a post-hoc justification of a decision that was made on intuition.
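The steelman above can be made concrete with a toy Bayes update. Every number below is hypothetical and chosen only to show the shape of the argument; nothing here reflects Petrov's actual estimates:

```python
def posterior(prior, p_alarm_given_attack, p_alarm_given_no_attack):
    """P(attack | alarm) via Bayes' rule."""
    p_alarm = (p_alarm_given_attack * prior
               + p_alarm_given_no_attack * (1 - prior))
    return p_alarm_given_attack * prior / p_alarm

# Hypothetical numbers: a real attack on any given shift is extremely
# rare, while the new satellite system false-alarms fairly often.
p = posterior(prior=1e-4,
              p_alarm_given_attack=0.9,
              p_alarm_given_no_attack=0.01)
print(f"{p:.3f}")  # -> 0.009 (posterior stays under 1% despite the alarm)
```

With a low enough prior and a noisy enough sensor, even a confident-looking alarm leaves the posterior small, which is the steelman's claim; whether Petrov actually reasoned this way is exactly what's in dispute.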
He thought it unlikely that the US would launch a strike with 5 ICBMs only, since a first strike would likely be comprehensive. As far as Bayesian reasoning goes, this seems pretty good.
Also, a big part of being good at Bayesian reasoning is refining your ability to reason even when you can’t gather data, when you can’t view the same scenario “redrawn” ten thousand times and gather statistics on it.
ETA: the satellite radar operators reported all-clear; however, instructions were to only make decisions based on the computer readouts.
I’ve replied below with a similar question, but do you have a source on “satellite radar operators”? The published accounts of the incident imply that Petrov was the satellite radar operator. He followed up with the operators of the ground-based radar later, but at the time he made the decision to stay silent, he had no data that contradicted what the satellite sensors were saying.
As far as the Bayesian justification goes, I think this is bottom-line reasoning. We’re starting with, “Petrov made a good decision,” and looking backwards in order to find reasons as to why his reasoning was reasonable and justifiable.
A group of satellite radar operators told him they had registered no missiles.

— BBC
I don’t see why this is bottom-line reasoning. It is in fact implausible that the US would first-strike with only five missiles, as that would leave the USSR able to respond.
To quote Stanislav himself:

I imagined if I’d assume the responsibility for unleashing the third World War...
...and I said, no, I wouldn’t. … I always thought of it. Whenever I came on duty, I always refreshed it in my memory.
I don’t think it’s obvious that Petrov’s choice was correct in foresight, I think he didn’t know whether it was a false alarm—my current understanding is that he just didn’t want to destroy the world, and that’s why he disobeyed his orders. It’s a fascinating historical case where someone actually got to make the choice, and made the right one. Real world situations are messy and it’s hard to say exactly what his reasoning process was and how justifiable it was—it’s really bad that decisions like these have to be made, and it doesn’t seem likely to me that there’s some simple decision rule that gets the right answer in all situations (or even most). I didn’t make any explicit claim about his reasoning in the post. I simply celebrate that he managed to make the correct choice.
The rationality community takes as a given Petrov’s assertion that it was obviously silly for the United States to attack the Soviet Union with a single ICBM.
I don’t take it as a given. It seems like I should push back on claims about ‘the rationality community’ believing something before you first point to a single person who does, and when the person who wrote the post you’re commenting on explicitly doesn’t.
I agree with you that while LW’s red-button has some similarities with Petrov’s situation it doesn’t reflect many parts of it. As I say in the exchange with Zvi, I think it is instead representative of the broader situation with nukes and other destructive technologies, where we’re building them for little good reason and putting ourselves in increasingly precarious positions—which Petrov’s 1983 incident illustrates. We honour Petrov Day by not destroying the world, and I think it’s good to practice that in this way.
I think we can celebrate that Petrov didn’t want to destroy the world and this was a good impulse on his part. I think if we think it’s doubtful that he made the correct decision, or that it’s complicated, then we should be very, very upfront about that (your comment is upfront, the OP didn’t make this fact stick with me). The fact that the holiday is named after him made me think (implicitly if not explicitly) that people (including you, Ben) generally endorsed Petrov’s reasoning/actions/etc., and so I did take the whole celebration as a claim about his reasoning. I mean, if Petrov reasoned poorly but happened to get a good result, we should celebrate the result yet condemn Petrov (or at least his reasoning). If Petrov reasoned poorly and took actions that were poor in expectation, doesn’t that mean something like: in the majority of worlds, Petrov caused bad stuff to happen (or at least that the algorithm which is Petrov generally would)?
. . .
I think it is extremely, extremely weird to make a holiday about avoiding the unilateralist’s curse and name it after someone who did exactly that. I hadn’t thought about it, but if Quanticle is right, then Petrov was taking unilateralist action. (We could celebrate that his unilateralist action was good, but then the way Petrov Day is being themed here is weird.)
As an aside for those at home, I actually objected to Ben about the “unilateralist”/”unilateralist’s curse” framing separately, because applying this to Petrov seemed a very non-obvious instance of Bostrom’s original meaning*. The unilateralist’s curse (Bostrom, 2013) is about when a group of people all have the same goals but have different estimates of whether an action which affects them all would be beneficial. The curse is that the more people in the group, the more likely someone is to have a badly mistaken estimate of the value of the action and to act on it separately from everyone else. In the case of the US vs. Russia, this is an adversarial/enemy situation. If two people are enemies and one decides to attack the other, it is perhaps correct to say they do so “unilaterally,” but it’s not the phenomenon Bostrom was trying to point out with his paper/introduction of that term, and I’m the kind of person who dislikes when people’s introduced terminology gets eroded.
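Bostrom's point about group size can be illustrated with a back-of-the-envelope calculation (the misjudgment probability is made up): even if each like-minded agent independently misjudges the action as good only rarely, the chance that *someone* in the group acts unilaterally grows quickly with the group's size.

```python
def p_someone_defects(n_agents, p_misjudge):
    """P(at least one of n independent agents misjudges the action and acts unilaterally)."""
    return 1 - (1 - p_misjudge) ** n_agents

# With a 2% per-agent misjudgment rate (hypothetical):
for n in (1, 10, 100):
    print(n, round(p_someone_defects(n, 0.02), 3))
# The probability grows from 2% (n=1) to roughly 87% (n=100).
```

This is why the curse is about same-goal groups specifically: the more people who can each act alone, the more likely the worst estimate in the group is the one that gets acted on.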
I was thinking this at the time I objected, but we could say Petrov had the same values as the rest of the military command but a different estimate of the value of a particular action (what to report), in which case we’re back to the above, where he was taking unilateralist action.
Our button scenario is another question. I initially thought that someone would only press it if they were a troll (typical mind fallacy, you know?) in which case I’d call it more “defection” than “unilateralist” action and so it wasn’t a good fit for the framing either. If we consider that some people might actually believe the correct thing (by our true, joint underlying values) is to press the button, e.g. to save a life via donation, then that actually does fit the original intended meaning.
There are other lessons here:
Practice not pressing big red buttons that would have bad effects, and
Isn’t it terrible that the world is making it easier and easier to do great harm, let’s point this out by doing the same . . . (ironically, I guess)
*I somewhat dislike that the OP has the section header “unilateralist action”, a term taken from Bostrom in one place, but then quotes something he said elsewhere maybe implying that the “building technologies that could destroy the world” was part of the original context for unilateralist’s curse.
. . .
Okay, those be the objections/comments I had brewing beneath the surface. Overall, I think our celebration of Petrov was pretty damn good with good effects and a lot of fun (although maybe it was supposed to be serious...). Ben did a tonne of work to make it happen (so did the rest of the team, especially Ray working hard to make the button).
Agree that it was a significant historical event and case study. My comments are limited mostly to the “unilateralist” angle, and a bit to the point that we should be clear which behavior/reasoning we’re endorsing. I look forward to doing the overall thing again.
The rationality community takes as a given Petrov’s assertion that it was obviously silly for the United States to attack the Soviet Union with a single ICBM.

FWIW, I had taken that as a given.