On September 26, 1983, Soviet officer Stanislav Petrov saved the world.
Allegedly saved the world. It actually seems pretty unlikely that the world was saved by Petrov. For one thing, Wikipedia says:
There are varying reports whether Petrov actually reported the alert to his superiors and questions over the part his decision played in preventing nuclear war, because, according to the Permanent Mission of the Russian Federation, nuclear retaliation is based on multiple sources that confirm an actual attack.
Given that this is coming from the sort of people who thought that setting up the Dead Hand was a good idea; given that ass-covering and telling the public less than the truth were standard operating procedure in Russia; and given everything we know about the American government's incompetence, paranoia, greed, and destructive experiments & actions (like setting PAL locks to zero, to pick a nuclear example), and that nuclear authority really was delegated to individual officers (this and other scandalous aspects came up recently in the New Yorker, actually: http://www.newyorker.com/online/blogs/newsdesk/2014/01/strangelove-for-real.html)...
I see zero reason to place any credence in their claims. This is fruit of the poisonous tree. They have reason to lie. I have no more reason to disbelieve Petrov's account than to disbelieve accounts of other similar incidents (like the Cuban Missile Crisis's submarine incident).
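For what it's worth, the Mission's claimed decision rule is easy to state, whether or not you believe it was followed: no retaliation unless several independent sensor systems agree. A minimal sketch of that kind of multi-source confirmation gate (the sensor names and the threshold here are hypothetical illustrations, not actual Soviet procedure):

```python
# Hypothetical multi-source confirmation gate. The sensor names and the
# threshold are illustrative only, not actual Soviet procedure.
SOURCES = ("satellite", "ground_radar", "over_horizon_radar")
CONFIRMATIONS_REQUIRED = 2  # "multiple sources" -- assumed threshold

def attack_confirmed(alerts: dict) -> bool:
    """Return True only if enough independent sources report an attack."""
    confirmations = sum(bool(alerts.get(source)) for source in SOURCES)
    return confirmations >= CONFIRMATIONS_REQUIRED

# In the 1983 incident, only the Oko satellite system reported launches,
# so under a rule like this a single-source alert could not trigger retaliation.
print(attack_confirmed({"satellite": True}))  # False
```

Under a rule like this, Petrov's single satellite alert could never have launched anything by itself, which is exactly the Mission's point.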
Very interesting. But the standard account says that Russian authorities were afraid of an American attack at the time, and were likely to make the wrong decision regardless of standard procedure. So the parent by itself doesn't address the relevant claim.
Also, the Wikipedia quote made it sound like Petrov might have reported sighting missiles after all (perhaps with a disclaimer). This is neither cited nor credible. If one of his superiors arguably saved the world by following protocol, there's a high probability that Putin's people would have mentioned it in their press release.

And that's why I hate the Petrov story. It's ridiculous how otherwise sensible people are willing to believe it.
This raises the question: what is wrong with our established methods for dealing with these risks? The information you posted, if it is credible, would completely change the story that this post tells. Rather than a scary story about how we may be on the brink of annihilation, it becomes a story about how our organizations have changed to recognize the risks posed by technology and avert them. During the Cold War, our methods of doing so were crude, but they sufficed, and we no longer have the same problems.
Is x-risk nevertheless an under-appreciated concern? Maybe, but I don't find this article convincing. You could make the argument that, along with the development of a technology, understanding of its risks and how to mitigate them also advances. Then it would not require a dedicated effort to understand these risks in advance. So why is the best approach to analyse possible future risks, rather than to work on projects which solve immediate problems and deal with issues as they arise?
Don’t get me wrong, I respect what the guys at SIAI do, but I don’t know the answer to this question. And it seems quite important.
Presumably, in the long term, extinction risk will decrease, as civilisation spreads out.
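The usual argument for that is independence: if whatever catastrophe could kill one self-sufficient colony strikes each of n colonies independently with probability p per period, all n fail together with probability p^n, which shrinks fast as n grows. A toy calculation (the per-colony risk figure is made up):

```python
# Toy illustration of risk dilution from spreading out, assuming
# (unrealistically) that catastrophes strike self-sufficient colonies
# independently. The per-colony risk figure is invented for illustration.
p_per_colony = 0.10  # hypothetical chance one colony is wiped out per century

for n_colonies in (1, 2, 5, 10):
    p_extinction = p_per_colony ** n_colonies  # every colony must fail at once
    print(f"{n_colonies:2d} colonies -> extinction risk {p_extinction:.0e}")
```

The independence assumption does all the work here; risks that hit every colony at once (a paperclipping AI, say) are not diluted this way.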
Increased risks have been accompanied by increased risk control, and it is not obvious how these things balance out. In his latest book, Pinker suggests using deaths from violence as the metric, and indeed the risk of death by violence is in decline.
Superpowers and world-spanning companies duking it out does not necessarily lead to global security in the short term, though. Most current trends seem positive, and things will probably continue to improve, but it is hard to be sure, since technological history is still fairly short.