This seems to raise the question: what is wrong with our established methods for dealing with these risks? The information you posted, if it is credible, would completely change the story that this post tells. Rather than a scary story about how we may be on the brink of annihilation, it becomes a story about how our organizations have changed to recognize the risks posed by technology, in order to avert them. In the Cold War, our methods of doing so were crude, but they sufficed, and we no longer have the same problems.
Is x-risk nevertheless an under-appreciated concern? Maybe, but I don't find this article convincing. You could make the argument that, along with the development of a technology, understanding of its risks and how to mitigate them also advances. Then it would not require a dedicated effort to understand these risks in advance. So why is the best approach to analyse possible future risks, rather than to work on projects that solve immediate problems and deal with issues as they arise?
Don’t get me wrong, I respect what the guys at SIAI do, but I don’t know the answer to this question. And it seems quite important.
Presumably, in the long term, extinction risk will decrease, as civilisation spreads out.
Increased risks have been accompanied by increased risk control—and it is not obvious how these things balance out. In his latest book, Pinker suggests using death by violence as the metric—and indeed the risk of death by violence is in decline.
Superpowers and world-spanning companies duking it out does not necessarily lead to global security in the short term, though. Most current trends seem positive and probably things will continue to improve—but it is hard to be sure—since technological history is still fairly short.