When I’ve previously considered human extinction caused by nuclear war, I’ve known that the immediate blasts wouldn’t kill everyone. But what are the effects of a much smaller population with fewer habitable areas and less access to resources? That’s doubly concerning since the regions where more people survive will almost by definition be developing countries that are suddenly cut off from imports. I believe humans as a species would likely survive, but I also suspect it would be the end of modern civilization. Adding to that, I’ve seen hypotheses that the resources still underground yet easily accessible with non-modern technology, especially fossil fuels, would not be enough to “reboot” civilization. Overall, even though the human species would very likely not go extinct, the more likely outcome is the end of humanity as a potential space-faring species.
I really don’t think fossil fuel depletion is very likely to permanently curtail humanity’s potential in a nuclear or other collapse scenario. I’ve seen this point argued a fair amount, though, so I think it’s worth taking the hypothesis seriously. I’d love to see an in-depth analysis of this question.
I appreciate how Toby Ord considers “knock-on effects” in his modelling of existential risks in “The Precipice”. A catastrophe doesn’t have to cause extinction to count as an existential threat: its knock-on effects would impair our preparedness for whatever comes next.
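A toy decomposition (my own illustration, not Ord’s formalism or numbers) makes the point, treating unrecoverable collapse as an existential outcome in its own right:

$$P(\text{existential catastrophe}) \approx P(\text{extinction}) + P(\text{collapse}) \cdot P(\text{no recovery} \mid \text{collapse})$$

Even when the direct-extinction term is small, the collapse path can dominate once knock-on effects push the probability of never recovering upward.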
Is this isomorphic to the framing of existential risk as one category? (Which is not something I came up with, I just distilled the idea.) It still seems to me like the idea that first- and second-order effects are qualitatively different is just a mistake born of the “risk from xxx” framing.
Yes, it looks isomorphic. Thanks for sharing your write-up. You’ve captured this idea well.
Also consider whether a “second-try”/rebooted human civilization would be more or less likely to eventually succeed at space-faring or AGI construction.
Speculating about this is a fun exercise. I’d argue the answer is: less likely.
The survivors might have a more substantial commitment to life affirmation, given that the fragility of life would be so fresh in their minds following Armageddon. I’d argue this would have a minimal effect. We know that the dinosaurs went extinct, and that the average lifespan of a mammalian species is about one million years. We know that we have fought world wars, and that life is precious and remarkably rare in the universe. Yet, in aggregate, we still don’t care much about safeguarding the future of life: existential risk reduction doesn’t factor heavily into our decision-making or how we allocate our resources.
In terms of technological progress, I think a post-nuclear-war civilisation would likely bootstrap itself fairly quickly, given the abundance of already-extracted raw materials and knowledge left behind. On the whole, though, Armageddon would be a severe hindrance to technological progress: political, social and economic stability appears advantageous to the creation of knowledge.