A good default assumption seems to be that increasing the rate of technological progress is neutral with respect to different outcomes.
I think the second-order terms are important here. Increasing technological progress benefits hard ideas (AI, nanotech) more than comparatively easy ones (atomic bombs, biotech?). Both categories are scary, but I think the second is scarier, especially since we can use AI to counteract existential risk far more than we can use the easier technologies. Humanity will die 'by default': we already have the technology to kill ourselves, but not yet the technology that could prevent such a thing.