Although there are countless existential risks that might cause human extinction, I still think that an AI with a utility function that conflicts with human existence is the one issue we should spend the most resources to fight. Why?
First, an AI would be enormously useful, so you can be relatively sure that work on it will continue until the job is done. Other disasters like asteroid strikes, nuclear war, and massive pandemics are all possible, but at least there is no large economic and social incentive pushing us closer to them.
Second, we have already done a lot of preparation for how to survive other threats once it is too late to stop them. We keep tabs on the largest asteroids in the solar system and can predict their courses fairly well for decades to come, so if we discovered one with a >1% chance of hitting the Earth, I think even our current space program would be enough to establish an emergency colony on Mars or a moon of Jupiter. And although there are diseases we cannot cure, we at the very least have quarantine systems and weapons to isolate people carrying a pandemic disease. On top of that, we have immune systems that have survived threat after threat for thousands of years by adapting quickly, and medical technology that is only getting better at diagnosis and treatment, so the vast majority of potential human-destroyers are stopped before they ever get anywhere.

An unfriendly superintelligence, by contrast, could adapt to our defences faster than we could create them, could wait as long as necessary for the ideal time to strike, and could very easily conceal any behaviours that would act as a warning to humans until it had reached the point of being unstoppable. I really cannot think of a risk management system that could be put in place to stop an AI once it was fully developed and in use. [Edit: Didn’t mean to make such a long post]