I’m not sure how to rank these if the ordering relation is “nearer / more probable than”. Nuclear war seems like the most imminent threat, and UFAI the most inevitable.
We all know the arguments regarding UFAI. The only things that could stop the development of general AI at this point are themselves existential threats; hence the inevitability. I think we already agree that FAI is a more difficult problem than superintelligence, but we might underestimate how much more difficult. The naive approach is to solve ethics in advance. Right. That’s not going to happen in time. Our best known alternative is to somehow bootstrap machine learning into solving ethics for us without it killing us in the meantime. That still seems really damn difficult.
We’ve already had several close calls with nukes during the Cold War. The USA has been able to reduce her stockpile since the collapse of the Soviet Union, but nukes have since proliferated to other countries. (And Russia, of course, still has leftover Soviet nukes.) If the NPT system fails due to the influence of rogue states like Iran and North Korea, there could be a domino effect as the majority of nations that can afford it race to develop arms to counter their neighbors. This has arguably already happened with Pakistan arming to counter India; neither country ever joined the NPT. Now notice that Iran borders Pakistan. How long can we hold the line there?
I should also point out that there are risks even worse than existential ones, which Bostrom called “hellish”, meaning that a human extinction event would be a better outcome than a hellish one. A perverse kind of near miss with AI seems the most likely way to produce such an outcome: the AI would have to be friendly enough not to kill us all for spare atoms, yet not friendly enough to produce an outcome we would consider desirable.
There are many other known existential risks, and probably some that are unknown. I’ve pointed out that AMR seems like a low risk, but I also think bioweapons are the next most imminent threat after nukes. Nukes are expensive; we can kind of see them coming and apply sanctions, and we’ve developed game-theoretic strategies to make the use of existing weapons unlikely. Bioweapons, by contrast, will be comparatively cheap and stealthy. Even so, I expect any such catastrophe would likely be self-limiting: the more deadly an infection, the less it spreads. Zombies are not realistic. For a deadly pathogen to spread widely, there would have to be a long incubation period or an animal reservoir, which would give us time to detect and treat it. One would have to engineer a pathogen very carefully to overcome these limitations and reach existential-threat level, and most actors motivated to produce bioweapons would consider the self-limiting nature a benefit anyway, since it avoids blowback. These limitations are also why I think AMR events are a lower risk than bioweapons.
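To make the virulence trade-off concrete, here is a minimal SIR-style sketch (my own toy model with made-up parameters, not anything from the epidemiology literature): with the same transmissibility, a pathogen that kills or incapacitates its hosts faster gives each case less time to transmit, so its effective R0 drops and the outbreak can fail to take off at all.

```python
# Toy SIR model (hypothetical parameters) illustrating why deadlier, faster-acting
# pathogens tend to infect fewer people overall: hosts are removed from the
# transmitting pool sooner, so R0 = beta / gamma is lower.

def sir_final_size(beta, gamma, days=365, dt=0.1, n=1_000_000, i0=10):
    """Fraction of the population ever infected, via simple Euler integration."""
    s, i, r = n - i0, i0, 0.0
    for _ in range(int(days / dt)):
        new_infections = beta * s * i / n * dt   # S -> I
        removals = gamma * i * dt                # I -> R (recovery, isolation, or death)
        s -= new_infections
        i += new_infections - removals
        r += removals
    return r / n

if __name__ == "__main__":
    beta = 0.3  # same transmissibility in both scenarios
    scenarios = [("slow-acting, hosts transmit ~10 days", 0.1),
                 ("fast and lethal, hosts removed in ~2 days", 0.5)]
    for label, gamma in scenarios:
        print(f"{label}: R0 = {beta / gamma:.1f}, "
              f"attack rate = {sir_final_size(beta, gamma):.1%}")
```

In this sketch the faster-acting pathogen falls below the R0 = 1 threshold and fizzles out, which is the rough sense in which deadlier infections spread less; a long incubation period or an animal reservoir is the kind of feature that would restore transmission time and evade that limit.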