Yes, I have. Nuclear war lost its top spot to antimicrobial resistance.
Given recent events on the Korean peninsula, it may seem strange to downgrade the risk of nuclear war. Explanation:
While the probability of conflict is at a local high, the potential severity of the conflict is lower than I’d thought. This is because I’ve downgraded my estimate of how many nukes DPRK is likely to successfully deploy. (Any shooting war would still be a terrible event, especially for Seoul, which is only about 60 km from the border—firmly within conventional artillery range.)
An actual conflict with DPRK may deter other aspiring nuclear states, while a perpetual lack of conflict may have the opposite effect. As the number of nuclear states rises, both the probability and severity of a nuclear war rise, so the expected damage rises as the square (see the sketch after these points). The chance of accidental or terrorist use of nukes rises too.
Rising tensions with DPRK, even without a war, can result in a larger global push for stronger anti-proliferation measures.
Perhaps paradoxically, because (a) DPRK’s capabilities are improving over time and (b) a conflict now ends the potential for a future conflict, a higher chance of a sooner (and smaller) conflict means a lower chance of a later (and larger) conflict.
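A rough sketch of the "rises as the square" claim above, assuming (purely for illustration) that both the probability and the severity of a nuclear war grow roughly linearly with the number of nuclear states $n$:

$$\mathbb{E}[\text{damage}] = P(\text{war}) \times S(\text{war}) \;\propto\; n \times n = n^{2}$$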
You say:
I ended up believing that now nuclear war > runaway biotech > UFAI
What was your ranking before, and on what information did you update?
Why does antimicrobial resistance rank so high in your estimation? It seems like a catastrophic risk at worst, not an existential one. New antibiotics are developed rather infrequently because they’re currently not that profitable. Incentives would change if the resistance problem got worse. I don’t think we’ve anywhere near exhausted antibiotic candidates found in nature, and even if we had, there are alternatives like phage therapy and monoclonal antibodies that we could potentially use instead.
It’s true that the probability of an existential-level AMR event is very low. But the probability of any existential-level threat event is very low; it’s the extreme severity, not the high probability, that makes such risks worth considering.
Concretely? I’m not sure. One way is for a pathogen to jump from animals (or a lab) to humans, and then manage to infect and kill billions of people.
Humanity existed for the great majority of its history without antibiotics.
True. But it’s much easier for a disease to spread long distances and between populations than it was in the past.
Note: I just realized there might be some terminological confusion, so I checked Bostrom’s terminology. My “billions of deaths” scenario would not be “existential,” in Bostrom’s sense, because it isn’t terminal: Many people would survive, and civilization would eventually recover. But if a pandemic reduced today’s civilization to the state in which humanity existed for the majority of its history, that would be much worse than most nuclear scenarios, right?
if a pandemic reduced today’s civilization to the state in which humanity existed for the majority of its history
Why would it? A pandemic wouldn’t destroy knowledge or technology.
Consider the Black Death—it reduced the population of Europe by something like a third, I think. Was it a big deal? Sure it was. Did it send Europe back to the time when it was populated by some hunter-gatherer bands? Nope, not even close.
We have a lot of systems that depend on one another; perhaps a severe enough pandemic would cause a sort of cascading collapse. I’d think it would have to be really bad, though, certainly worse than killing a third of the population.
I am sure there would be some collapse; the question is how long it would take to rebuild. I would imagine that the survivors would just abandon large swathes of land and concentrate in fewer places. Having low population density overall is not a problem—look at e.g. Australia or Canada.
But now we are really in movie-plot territory. Are you prepared for the zombie apocalypse?
I’m not sure how to rank these if the ordering relation is “nearer / more probable than”. Nuclear war seems like the most imminent threat, and UFAI the most inevitable.
We all know the arguments regarding UFAI. The only things that could stop the development of general AI at this point are themselves existential threats; hence the inevitability. I think we already agree that FAI is a more difficult problem than superintelligence, but we might underestimate how much more difficult. The naive approach is to solve ethics in advance. Right. That’s not going to happen in time. Our best-known alternative is to somehow bootstrap machine learning into solving ethics for us without it killing us in the meantime. This still seems really damn difficult.
We’ve already had several close calls with nukes during the Cold War. The USA has been able to reduce its stockpile since the collapse of the Soviet Union, but nukes have since proliferated to other countries. (And Russia, of course, still has leftover Soviet nukes.) If the NPT system fails due to the influence of rogue states like Iran and North Korea, there could be a domino effect as the majority of nations that can afford it race to develop arms to counter their neighbors. This has arguably already happened in the case of Pakistan countering India, which didn’t join the NPT. Now notice that Iran borders Pakistan. How long can we hold the line there?
I should also point out that there are risks even worse than existential ones, which Bostrom called “hellish”, meaning that a human extinction event would be a better outcome than a hellish one. A perverse kind of near miss with AI is the most likely way to produce such an outcome. The AI would have to be friendly enough not to kill us all for spare atoms, yet not friendly enough to produce an outcome we would consider desirable.
There are many other known existential risks, and probably some that are unknown. I’ve pointed out that AMR seems like a low risk, but I also think bioweapons are the next most imminent threat after nukes. Nukes are expensive. We can kind of see them coming and apply sanctions. We’ve developed game-theoretic strategies that make use of the existing weapons unlikely. But bioweapons will be comparatively cheap and stealthy. Even so, I expect any such catastrophe to be self-limiting: the more deadly an infection, the less it spreads. Zombies are not realistic. There would have to be a long incubation period or an animal reservoir, which would give us time to detect and treat it. One would have to engineer a pathogen very carefully to overcome these limitations and reach existential-threat level, but most actors motivated to produce bioweapons would consider the self-limiting nature a benefit, since it avoids blowback. These limitations are also what make me think that AMR events are less of a risk than bioweapons.
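The “more deadly means it spreads less” point can be illustrated with a toy SIR-style model. This is only a sketch, under the assumption that a deadlier pathogen removes its hosts from the infectious pool faster; all parameter values are made up purely for illustration.

```python
# Toy SIR-style sketch: higher lethality shortens the infectious period,
# which lowers R0 and shrinks the outbreak. Parameters are illustrative only.

def outbreak_fraction(beta=0.3, gamma=0.1, mu=0.0, days=1000, dt=0.1):
    """Return the fraction of the population ever infected.

    beta  -- transmission rate per day
    gamma -- recovery rate per day
    mu    -- disease-induced death rate per day (the 'deadliness' knob)
    """
    s, i = 0.999, 0.001  # susceptible and infectious fractions of the population
    for _ in range(int(days / dt)):
        new_infections = beta * s * i * dt
        removals = (gamma + mu) * i * dt  # recoveries plus deaths
        s -= new_infections
        i += new_infections - removals
    return 1.0 - s  # everyone who ever left the susceptible pool


if __name__ == "__main__":
    for mu in (0.0, 0.1, 0.3):
        r0 = 0.3 / (0.1 + mu)
        print(f"deadliness mu={mu:.1f}/day  R0={r0:.2f}  "
              f"ever infected: {outbreak_fraction(mu=mu):.1%}")
```

With these made-up numbers, raising the death rate from 0.0 to 0.3 per day pushes R0 below 1 and the outbreak fizzles instead of reaching most of the population. The caveat, as noted above, is that a long incubation period or an animal reservoir can decouple deadliness from the transmission window.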
What was your ranking before, and on what information did you update?
Well, before it was: runaway bioweapon > UFAI > nuclear extinction, but the recent news about the international situation made me update. As I said elsewhere, I’m adopting the outside view on all these subjects, so I will gladly stand corrected.