I was fully expecting to have to write yet another comment about how human-level AI will not be very useful for a nuclear weapon program. I concede that the dangers mentioned instead (someone putting an AI in charge of a reactor or nuke) seem much more realistic.
Of course, the utility of avoiding sub-extinction negative outcomes with AI in the near future depends heavily on your p(doom). For example, if there is no x-risk, then the first-order effects of avoiding locally bad outcomes related to CBRN hazards are clearly beneficial.
On the other hand, if your p(doom) is 90%, then making sure that non-superhuman AI systems work without incident is akin to clothing kids in asbestos gear so they don’t hurt themselves while playing with matches.
Basically, if you think a road leads somewhere useful, you would prefer that it be smooth, while if it leads off a cliff you would prefer it to be full of potholes, so that travelers might think twice about taking it.
Personally, I tend to favor first-order effects (like fewer crazies being able to develop chemical weapons) over hypothetical higher-order effects (like chemical attacks by AI-empowered crazies leading to a Butlerian Jihad and thereby preventing an unaligned AI from killing all humans). “This looks locally bad, but is actually part of a brilliant 5-dimensional chess move which will lead to better global outcomes” seems like the excuse of every other movie villain.