Nukes and bioweapons don’t FOOM in quite the way AGI is often thought to, because there’s a nontrivial proliferation step following the initial development of the technology. (Perhaps they resemble Oracle AGI in that respect: after being created, the technology has to unlock itself, either suddenly or through a gradual increase in influence, before it can have a direct catastrophic impact.)
I raise this point because the relationship between technology proliferation and GDP may differ from the relationship between technology development and GDP. Moreover, global risks tied to poverty (regional conflicts escalating into biological or nuclear war, poor sanitation leading to pandemic disease, etc.) may compete with risks tied to prosperity.
Of course, these risks might be good things if they provided the slowdown Eliezer wants, gravely injuring civilization without killing it. But I suspect most non-existential catastrophes would have the opposite effect. Long-term thinking and careful risk assessment are easier when societies (and/or theorists) feel less immediately threatened; post-apocalyptic AI research may be more likely to be militarized, centralized, short-sighted, and philosophically unsophisticated, which could actually speed up UFAI development.
Two counter-arguments to the anti-apocalypse argument:
1. A catastrophe that didn’t devastate our intellectual elites would make them more cautious and sensitive to existential risks in general, including UFAI. An AI-related crisis (one that didn’t kill everyone, and came soon enough to alter our technological momentum) would be particularly helpful.
2. A catastrophe would probably favor strong, relatively undemocratic leadership, which might make for better research priorities, since it’s easier to explain AI risk to a few dictators than to a lot of voters.
“So unless you’re quite sure that the benefit in terms of slowing down dangerous technologies like unfriendly AI outweighs the cost in slowing down beneficial technologies…”
As an alternative to being quite sure that the benefits somewhat outweigh the risks, you could somewhat less confidently believe that the benefits overwhelmingly outweigh the risks. In the end, inaction requires just as much moral and evidential justification as action.