Because, so the argument goes, if the AI is powerful enough to pose any threat at all, then it is surely powerful enough to improve itself (in the slowest case, by coercing or bribing human researchers until it can eventually self-modify). Unlike humans, the AI has no skill ceiling, so the recursive feedback loop of improvement will go FOOM in a relatively short amount of time, though exactly how long is a matter of debate.
Isn’t there a certain amount of disagreement about whether FOOM is necessarily what will happen?
People also talk about a slow takeoff being risky. See the “Why Does This Matter” section here.
I don’t doubt that slow take-off is risky. I rather meant that FOOM is not guaranteed, and the risk from a not-immediately-omnipotent AI may look more like a catastrophic, painful war.