I once wrote about levels of AI self-improvement and came to a similar conclusion: each more advanced version of such an AI will require increasingly extensive testing to ensure its stability and alignment, and the complexity of this testing task will grow very quickly, thus slowing down any intelligence explosion. This, however, does not preclude the creation of a Dangerous AI (one capable of solving the task of human extinction while being only slightly superhuman in some domains).