Here are some things that shouldn’t happen, on my analysis:
An ad-hoc self-modifying AI as in (1) undergoes a cycle of self-improvement, starting from stupidity, that carries it up to the level of a very smart human—and then stops, unable to progress any further.
I’m sure this has been discussed elsewhere, but it seems possible to me that progress may stop once the mind becomes too complex for it to make working changes to itself.
I used to think that a self-improving AI would foom because, as it gets smarter, improving itself gets easier. But it may instead get harder, because each round of self-modification may turn it into more and more of an unmaintainable mess.
What if creating unmaintainable messes is the only way that intelligences up to very-smart-human level know how to create intelligences up to very-smart-human level? That would make that level a hard upper limit on a self-improving AI.
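The dynamic I have in mind can be made concrete with a toy simulation. Everything in the sketch below is an assumption I’m adding purely for illustration (the functional forms, the constants, the abstract `intelligence` and `complexity` scores): each self-modification attempt is more likely to work the smarter the mind is, but less likely the messier its design has become, and every successful change adds a little smartness and a lot of mess.

```python
import math
import random

random.seed(0)

intelligence = 1.0   # abstract "smartness" score (assumed unit)
complexity = 1.0     # abstract measure of how tangled the design has become

for step in range(10_000):
    # Assumed form: smarter minds attempt better changes, but the odds of a
    # change actually working decay exponentially as the design gets messier.
    p_success = min(1.0, intelligence * math.exp(-complexity / 20.0))
    if random.random() < p_success:
        intelligence += 0.1   # each working change is a modest improvement...
        complexity += 0.3     # ...but makes the next change harder to pull off
    if step % 2000 == 0:
        print(f"step={step:5d}  intelligence={intelligence:6.1f}  "
              f"complexity={complexity:7.1f}  p_success={p_success:.4f}")
```

With these particular constants the run effectively stalls after a few hundred successful changes; the exact ceiling is an artifact of the numbers I picked, but the qualitative shape (slowing down instead of fooming) is the point.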