It is my intuition that if something as complex and powerful as human-level intelligence can be engineered in the foreseeable future, then it would have to use some kind of bootstrapping. I admit it is possible that I’m wrong and that progress toward AGI will in fact come through a very long sequence of small improvements, with the AGI given no introspective / self-modification powers. In that scenario, a “proto-singularity” is a real possibility. However, what I think will happen is that we won’t make significant progress before we develop a powerful mathematical formalism. Once such a formalism exists, it will be much more efficient to use it to build a pseudo-narrow self-modifying AI than to keep improving AI “brick by brick”.