I expect that human-level language processing is enough to construct human-level programming and mathematical research ability: that is, the ability to complete a research diary the way a human would, by matching against patterns it has previously seen, just as human mathematicians do. That should be capability enough to go as foom as possible.
If AI is limited by hardware rather than insight, I find it unlikely that a 300-trillion-parameter Transformer trained to reproduce math/CS papers would be able to “go foom.” In other words, while I agree that the system I have described would likely be able to do human-level programming (though it would still make mistakes, just like human programmers!), I doubt that this would necessarily cause it to enter a quick transition to superintelligence of any sort.
I suspect the system I have described above would be well suited to automating some types of jobs, but would not necessarily alter the structure of the economy to a radical degree.
It wouldn’t necessarily cause such a quick transition, but it could easily be made to. A human with access to this tool could iterate designs very quickly, and he could take himself out of the loop by letting the tool predict and execute his actions as well, by piping its code ideas directly into a compiler, or in some other way the tool thinks up.
My skepticism is mainly about whether this would be quicker than normal human iteration, or whether it would substantially improve upon the strategy of simply buying more hardware. That said, as we see in the recent case of, e.g., RoBERTa, there are a few insights that substantially improve a single AI system; I just remain skeptical that a single human-level AI system would produce those insights faster than a regular team of human experts.
In other words, my opinion of recursive self-improvement in this narrow case is that it isn’t a fundamentally different strategy from human oversight and iteration. It can automate some parts of the process, but I don’t think that foom is necessarily implied in any strong sense.
The default argument that such a development would lead to a foom is that a regular, insight-driven doubling of speed mathematically reaches a singularity in finite time once the speed increases themselves pay insight dividends. You can’t reach that singularity with a fleshbag in the loop (though that may hardly matter if, with him in the loop, you merely double every day).
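A toy model makes the shape of that argument concrete (this is just an illustrative sketch, assuming the simplest functional forms rather than anything claimed above): let $S$ be the system’s speed, i.e. subjective work done per unit of wall-clock time. If insights accrue in proportion to subjective work, and each insight multiplies $S$, then without a human in the loop

$$\frac{d}{dt}\log S \propto S \;\Longrightarrow\; \frac{dS}{dt} = c\,S^{2} \;\Longrightarrow\; S(t) = \frac{S_0}{1 - c\,S_0\,t},$$

which diverges at the finite time $t^{*} = 1/(c\,S_0)$. With a human gating each improvement, insights instead arrive at some fixed wall-clock rate $r$, so $\frac{d}{dt}\log S = r$ and $S(t) = S_0\,e^{rt}$: a regular doubling every $\ln 2 / r$, fast, but never a finite-time singularity.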
For certain shapes of how speed increases depend on insight and oversight, there may be a perverse incentive to cut yourself out of your loop before the other guy cuts himself out of his.