My skepticism is mainly that this would be quicker than normal human iteration, or that it would substantially improve upon the strategy of simply buying more hardware. That said, as the recent case of e.g. RoBERTa shows, there are occasionally insights that substantially improve a single AI system. I just remain skeptical that a single human-level AI system would produce those insights faster than a regular team of human experts.
In other words, my opinion of recursive self-improvement in this narrow case is that it isn't a fundamentally different strategy from human oversight and iteration. It can be used to automate some parts of the process, but I don't think that foom is necessarily implied in any strong sense.
The default argument that such a development would lead to a foom is that insight-driven doublings of speed mathematically reach a singularity in finite time once the speed increases themselves pay insight dividends. You can't reach that singularity with a fleshbag in the loop (though that may not matter much in practice if, with one in the loop, you still merely double every day).
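To make that arithmetic concrete, here is a minimal sketch (the rate constant $a$, human review rate $r$, and initial speed $s_0$ are my own illustrative symbols, not part of the original argument): suppose each insight doubles the system's speed, and the machine generates insights at a rate proportional to its current speed. Then the expected wait for the $(k{+}1)$-th insight, at speed $s_0 2^k$, is $1/(a s_0 2^k)$, and the total time for all doublings is

$$t_\infty \;=\; \sum_{k=0}^{\infty} \frac{1}{a\, s_0\, 2^{k}} \;=\; \frac{2}{a\, s_0} \;<\; \infty,$$

so capability diverges before a finite date. If instead every insight must pass a human reviewer working at a fixed rate $r$, then the doubling count grows only as $n = r t$ and speed as $s(t) = s_0\, 2^{r t}$: plain exponential growth, one doubling per $1/r$, no matter how fast the machine itself becomes.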
Depending on how the speed increases scale with insight and oversight, there may be a perverse incentive to cut yourself out of your own loop before the other guy cuts himself out of his.