Isn’t it conceivable that improving intelligence becomes difficult faster than the AI is scaling? For example, couldn’t it be that somewhere around human-level intelligence, each marginal percent of improvement becomes twice as difficult as the previous one? I admit that doesn’t sound very likely, but if it were the case, then even a self-improving AI would potentially improve itself very slowly, and perhaps sub-linearly rather than exponentially, wouldn’t it?
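
To make the intuition concrete, here is a toy sketch of that regime (all numbers and assumptions are purely illustrative, not a model of real AI: I assume each marginal percent of capability costs twice as much “research effort” as the last, while the AI produces effort at a rate proportional to its current capability):

```python
def simulate(total_time: float, base_capability: float = 100.0) -> list[tuple[float, float]]:
    """Toy trajectory: doubling cost per marginal percent vs. linearly scaling research speed."""
    t = 0.0
    capability = base_capability      # arbitrary units; 100 = "human level" (assumption)
    k = 0                             # index of the next marginal percent
    trajectory = [(t, capability)]
    while t < total_time:
        cost = 2.0 ** k               # each percent twice as hard as the previous one
        rate = capability             # research output scales with current capability
        t += cost / rate              # time needed to earn the next percent
        capability += 1.0
        k += 1
        trajectory.append((t, capability))
    return trajectory

if __name__ == "__main__":
    for t, c in simulate(total_time=1000.0)[::5]:
        print(f"t = {t:10.1f}   capability = {c:6.1f}")
```

In this toy setup the exponential cost quickly overwhelms the linear gain in research speed, so capability creeps up only a handful of percent even over long timescales, i.e. growth is sub-linear rather than an intelligence explosion.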