You’re right, this is a rather mealy-mouthed claim. I’ve edited it to read as follows:
> the empirical claim that we’ll develop AI services which can replace humans at most cognitively difficult jobs significantly before we develop any single strongly superhuman AGI
This would be false if doing well at human jobs requires near-AGI capabilities. I do expect a phase transition: roughly speaking, I expect progress in automation to mostly require more data and engineering, and progress towards AGI to require algorithmic advances and a cognition-first approach. But the claim I’m trying to endorse in the post is a weaker one, which I think Eric would agree with.