I’m broadly sympathetic to the empirical claim that we’ll develop AI services which are limited but still superhuman in many ways significantly before we develop any single strongly superhuman AGI.
Isn’t this claim just already true? (Ever since we made computers that were better than us at arithmetic.)
Do you expect any phase transition between what we have now and what you refer to as “AI services which are limited but still superhuman in many ways”, or is this just a claim that as we go forward more and more tasks will continue to fall into the category of “things computers are better at than people”?
Edit: in other words, what would it mean for this claim to turn out to be false? Just like, extremely limited progress from here on out, and then at some point: boom, AGI?
You’re right, this is a rather mealy-mouthed claim. I’ve edited it to read as follows:
the empirical claim that we’ll develop AI services which can replace humans at most cognitively difficult jobs significantly before we develop any single strongly superhuman AGI
This would be false if doing well at human jobs requires capabilities that are near AGI. I do expect a phase transition—roughly speaking I expect progress in automation to mostly require more data and engineering, and progress towards AGI to require algorithmic advances and a cognition-first approach. But the thing I’m trying to endorse in the post is a weaker claim which I think Eric would agree with.