Are there any reasons for this expectation? In software development generally, and machine learning specifically, it often takes much longer to solve a problem the first time than on subsequent instances. The intuition this primes is that a proto-AGI will likely stumble and require frequent manual assistance the first time it attempts any one Thing, and in general the Thing will take longer to do with an AI than without one. The advantage, of course, is that afterwards similar problems get solved quickly and efficiently, which is what makes working on AI pay off.
AFAICT, the claim that any form of not-yet-superhuman AGI will quickly, efficiently, and autonomously solve the problems it encounters as it tackles ever more general classes of problems (aka “FOOM”) is entirely ungrounded.