I don’t have the reference handy, but he wasn’t saying we should spend 20 years of armchair thought developing AGI theory before we start writing any code (I’m sure he knows better than that); he was saying we should forget about AGI completely until we’ve got another 20 years of general technological progress under our belts.
Not general technological progress surely, but the theory and tools developed by working on particular machine learning problems and methodologies?
Those would seem likely to be helpful indeed. Better programming tools might also help, as would additional computing power (not so much because computing power is actually a limiting factor today, but because we tend to scale our intuition about available computing power to what we physically deal with on an everyday basis, which for most of us is a cheap desktop PC; we flinch away from designs whose projected requirements would exceed such a PC, and increasing the baseline makes us less likely to flinch away from good designs).