My short answer is that I think you’re right enough here that I should probably walk back my claim somewhat, or at least justify it better than it currently is. (I.e. I notice that I have a hard time answering this in a way I feel confident and good about)
The mechanism by which I updated wasn’t about AI boosting science and economy. It’s more like:
Prior to the past few years, my understanding of how AIs might behave was almost entirely theoretical. In the absence of being able to do empiricism, working with theory is important. But by now we’ve seen things that before we could only theorize about.
I think my “2 years” remark is maybe better framed as 3 years. In the past 3 years, we’ve seen 3 milestones that struck me as significant:
DeepMind creating a thing that can solve arbitrary Atari games given pixel input. (I think this was 3 or 4 years ago, and was a significant update for me about agent-like things being able to interact with the world)
AlphaGo (which came sooner than people were expecting, even after the Atari stuff)
AlphaGo Zero (which, my impression is, also came sooner than people expected, even after AlphaGo, and where the improvements came from simplifying the architecture)
I do think I was overfixated on AlphaGo in particular. While writing the OP, rereading Sarah’s posts that emphasize the other domains where progress isn’t so incredible did slightly reverse my “oh god timelines are short” belief.
But Sarah’s posts still note that we’ve been seeing improvements in gameplay in particular, which seems like the domain most relevant to AGI, even if the mechanism is “deep learning allows us to better leverage hardware improvements.”