Whatever your estimates for AGI timelines were two years ago, they should probably be shorter and more explicit this year.
Should they? (the rest of this comment is me thinking out loud)
AlphaGo and the breakthroughs in the Atari domain were reasonable things to update on, but those happened in 2015 and early 2016, so around two years ago. Thinking about progress since then, GANs have done interesting stuff and there’s been progress especially in image recognition and generation; but many of the results seem more incremental than qualitative (the DeepDream stuff in 2015 had already caused me to guess that image generation was on the horizon).
Intuitively, it does feel like AI results have been coming out faster recently, so in that sense there might be reason to update somewhat in the direction of shorter timelines—it shows that the qualitative breakthroughs of the earlier years could be successfully built on. But off the top of my head, it’s not clear to me that anything would obviously contradict a model of “we’re seeing another temporary AI boom enabled by new discoveries, which will run out of steam once the newest low-hanging fruit get picked”—a model which one might also have predicted two years ago.
While I’m seeing incremental progress proceeding faster now than I probably would have predicted earlier, it mostly seems to stay within the boundaries of the deep learning paradigm as implied by the 2015/early 2016 state of the art. So it feels like we may run up against the limitations of the current paradigm (which people are already writing papers about) faster than expected, but there isn’t any indication that we would get past those limitations any faster. In 2015 and early 2016 there seemed to be rough agreement among experts that deep learning was enabling us to implement decades-old ideas because we finally had the hardware for them, but that there hadn’t been any new ideas or real progress in understanding intelligence; I’m under the impression that this still mostly reflects the current consensus.
One exception to the “looks like mostly incremental progress” pattern is Google Neural Machine Translation (November 2016, so less than two years ago), which I wouldn’t have predicted based on the earlier work.
On the other hand, one could make the argument that this wave of AI is going to boost economic growth and science; if e.g. various fields of science end up incorporating more AI techniques and accelerate as a result, then that could end up feeding back into AI and accelerating it further. In particular, advances in something like neuroscience could accelerate timelines, and deep learning is indeed being applied to stuff like neuroimaging.
Overall, it does feel to me like a reasonable claim that we should expect somewhat shorter AGI timelines now than two years ago, with most of the update coming from AI boosting science and the economy; but I worry that this feels more like an intuition driven by the ease of having found a plausible-sounding story (“AI will boost science in general and some of that progress will come back to boost AGI development”) rather than any particularly rigorous evidence.
My short answer is that I think you’re right enough here that I should probably walk back my claim somewhat, or at least justify it better than I currently have. (I.e., I notice that I have a hard time answering this in a way that I feel confident and good about.)
The mechanism by which I updated wasn’t about AI boosting science and the economy. It’s more like:
Prior to the past few years, my understanding of how AIs might behave was almost entirely theoretical. In the absence of being able to do empiricism, working with theory is important. But by now we’ve seen things that before we could only theorize about.
I think my “2 years” remark is maybe better framed as 3 years. In the past 3 years, we’ve seen 3 milestones that struck me as significant:
DeepMind creating a thing that can solve arbitrary Atari games given pixel input. (I think this was 3 or 4 years ago, and it was a significant update for me about agent-like things being able to interact with the world.)
AlphaGo (which came sooner than people were expecting, even after the Atari stuff)
AlphaGo Zero (which, as far as I can tell, also came sooner than people expected, even after AlphaGo, and where the improvements came from simplifying the architecture)
I do think I was overly fixated on AlphaGo in particular. While writing the OP, rereading Sarah’s posts that emphasize the other domains where progress isn’t so incredible did slightly reverse my “oh god timelines are short” belief.
But Sarah’s posts still note that we’ve been seeing improvements in gameplay in particular, which seems like the domain most relevant to AGI, even if the mechanism is “deep learning allows us to better leverage hardware improvements.”
“On the other hand, one could make the argument that this wave of AI is going to boost economic growth and science”—One can make a much more direct argument than this. The rate of incremental progress is important because it determines the amount of money flowing into the field and the number of programmers studying AI. Now that the scope of tasks solvable by AI has increased vastly, the size of the field has been permanently raised, and this increases the chance that innovations in general will occur. Further, there has been an increase in optimism about the power of AI, which encourages people to be more ambitious.
“AI” may be too broad of a category, though. As an analogy, consider that there is currently a huge demand for programmers who do all kinds of website development, but as far as I know, this hasn’t translated into an increased number of academics studying—say—models of computation, even though both arguably fall under “computer science”.
Similarly, the current wave of AI may get a lot of people into doing deep learning and building machine learning models for specific customer applications, without increasing the number of people working on AGI much.
It’s true that there is now more excitement for AI, including more excitement for AGI. On the other hand, more excitement followed by disillusionment has previously led to AI winters.