Christiano predicts progress will be (approximately) a smooth curve, whereas Yudkowsky predicts there will be discontinuous-ish “jumps”, but there’s another thing that can happen that both of them seem to dismiss: progress hitting a major obstacle and plateauing for a while (i.e. the progress curve looking locally like a sigmoid). I guess that the reason they dismiss it is related to this quote by Soares:
I observe that, 15 years ago, everyone was saying AGI is far off because of what it couldn’t do—basic image recognition, Go, StarCraft, Winograd schemas, programmer assistance. But basically all that has fallen. The gap between us and AGI is made mostly of intangibles.
However, I think this is not entirely accurate. Some games are still unsolved without “cheating”, where by cheating I mean using human demonstrations or handcrafted rewards; that includes Montezuma’s Revenge, StarCraft II, and Dota 2 (and Dota 2 with unlimited hero selection is even more unsolved). Moreover, we haven’t seen RL reach superhuman performance on any task in which the environment is substantially more complex than the agent in important ways (this rules out all video games, unless winning the game requires a good theory of mind of your opponents[1], which is arguably never the case for zero-sum two-player games). Language models have made impressive progress, but I don’t think they are superhuman along any interesting dimension. Classifiers still struggle with adversarial examples (although this is not necessarily an important limitation; maybe humans have “adversarial examples” too).
So, it is certainly possible that it’s a “clear runway” from here to superintelligence. But I don’t think it’s obvious.

[1] I know there are strong poker AIs, but I suspect they win via something other than theory of mind. Maybe someone who knows the topic can comment.
My Eliezer-model is a lot less surprised by lulls than my Paul-model (because we’re missing key insights for AGI, progress on insights is jumpy and hard to predict, the future is generally very unpredictable, etc.). I don’t know exactly how large of a lull or winter would start to surprise Eliezer (or how much that surprise would change if the lull is occurring two years from now, vs. ten years from now, for example).
In Yudkowsky and Christiano Discuss “Takeoff Speeds”, Eliezer says:

I have a rough intuitive feeling that it [AI progress] was going faster in 2015-2017 than 2018-2020.
So in that sense Eliezer thinks we’re already in a slowdown to some degree (as of 2020), though I gather you’re talking about a much larger and more long-lasting slowdown.
I generally expect smoother progress, but predictions about lulls are probably dominated by Eliezer’s shorter timelines. Also, lulls are generally easier to get than spurts: e.g., I think that if you just slow investment growth you get a lull, and that’s not too unlikely (whereas part of why it’s hard to get a spurt is that investment rises to levels where you can’t rapidly grow it further).
Makes some sense, but Yudkowsky’s prediction that TAI will arrive before AI has a large economic impact does forbid a lot of plateau scenarios. Given a plateau that’s sufficiently high and sufficiently long, AI will make it into the market, I think. Even if regulatory hurdles are the bottleneck for a lot of applications at the moment, eventually AI will become important in some country, and the others will have to follow or fall behind.
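To make the plateau shape concrete: here is a minimal sketch, assuming the progress curve is locally logistic (my own illustration, not a model anyone above committed to). The left half of a logistic curve is nearly indistinguishable from exponential growth, and the right half flattens toward a ceiling, which is why a plateau can be hard to spot until you are in it:

\[
f(t) = \frac{L}{1 + e^{-k(t - t_0)}},
\qquad
f(t) \approx
\begin{cases}
L\, e^{\,k(t - t_0)} & \text{for } t \ll t_0 \text{ (looks like smooth exponential progress)} \\
L \bigl(1 - e^{-k(t - t_0)}\bigr) & \text{for } t \gg t_0 \text{ (a plateau near the ceiling } L \text{)}
\end{cases}
\]

Here \(L\) is the (hypothetical) capability ceiling imposed by the obstacle, \(k\) the growth rate, and \(t_0\) the inflection point.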