When there is a simple enlightening experiment that can be constructed out of available parts (including theories that inform construction), expert intuition can find it without clear understanding. When there have been no new parts for a while and many experiments have already been tried, that is evidence that further blind search is less likely to produce results, and that the experiments that remain worthwhile are more complicated ones which can only be designed with stronger understanding.
Recently there have been many new parts for AI tinkering, some themselves obtained from blind experimentation (scaling gives new capabilities that couldn’t be predicted to result from particular scaling experiments). Not enough time and effort has been spent to rule out further significant advancement from simple tinkering with these new parts, and scaling itself hasn’t run out of steam yet; on its own it might deliver even more new parts for further tinkering.
So while it’s true that there is no reason to expect specific advancements, there is still reason to expect advancements of unspecified character for at least a few years, more of them than usual. This wave of progress might run out of steam before AGI, or it might not; there is no clear theory to say which. Current capabilities already seem impressive enough that even modest unpredictable advancement might prove sufficient to reach AGI, an observation that distinguishes the current wave of AI progress from previous ones.
I think the current wave is special, but that’s a very far cry from being clearly on the ramp up to AGI.
The point is, it’s still a matter of intuitively converting the impressiveness of current capabilities, together with the new parts whose tinkering hasn’t been done yet, into a probability of this wave petering out before AGI. The arguments for AGI “being overdetermined” can be amended into arguments for particular (kinds of) sequences of experiments looking promising, which shifts the estimate once they are taken into account. Since failures of such experiments are not independent, the estimate of reaching AGI in this wave can start going down as soon as scaling stops producing novel capabilities, or reaches the limits of economic feasibility, or a year or two passes without significant breakthroughs.
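To make the correlated-failures point concrete, here is a minimal sketch under an assumed toy model (the structure and every number below are illustrative assumptions, not anything from this exchange): experiments are treated as sharing one latent cause, whether this wave can in fact reach AGI, and as conditionally independent given it, so a short run of failed bellwether experiments speaks mostly about that shared cause and moves the estimate quickly.

```python
# Toy Bayesian sketch (assumed model, made-up numbers): experiment outcomes
# are correlated because they share one latent cause, H = "this wave of
# progress can reach AGI". Given H they are conditionally independent.

def posterior_after_failures(prior_h: float,
                             fail_given_h: float,
                             fail_given_not_h: float,
                             n_failures: int) -> float:
    """P(H | n consecutive failed 'bellwether' experiments) by Bayes' rule."""
    like_h = fail_given_h ** n_failures          # P(failures | H)
    like_not_h = fail_given_not_h ** n_failures  # P(failures | not H)
    evidence = prior_h * like_h + (1 - prior_h) * like_not_h
    return prior_h * like_h / evidence

if __name__ == "__main__":
    prior = 0.5            # assumed prior that the wave can reach AGI
    fail_if_true = 0.5     # even if H holds, any given experiment often fails
    fail_if_false = 0.9    # if H is false, failures are the norm
    for n in range(5):
        p = posterior_after_failures(prior, fail_if_true, fail_if_false, n)
        print(f"after {n} failed experiments: P(H) = {p:.2f}")
    # Because each failure is evidence about the shared cause H rather than
    # about that experiment alone, the estimate drops fast: roughly
    # 0.50 -> 0.36 -> 0.24 -> 0.15 -> 0.09 with these made-up numbers.
```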
Right now it’s looking grim, but a claim I agree with is that planning for the possibility of AGI taking 20+ years is still relevant; nobody actually knows that near-term AGI is inevitable. I think the following few years will change this estimate significantly either way.
I’m not really sure whether or not we disagree. I did put “3%-10% probability of AGI in the next 10-15ish years”.
Well, I hope that this is a one-time thing. I hope that if in a few years we’re still around, people go “Damn! We maybe should have been putting a bit more juice into decades-long plans! And we should do so now, though a couple more years belatedly!”, rather than going “This time for sure!” and continuing not to invest in the decades-long plans. My impression is that a lot of people used to work on decades-long plans and then recently shifted to 3-10 year plans, so it’s not like everyone is being obviously incoherent. But I also have the impression that investment in decades-long plans is mistakenly low; when I propose such plans, almost no one is interested, with the cited reason being that AGI comes within a decade.