You could think that there is a lot of machine learning progress to be made between here and AGI, such that even upper bounds on current progress leave decades to go.
You could think that even a lot of the right machine learning progress won’t lead to AGI at all. Perhaps it is an entirely different type of thought. Perhaps it does not qualify as thought at all. We find more and more practical tasks that AIs can do with machine learning, but one can think both ‘there are a lot of tasks machine learning will learn to do’ and ‘machine learning in anything like its current form cannot, even fully developed, do all tasks needed for AGI.’
And so on.
Most of those don’t predict much about the next two years, other than a non-binding upper bound. With these models, when machine learning does a new thing, that teaches us more about that problem’s difficulty than about how fast machine learning is advancing.
Under these models, Go and Heads Up No-Limit Hold ’Em Poker are easier problems than we expected. We should update in favor of well-defined adversarial problems with compact state expressions but large branch trees being easier to solve. That doesn’t mean we shouldn’t update our progress estimates at all, but perhaps we shouldn’t update much.
This goes with everything AI learns to do ceasing to be AI.
Thus, one can reasonably have a model where impressiveness of short-term advances does not much move our AGI timelines.
And why should we privilege those models? Why should we assign significant priors to hypotheses that permit amazing short-term AI improvements without altering long-term AI timelines? What evidence has brought those hypotheses to our attention?
What makes such a model reasonable? One can “reasonably have a model where impressiveness of short-term advances does not much move our AGI timelines”, but why should we trust such a model? Why would such a model be any good in the first place?
How probable do we think the underlying reasons behind such models are? For example, consider the “different kinds of thought” that Sarah mentions; I think Sarah completely misses the point.
The strength of a model is not what it permits, but what it rules out. A model that predicts A or !A is always accurate, but conveys zero information. A model that predicts AGI is distant but does not update towards AGI being near when impressive ML achievements occur makes me sceptical. It sounds a little too liberal for my tastes, and borders on being difficult to falsify.
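To make “zero information” concrete (my own gloss, not anything from the original post): the information a prediction yields when it comes true is its surprisal, −log₂(p), where p is the probability the model assigned to that outcome. A model that only ever predicts the tautology A or !A assigns it p = 1 and so earns −log₂(1) = 0 bits; a model that staked p = 0.25 on the outcome that actually occurred earns 2 bits when vindicated.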
A model that doesn’t update its AGI timelines nearer when impressive ML achievements occur, and doesn’t update its AGI timelines when ML predictions fail to be realised, seems like an unfalsifiable (and thus unscientific) model to me.
A model that doesn’t update its AGI timelines nearer when impressive ML achievements occur, but does update its AGI timelines farther when ML predictions fail to be realised, seems like a model that violates conservation of expected evidence.
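To spell the conservation-of-expected-evidence point out (my own formalisation, with H = “AGI is near” and E = “the impressive ML milestone is achieved”): by the law of total probability,

P(H) = P(H|E)·P(E) + P(H|¬E)·P(¬E).

If a model sets P(H|E) = P(H) (no update towards AGI on success) while also setting P(H|¬E) < P(H) (an update away from AGI on failure), then the right-hand side is strictly less than P(H) whenever P(¬E) > 0, which is a contradiction. You cannot coherently refuse to update on one outcome while still updating on its complement; your prior must equal the expectation of your posterior.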
A model that doesn’t update AGI timelines nearer in proportion to the awesomeness of the ML achievement is a model I’m sceptical of. I’m sceptical of a model that looks at an amazing ML achievement and instead updates towards the problem being easier than initially expected; that’s a fully general counterargument, and it can be applied to every ML achievement.
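As a toy illustration of why “the problem was just easier than we thought” cannot absorb the whole update, here is a small Bayesian sketch (my own construction with made-up numbers, not anything from the post). There are two binary unknowns, whether ML is broadly capable and whether this particular problem is easy, and the chance of the milestone being achieved depends on both. Conditioning on the achievement raises the posterior probability of both.

```python
# Toy Bayesian update: does an impressive ML success tell us about ML
# capability, or only that the problem was easy? Illustrative numbers only.

# Priors over two independent binary unknowns (hypothetical values).
p_capable = 0.2  # prior probability that ML is broadly capable
p_easy = 0.3     # prior probability that this particular problem is easy

# P(success | capable, easy): success is likelier when either factor holds.
likelihood = {
    (True, True): 0.95,
    (True, False): 0.70,
    (False, True): 0.50,
    (False, False): 0.05,
}

def prior(capable, easy):
    """Joint prior, treating the two unknowns as independent."""
    return ((p_capable if capable else 1 - p_capable)
            * (p_easy if easy else 1 - p_easy))

# P(success) by the law of total probability.
p_success = sum(likelihood[(c, e)] * prior(c, e)
                for c in (True, False) for e in (True, False))

# Posterior marginals after observing a success, by Bayes' rule.
post_capable = sum(likelihood[(True, e)] * prior(True, e)
                   for e in (True, False)) / p_success
post_easy = sum(likelihood[(c, True)] * prior(c, True)
                for c in (True, False)) / p_success

print(f"P(capable): {p_capable:.2f} -> {post_capable:.2f}")  # 0.20 -> ~0.51
print(f"P(easy):    {p_easy:.2f} -> {post_easy:.2f}")        # 0.30 -> ~0.58
# Both probabilities rise. Attributing the success entirely to "the problem
# was easy" is only coherent if capability makes no difference above.
```

The exact numbers don’t matter; as long as capability raises the likelihood of success at all, the success is evidence for capability, and a model that routes the entire update into “the problem was easy” is quietly assuming capability is irrelevant.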
Emphatic agreement.
P.S.: I cannot quote on mobile; the above is an improvisation.
Fixed it for you
Thank you very much.