My model is very discontinuous; I try to think of AI as AI (and avoid the term AGI).
And sure, intelligence has some G measure, and everything we have built so far is low-G[1] (humans are high-G).
Anyway, at the core I think the jump will happen when an AI system learns the meta-task / goal "search and evaluate"[2]. Once that happens[3], G would start increasing very fast (compared to before), and adding resources to such a system would just accelerate this[4].
And I don't see how that diverges from this reality, or from a reality where it's not possible to get there, until we obviously do get there.
I can’t speak to what people have built / are building in private.
Whenever people say AGI, I think of an AI that can do "search and evaluate" recursively.
And my intuition says that requires a system with much higher G than current ones, although looking at how that likely played out for us, the required G might be much lower than my intuition leads me to believe.
That is contingent on architecture: if we build a system that cannot scale easily, or at all, then this won't happen.
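To make this a bit more concrete, here is a toy sketch (in Python) of the kind of loop I mean by "search and evaluate" done recursively. Everything in it is made up for illustration, the `System` class, the `g` proxy, the propose/evaluate functions; the only point it shows is that the improved system is the one doing the next round of search, which is where the compounding comes from, and that a bigger search budget (more resources) speeds the whole thing up.

```python
import random


class System:
    """Stand-in for an AI system; `g` is a crude proxy for how general it is."""

    def __init__(self, g: float):
        self.g = g

    def propose_candidates(self, n: int) -> list:
        # Search step: the system generates variants of itself. A higher-g
        # system proposes proportionally larger improvements, which is where
        # the compounding comes from in this toy model.
        return [System(self.g + random.gauss(0, 0.1) * self.g) for _ in range(n)]


def evaluate(candidate: System) -> float:
    # Evaluate step: score a candidate; here the score is just the g proxy.
    return candidate.g


def search_and_evaluate(system: System, rounds: int, budget: int) -> System:
    """Repeatedly search for better versions of the system and keep the best."""
    for _ in range(rounds):
        candidates = system.propose_candidates(budget)
        best = max(candidates, key=evaluate)
        if evaluate(best) > evaluate(system):
            # The improved system is the one doing the next round of search.
            system = best
    return system


if __name__ == "__main__":
    # More `budget` (resources) per round means a wider search and faster growth.
    final = search_and_evaluate(System(g=1.0), rounds=50, budget=20)
    print(f"g after 50 rounds: {final.g:.2f}")
```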