Explain the inside view of “algorithmic overhang”?
I don’t have an inside view. If I did, that would be pretty powerful capabilities information.
I’m pointing at the possibility that we already have more than sufficient resources for AGI and that we’re only separated from it by a few insights (à la transformers) and clever system architecture. I’m not predicting this is true, just that it’s plausible based on existing intelligent systems (humans).
Epistemic status: pondering aloud to coalesce my own fuzzy thoughts a bit.
I’d speculate that the missing pieces are conceptually tricky things like self-referential “strange loops”, continual learning with updateable memory, and agentic interaction with an environment. These are only vague ideas in my mind, and for some reason they feel difficult to solve, but they don’t feel like things that require massive data and training resources so much as useful connections to reality and to the system itself.
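To make the vagueness slightly more concrete, here’s a deliberately toy Python caricature of the kind of loop I have in mind: an agent acting in an environment, growing a persistent memory online, and able to read back its own trace. Everything here is hypothetical and illustrative; none of it corresponds to a real system or proposal.

```python
# Toy caricature of the "missing pieces": an agent that acts in an
# environment, updates a persistent memory, and can inspect its own
# recent steps (a crude stand-in for a "strange loop"). All names are
# hypothetical; the policy and environment are trivial placeholders.

def propose_action(observation, memory):
    # Stand-in for a fixed pretrained model: a trivial policy that
    # conditions on the current observation and memory size.
    return ("explore", observation, len(memory))

def run_agent(env_steps):
    memory = []       # updateable long-term memory, grows online
    trace = []        # record of the agent's own actions (self-reference)
    observation = 0
    for t in range(env_steps):
        action = propose_action(observation, memory)
        trace.append(action)          # the agent could, in principle, read this back
        observation = observation + 1 # toy "environment" response
        memory.append((t, action, observation))  # continual memory update
    return memory, trace

memory, trace = run_agent(3)
```

The point of the sketch is only that the loop itself is architecturally simple; what feels hard is making each placeholder (the policy, the memory update, the self-inspection) do real work.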