“agi”, to different people, means any of:

- ai capable of a wide range of tasks
- ai capable of meta-learning
- consistently superhuman ai
- superintelligence
the word almost always creates confusion and degrades discourse

https://twitter.com/parafactual/status/1640537814608793600
Agency is what defines the difference, not generality. Current LLMs are general, but neither superhuman nor starkly superintelligent. Given a task beyond their reach, LLMs work out that they can't do it without more capabilities, and they tell you so. You can give them those capabilities, but since they aren't hyperagentic, they aren't desperate to acquire them. A reinforcement learner, being highly agentic, would be.
If you're interested in the formalism behind this, I'd suggest at least digesting the abstract and intro of https://arxiv.org/abs/2208.08345 - it's my current favorite formalization of what agency is. There's also great, slightly less formal discussion of it on LessWrong.
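To make the agent/non-agent distinction concrete, here's a toy sketch in the rough spirit of that paper's criterion: an agentic system re-derives its policy when you intervene on the environment's mechanisms, while a non-agentic one keeps emitting the same behavior. This is my own illustration, not code from the paper; the actions and reward values are made up.

```python
# Toy sketch (not from the paper): an agent's policy adapts under an
# intervention on the environment's reward mechanism; a non-agent's
# behavior stays fixed.

ACTIONS = ["left", "right"]

def best_action(reward_fn):
    """An agent: re-optimizes against whatever reward mechanism it faces."""
    return max(ACTIONS, key=reward_fn)

# Original mechanism: "right" pays off.
reward = {"left": 0.0, "right": 1.0}
# Intervention on the mechanism: now "left" pays off.
intervened_reward = {"left": 1.0, "right": 0.0}

# The agent's policy adapts to the intervention...
print(best_action(reward.get))             # right
print(best_action(intervened_reward.get))  # left

# ...while a hard-coded policy (a non-agent) does not.
fixed_policy = "right"
print(fixed_policy)  # right, before and after the intervention
```

The point, as I read it, is that agency is detected behaviorally, by how a system would respond to changes in its environment, not by what architecture it has - which is what makes the "desperate for more capabilities" distinction above more than vibes.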