That’s a great analogy. To me, the strength of the OP is pinpointing that LLMs already exhibit the kind of general ability we would expect from AGI, and the weakness is forgetting that LLMs do not exhibit some specific abilities most thought were easy, such as the agency that even clownfish exhibit.
In a way, this sounds like the universe once again telling us we should rethink what intelligence is. Chess is hard and doing the dishes is easy? Nope. Language is hard and agency is central? Nope.
I’m not even sure where I would try to start, but I do wonder whether John Wentworth’s concept of Natural Latents might offer a useful framework for better grounding this type of discussion.
My understanding of that framework is probably still too raw to say anything sane (a natural latent is a convolution basis useful for analyzing natural inputs, and it’s powerful because function composition is powerful), but it could fit nicely with “Agency is what neurons in the biological motion area detect.”