In-context learning in LLMs maps fairly well onto the concept of fluid intelligence. There are several papers now indicating that general learning algorithms emerge in LLMs to facilitate in-context learning.
I assume you’re talking about things like that?
These papers probably don’t show what they seem to show.
Even if they did, it’s not the right type of “general learning algorithm”, in my view. See here, plus a paragraph in Section 6 about how “general in the limit” doesn’t mean “actually reaches generality in finite time with finite data”.
I’ll grant that it does have a spooky vibe.