To me, it really looks like brains and LLMs are both using embedding spaces to represent information. Embedding spaces ground symbols by automatically relating all concepts they contain, including the grammar for manipulating these concepts.
There are some papers suggesting this could indeed be the case, at least for language processing, e.g. "Shared computational principles for language processing in humans and deep language models" and "Brain embeddings with shared geometry to artificial contextual embeddings, as a code for representing language in the human brain".
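To make the "automatically relating all concepts" point concrete, here is a minimal sketch of how proximity in an embedding space relates concepts without any explicit definitions. The vectors below are toy values invented purely for illustration; in a real model they would be high-dimensional and learned from data.

```python
import numpy as np

# Toy 4-dimensional "embeddings" -- invented for illustration only.
embeddings = {
    "dog":    np.array([0.9, 0.8, 0.1, 0.0]),
    "cat":    np.array([0.8, 0.9, 0.1, 0.1]),
    "animal": np.array([0.7, 0.7, 0.2, 0.1]),
    "car":    np.array([0.1, 0.0, 0.9, 0.8]),
}

def cosine(a, b):
    """Cosine similarity: how close two concepts sit in the space."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# "dog" lands much closer to "cat" and "animal" than to "car":
# the geometry of the space itself encodes the relations.
for word in ("cat", "animal", "car"):
    print(f"dog vs {word}: {cosine(embeddings['dog'], embeddings[word]):.2f}")
```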