Why do you think that LLMs will hit a wall in the future?
I more-or-less endorse the model described in "larger language models may disappoint you [or, an eternally unfinished draft]", and moreover I think language is an inherently lossy instrument, such that even the minimally lossy model won't have perfectly learned the causal processes behind its production.