This hypothesis amounts to the following conditional: if the Language of Thought Hypothesis (LoT) is true, and if natural language closely mirrors the LoT, then encoding a lossy compression of natural language also encodes a lossy compression of the language of thought, and so yields an approximation of thought itself. The argument therefore hinges on the Language of Thought Hypothesis, which remains an open question in cognitive science. Conversely, if it is empirically observed that LLMs can indeed reason despite having “only” been trained on language data (itself an area of ongoing research), that could be taken as strong evidence in favour of the LoT.