[Question] Significance of the Language of Thought Hypothesis?
I’ve recently read a few books on cognition and psycholinguistics which have put a name to a concept I’ve become increasingly familiar with: the Language of Thought Hypothesis, wherein researchers theorize that ‘mentalese’, and consequently most of human thought, follows syntactic rules.
They attempt to translate sentences into language-independent symbolic propositions; for example, “Sam spray-painted the walls” becomes “(Sam spray paint) cause (paint go to (on wall))”.
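For concreteness, here’s a minimal sketch of how such a proposition might be encoded as a data structure (Python, prefix notation; the predicate names and the nested-tuple encoding are illustrative assumptions on my part, not a real LoT formalism):

```python
from typing import NamedTuple

class Prop(NamedTuple):
    """A toy 'mentalese' proposition: a predicate applied to arguments,
    where each argument is either an atom (string) or another Prop."""
    predicate: str
    args: tuple

# "Sam spray-painted the walls." ->
# (Sam spray paint) cause (paint go to (on wall))
event = Prop("CAUSE", (
    Prop("SPRAY", ("Sam", "paint")),
    Prop("GO-TO", ("paint", Prop("ON", ("wall",)))),
))

def render(p) -> str:
    """Pretty-print a proposition back into bracketed notation."""
    if not isinstance(p, Prop):
        return str(p)
    return "(" + " ".join([p.predicate] + [render(a) for a in p.args]) + ")"

print(render(event))
# (CAUSE (SPRAY Sam paint) (GO-TO paint (ON wall)))
```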
I find this endlessly fascinating, but I struggle to think of an actual application of this kind of knowledge. What could we actually do with an understanding of how thoughts are structured syntactically?
The one and only thing that comes to mind would be intelligently designing future languages to fit more snugly with cognition, which is no small thing, admittedly. But I would love to discuss the implications of this research if anything comes to mind for anyone.
[written with talon voice recognition in mixed mode, excuse typos]
Similar to artificial neural networks, humans follow approximate rules defined by the interference patterns of excitatory and inhibitory weights. Designing languages to fit with cognition is an interesting idea and I am excited about the concept; however, I don’t agree that language is the only referent we should be using to estimate how the brain does tokenization, breaking sensory information packets into parts. There is ample evidence from math that the universe follows some sort of grammar. But much of what is known about how the brain learns comes from neuroscience, from very small-scale experiments on a few neurons building up to large networks, so we know quite a bit about connectivity patterns; in the visual system, and especially the visual cortex, the types of reasoning the early layers do are fairly well understood, and the Wikipedia pages are good intros.
[I’m not terribly impressed with my own comment but it may be better than nothing and I need to do other stuff so I’m just posting it pre-downvoted]
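(To make the ‘excitatory and inhibitory weights’ point in the comment above concrete, here is a minimal rate-based neuron sketch in Python; the weights, inputs, and rectified activation are illustrative assumptions, not a claim about any specific cortical circuit:)

```python
import numpy as np

rng = np.random.default_rng(0)

inputs = rng.random(8)             # firing rates of 8 upstream neurons (made up)
weights = rng.normal(0.0, 1.0, 8)  # mixed sign: positive = excitatory, negative = inhibitory

drive = np.dot(weights, inputs)    # net excitation minus inhibition
rate = max(0.0, drive)             # rectification: a neuron can't fire at a negative rate

print(f"net drive {drive:+.3f} -> firing rate {rate:.3f}")
```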
No worries, thanks for engaging. Regarding the Language of Thought Hypothesis: any proponent of the hypothesis would readily admit that many forms/methods of logic exist in cognition. The interesting finding is that all languages seem to share a ‘universal grammar’ which is broken down further than ‘subject, verb, noun’, into ‘heads, tails, qualifiers, modifiers’ and potentially other units I’m still unfamiliar with. Looking at languages like this coincidentally (or perhaps too conveniently, depending on one’s opinion) allows you to divide all human languages into ‘head first’ or ‘tail first’ (English or Japanese, for example, respectively), despite the fact that these units could theoretically have numerous compositions that do not and have not existed as a spoken human language.
The implication is that humans are not just designed to speak, but to speak in a very particular way, with marginal room for variety.
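(A toy sketch of that head-direction split in Python: the same head and complement linearized ‘head first’ versus ‘tail first’; the mini-grammar and the English/Japanese-style orderings are illustrative assumptions, not a serious parser:)

```python
def linearize(head: str, complement: str, head_first: bool) -> str:
    """Order a head and its complement according to a single direction parameter."""
    return f"{head} {complement}" if head_first else f"{complement} {head}"

vp = ("painted", "the walls")
print(linearize(*vp, head_first=True))   # "painted the walls"  (English-like, head first)
print(linearize(*vp, head_first=False))  # "the walls painted"  (Japanese-like, tail first)
```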
I suspect that may be because that’s actually a really fundamental way for words to work, and anything that invents words, even if it isn’t human, would invent words that work more or less the same way; it’s also possible it’s an artifact of something unique about us.