Well, I’m a linguist, and yes, we do have that. Actually, it works a lot like the philosophy of religion thing. Researchers within the subdiscipline that deals with X believe X is really important. But outside that subdiscipline/clique are a lot of people who have concluded that X is not important and/or doesn’t really exist. Naturally, the people who believe in X publish a lot more about X than the people who think X is a stinking pile of dwagon crap. This can lead to outsiders getting the impression that the field has a consensus position about X=awesome.
The best example I know is the debate about linguistic universals. Chomskyan universalists think that all human languages are fundamentally alike, that there is a genetically determined “universal grammar” which shapes their structure. The Chomskyans are a very strong and impressive clique and a lot of non-linguists get the impression that what they say is what every serious linguist believes. But this is not so. A lot of us think the “universal grammar” stuff is vacuous nonsense which we can’t be bothered with.
Starting a big fight with the Chomskyans has not been a good career move for the past half-century but this may be changing. In 2009, a couple of linguists started a shitstorm with the article “The Myth of Language Universals: Language diversity and its importance for cognitive science”. The abstract starts like this:
Talk of linguistic universals has given cognitive scientists the impression that languages are all built to a common pattern. In fact, there are vanishingly few universals of language in the direct sense that all languages exhibit them. Instead, diversity can be found at almost every level of linguistic organization. This fundamentally changes the object of enquiry from a cognitive science perspective.
Suddenly a lot of people are willing to die on this hill, so you can find a very ample supply of recent articles on both sides of this.
Question: my understanding is that the fact that humans manage to learn language so readily in early childhood, when compared with how bad we are at objectively simpler tasks like arithmetic, does suggest we have some kind of innate, specialized “language module”, even if the Chomskyan view gets some important details wrong. Would that be generally accepted among linguists, or is it contentious? And in the latter case, why would it be contentious?
(I ask because this understanding of language is one of the main building blocks in what I understand about human intelligence.)
Great questions. I would say that a majority of linguists probably accept the fast-childhood-acquisition argument for the innateness of language but a lot depends on how the question is phrased. I would agree that language is innate to humans in the weak and banal sense that humans in any sort of natural environment will in short order develop a complex system of communication. But I don’t think it follows that we have a specialized language module—we may be using some more generic part of our cognitive capacity. I’m not sure if we really have the data to settle this yet.
The whole thing is tricky. How fast is fast? If humans definitely had no language module and had to learn language using a more generic cognitive ability, how fast would we expect them to do it? Five years? Ten years? Fifty years? Never? I don’t know of any convincing argument ruling out that the answer would be “pretty much the speed at which they are actually observed to learn it”.
And what qualifies as language, anyway? Deaf children can learn complex sign languages. Is that just as innate as spoken language or are they using a more generic cognitive ability? My one-year-old is a whiz on the iPad. Is he using the language module or a more generic cognitive ability? Is it a language module or a symbolic processing module? Or an abstract-thinking module?
I’m personally very skeptical that the brain has any sort of neatly defined language module—is that really Azathoth’s style? There is a lot more to say about this, maybe there’d be enough interest for a top-level post.
I would look forward to reading that post.
But I don’t think it follows that we have a specialized language module—we may be using some more generic part of our cognitive capacity. I’m not sure if we really have the data to settle this yet.
The whole thing is tricky. How fast is fast? If humans definitely had no language module and had to learn language using a more generic cognitive ability, how fast would we expect them to do it? Five years? Ten years? Fifty years? Never? I don’t know of any convincing argument ruling out that the answer would be “pretty much the speed at which they are actually observed to learn it”.
Honestly, I suspect the answer is “never”… unless the “more general capacity” is only somewhat more general. Languages seem to be among the most complicated things most people ever learn, with the main competition for the title of “the most complicated” coming from things like “how to interact socially with other humans.”
And what qualifies as language, anyway? Deaf children can learn complex sign languages. Is that just as innate as spoken language or are they using a more generic cognitive ability?
What I’ve read on this is that the way deaf children learn sign language is extremely similar to how most children learn spoken language.
There is a lot more to say about this, maybe there’d be enough interest for a top-level post.
You are not alone—that is the orthodox Chomskyan position. Chomsky has argued that grammar is unlearnable given the limited data available to children, and therefore there must be an innate linguistic capacity. This is the celebrated “poverty of the stimulus” argument. Like most of Chomsky’s ideas, it is armchair theorizing with little empirical support.
I would totally support a top-level post.
Given the number of replies and upvotes, that seems warranted. I’ll try to find the time.
Reading the article:
Certainly, humans are endowed with some sort of predisposition toward language learning. The substantive issue is whether a full description of that predisposition incorporates anything that entails specific contingent facts about natural languages.
So this makes it sound like the only thing the authors are rejecting is the idea of a system with certain rigid assumptions built in—as opposed to, say, a more or less Bayesian system that has a prior which favors certain assumptions without making those assumptions indefeasible. Am I reading that right?
Yes, you’re reading that right. They address this even more explicitly at the beginning of section 2.2 on page 17, and especially in footnotes 5 and 6.
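The distinction being drawn here—a defeasible prior versus a hard-wired assumption—can be made concrete with a toy Bayesian learner. This is my own illustrative sketch, not anything from the paper: the word-order hypotheses, the numbers, and the `bayes_update` helper are all made up for the example.

```python
# Toy illustration: a learner whose innate bias is a *prior* (defeasible)
# rather than a hard-coded assumption. Enough contrary data overturns it.

def bayes_update(prior, likelihoods, data):
    """Brute-force posterior over hypotheses after observing data."""
    post = dict(prior)
    for obs in data:
        for h in post:
            post[h] *= likelihoods[h][obs]
        total = sum(post.values())
        post = {h: p / total for h, p in post.items()}
    return post

# Two made-up hypotheses about a language's dominant word order.
prior = {"SVO": 0.9, "SOV": 0.1}  # strong innate bias favoring SVO
likelihoods = {
    "SVO": {"svo_sentence": 0.8, "sov_sentence": 0.2},
    "SOV": {"svo_sentence": 0.2, "sov_sentence": 0.8},
}

# Despite the 9:1 prior, ten SOV-patterned observations flip the posterior.
data = ["sov_sentence"] * 10
posterior = bayes_update(prior, likelihoods, data)
print(posterior)  # SOV now dominates
```

A rigid system, by contrast, would simply have `prior["SOV"] = 0`—no amount of data could ever move it, which is roughly what the authors are arguing against.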
As for the statement that humans have “some sort of predisposition toward language learning”, that is weak enough for even me to agree with it. We are social animals, with innate desires to communicate and the intelligence to do so in complex ways.
But I don’t think it follows that we have a specialized language module—we may be using some more generic part of our cognitive capacity. I’m not sure if we really have the data to settle this yet.
There was an autistic savant, Chris, whose skill was in learning languages, and who was unable to learn a fake language put together by researchers that used easy but non-attested types of rules (e.g. reversing the whole sentence to form a question). What do you make of it?
I’ve always thought it was fairly weak evidence in the sense that autistic people often have all kinds of other things potentially going on with them, that it’s a sample size of 1, and so on.
As an ignorant layman, I’d expect a large part of our so-called cognitive capacity to be a poorly hacked-and-generalized language module.
Children hear adults speaking all the time, but they don’t usually hear adults doing maths very often.
I’ve wondered about that. Someone should try writing an iPad app that a toddler can play with to have their brain bombarded by math, and see if that leads to math coming as naturally to them as language. I doubt it would work but it might be worth trying.
It seems that simply bombarding the brain isn’t sufficient, even for language, and that social interaction is required (see this study), so that playing math games with the child would be a better idea.
How does the brain decide whether it thinks of something as a social interaction? I would assume that computer/video games with significant social components hack into that, so hacking into it to teach math should be doable.
I believe the way it works for language is that one can learn it from television, but not radio.
Nope. It needs to be something with feedback.
That makes intuitive sense, at least in hindsight, since TV provides ample non-linguistic information that you can learn to associate with the linguistic information.
I think this book may be of some interest to you, Chris. It was the textbook recommended for a CogSci class I did, dealing with how cognitive systems develop in response to their environment.
Also a follow-up question: I remember reading that children who do not learn any language by the age of 7-9 forever lose the capability to acquire a language (examples were children brought up by animals and maybe a couple of cases of child abuse). Is that actually true?
This is a difficult issue. There are very few documented instances of feral children and it is hard to isolate their language deficiency from their other problems.
What we do have a lot of documentation on is children with various types of intellectual disabilities. My four-year-old daughter is autistic and has an IQ of 50. Her language is around the level of a 24-month-old (though possibly with a bigger vocabulary and worse grammar). Does she have a deficient language module? That doesn’t really seem like a great explanation for anything. Her mental deficiencies are much broader than that. If there were a lot of children with deficient language but otherwise normal development, that would lend some support to a language-module model. But this isn’t really the case. If your language is borked, that usually means that other things are borked too.
Another thing about my daughter: She’s made me realize how smart humans are. A retarded 4-year-old is still really, really smart compared to other species. My daughter certainly has far more sophisticated language than this guy did. I bet she could beat a chimp in other cognitive tasks as well.
Mostly: http://en.wikipedia.org/wiki/Critical_period_hypothesis
Try looking into Joshua Tenenbaum’s cognitive-science research. As I recall, he’s a big Bayesian (so LW will love him), and he published a paper about probabilistic learning of causality models in humans. If I had to bet, I would say that evolution came up with a learning system for us that can quickly and dirtily learn many different possible kinds of causality, since the real thing works too quickly for evolution to hardcode a model of it into our brains. Also, the real thing involves assumptions like The Universe Is Lawful that aren’t even evolutionarily useful to non-civilized pre-human apes—it doesn’t look lawful to them!
We could then have evolved language out of our ability to learn models of causality, as a way of communicating statements in our learned internal logics. This would certainly explain the way that verbal thinking contains lots more ambiguity, incoherence and plain error than formalized (mathematical) thinking.
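For a concrete (and heavily simplified) picture of what “probabilistic learning of causality models” can mean, here is a toy Bayes-factor comparison between “C influences E” and “C and E are independent”. This is my own sketch, not Tenenbaum’s actual model; the counts and the uniform-prior marginal likelihood are assumptions chosen purely for illustration.

```python
# Toy sketch: deciding whether a candidate cause C influences an effect E
# by comparing marginal likelihoods of two models of the observed data.

from math import comb

def seq_marginal_likelihood(k, n):
    """Marginal likelihood of one particular binary sequence with k successes
    in n trials, under a uniform Beta(1,1) prior on the success rate."""
    return 1.0 / ((n + 1) * comb(n, k))

# Made-up observations: E occurred 9/10 times when C was present,
# and only 1/10 times when C was absent.
k_c, n_c = 9, 10    # C present
k_nc, n_nc = 1, 10  # C absent

# M1: C influences E (separate rates for the two conditions).
m1 = seq_marginal_likelihood(k_c, n_c) * seq_marginal_likelihood(k_nc, n_nc)
# M0: independence (one pooled rate for all trials).
m0 = seq_marginal_likelihood(k_c + k_nc, n_c + n_nc)

print(m1 / m0)  # Bayes factor well above 100: strong support for a causal link
```

The point of the toy is only that a learner can get strong graded evidence about causal structure from a handful of observations, without any hardcoded model of the specific domain.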
I am unclear on what observations I would differentially expect, here.
That is, if I observe that languages vary along dimension X, presumably a Chomskyan says “the universal grammar includes a parametrizable setting for X with the following range of allowed values” and an anti-Chomskyan simply says “languages can vary with respect to X with the following range of allowed values.” The anti-Chomskyan wins on Occamian grounds (in this example at least) but is this really a hill worth dying on?
The question is whether the variety in human languages is constrained by our biology or by general structural issues which any intelligence which developed a communication system would come up against. This should have implications for cognitive science and maybe AI design.
Note that the anti-Chomskyans are not biology-denying blank-slaters. Geoffrey Sampson, who has written a good book about this, is a racist reactionary.
Ah! Yes, OK, that makes sense. Thanks for clarifying.
I don’t have a horse in this race, but I studied linguistics as an undergrad in the 80s so am probably an unexamined Chomskyist by default. That said, I certainly agree that if such general structural constraints exist (which is certainly plausible) then we ought to identify and study them, not just assume them away.
Is there a language that doesn’t have any kind of discrete words and concepts? ’cause I’m pretty sure there are possible intelligences that could construct a communication system that uses only approximate quantitative representations (configuration spaces or replaying full sensory) instead of symbols.
This is probably why, in my experience, innateness issues of any kind also don’t play a role in the everyday practice of most linguists.
The people who study the issue of natural languages being somehow interestingly constrained by biology are, incidentally, not normal linguists, but rather a mixture of computer scientists, mathematical linguists, and psychologists, who look at the formal properties of natural language grammars and their learnability properties. And if there are such constraints, there is of course the further question of whether we’re dealing with something that is specific to language, or a general cognitive principle.
Being a much more ordinary linguist, I don’t even know what the state of that field is. So basically, I don’t really get what all the fuss is about.
A more significant divide among linguists seems to me to be between the people who do formally well-defined stuff and those who don’t. Ironically, a lot of Chomskyans fall into the latter category.
Also, there’s much more impressive developmental evidence for certain kinds of things being innate than language acquisition.