Now observe that human children can learn to make very reliable predictions. So they must be doing some sort of science.
Danger! You’re not looking at the whole system. Children’s knowledge doesn’t just come from their experience after birth (or even conception), but is implicitly encoded by the interplay between their DNA, the womb, and certain environmental invariants. That knowledge was accumulated through evolution.
So children are not, in their early development, using some really awesome learning algorithm (and certainly not a universally applicable one); rather, they are born with a huge “boost”, and their post-natal experiences need to fill in relatively little information, as that initial, implicit knowledge heavily constrains how sensory data should be interpreted and gives useful assumptions that help in modeling the world.
If you were to look only at what sensory data children get, you would find that it is woefully insufficient to “train” them to the level they eventually reach, no matter what epistemology they’re using. It’s not that there’s a problem with the scientific method (though there is), or that we have some powerful Bayesian algorithm to learn from childhood development, but rather, children are springboarding off of a larger body of knowledge.
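One way to make the "woefully insufficient" point concrete is a back-of-the-envelope sample-complexity calculation. A standard PAC-style bound for a finite hypothesis class says roughly (ln|H| + ln(1/delta))/epsilon labeled examples suffice; the hypothesis-space sizes below are made up purely for illustration:

```python
import math

def examples_needed(num_hypotheses, epsilon=0.1, delta=0.05):
    """Classic PAC bound for a finite hypothesis class (realizable case):
    m >= (ln|H| + ln(1/delta)) / epsilon examples suffice to find a
    hypothesis with error below epsilon, with probability 1 - delta."""
    return math.ceil((math.log(num_hypotheses) + math.log(1 / delta)) / epsilon)

# A "blank slate" entertaining an astronomically large space of world-models
blank_slate = examples_needed(2 ** 1000)
# A learner whose innate constraints prune the space to a few thousand candidates
constrained = examples_needed(4096)

print(blank_slate, constrained)
```

The point is only that the data requirement grows with ln|H|, so innate constraints that shrink the hypothesis space buy a proportional reduction in the experience needed after birth.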
You seem to be endorsing the discredited “blank slate” paradigm.
A better strategy would be to look at how evolution “learned” and “encoded” that data, and how to represent such assumptions about this environment, which is what I’m attempting to do with a model I’m working on: it will incorporate the constraints imposed by thermodynamics, life, and information theory, and see what “intelligence” means in such a model, and how to get it.
(By the way, I made essentially this same point way back when. I think the same point holds here.)
Re: “If you were to look only at what sensory data children get, you would find that it is woefully insufficient to “train” them to the level they eventually reach, no matter what epistemology they’re using.”
That hasn’t been demonstrated—AFAIK.
Children are not blank slates—but if they were highly intelligent agents with negligible a priori knowledge, they might well wind up eventually being much smarter than adult humans. In fact that would be strongly expected—for a sufficiently smart agent.
If large amounts of our knowledge base were encoded through evolution, we would see people with weird, specific cognitive deficits—say, the inability to use nouns—as a result of genetic mutations. Or, more obviously, we would observe people who have functioning eyes but nevertheless can’t see because they failed to learn how. Now, we do see people with strange specific deficits, but only as a result of stroke or other brain damage. The genetic deficits we do see are all much more general.
If you were to look only at what sensory data children get, you would find that it is woefully insufficient to “train” them to the level they eventually reach, no matter what epistemology they’re using.
How do you know? Are you making some sort of Chomskian poverty of the stimulus argument?
If large amounts of our knowledge base were encoded through evolution, we would see people with weird, specific cognitive deficits—say, the inability to use nouns—as a result of genetic mutations.
That’s not necessarily the case. You are assuming a much narrower encoding system than necessary. One doesn’t need a direct encoding of specific genes going to nouns or the like. Remember, evolution is messy and doesn’t encode data in the direct fashion that a human would. Moreover, some problems we see are in fact pretty close to this. For example, many autistic children have serious problems handling how pronouns work (such as some using “you” to refer to themselves and “I” to refer to anyone else). Similarly, there’s a clear genetic distinction in language processing between humans and other primates: many of the “sentences” constructed by apes which have been taught sign language lack verbs entirely, and when verbs do appear they are almost always imperatives.
If large amounts of our knowledge base was encoded through evolution, we would see people with weird, specific cognitive deficits—say, the inability to use nouns—as a result of genetic mutations.
Well, people do have weird, specific cognitive deficits, but not of that kind. Remember, grammatical structures themselves are the result of human-specific tendencies to form tools into a shape our native hardware is already capable of handling (pardon the metaphor overload). Grammatical structures are not a list of arbitrary rules imposed by aliens, but conventions that already make sense to human brains.
In any case, keep in mind that I said the information accumulated by evolution is stored in the interplay of the genes and the womb, and invariant features of the environment. The way that e.g. noun use comes about is a result of a combination of these; like JoshuaZ notes, there needn’t be a gene-to-noun mapping under this theory.
Or, more obviously, we would observe people who have functioning eyes but nevertheless can’t see because they failed to learn how.
We do see this! It’s possible to be blind despite having functioning eyes simply because the brain didn’t receive sensory information from the eyes early on. It’s called amblyopia:
The problem is caused by no or poor transmission of visual stimulation through the optic nerve to the brain for a sustained period during early childhood, resulting in poor or dim vision.
In other words, an expected environmental invariant—light being regularly fed through the eye—wasn’t present, preventing the manifestation of the accumulated knowledge of evolution.
How do you know? Are you making some sort of Chomskian poverty of the stimulus argument?
I’m making a poverty-of-the-stimulus (POTS) argument, but based more on reading Pinker. There are patterns common to all languages, and there are kinds of grammatical errors children never make. This, along with similar phenomena in other areas, shows that children aren’t blank slates that accept whatever they get; they are born with a kind of pre-formatting, a set of expectations about their stimuli that constrains the solution set to the point where they don’t need the massive data that would be necessary to train a blank slate.
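This constraining effect can be shown with a toy Bayesian simulation. Everything below (the number of candidate grammars, the consistency probabilities, the size of the innate bias) is invented purely to exhibit the mechanism, not taken from any linguistic data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: 1000 candidate "grammars"; only grammar 0 matches everything
# the child actually hears. Each heard sentence is, by chance, also
# consistent with roughly half of the wrong grammars.
num_grammars = 1000
consistent = rng.random((num_grammars, 40)) < 0.5
consistent[0, :] = True  # the true grammar fits every observation

def sentences_to_settle(prior):
    """Observations needed before the posterior puts >0.99 mass on one grammar."""
    posterior = prior.astype(float).copy()
    for t in range(consistent.shape[1]):
        posterior = posterior * consistent[:, t]  # rule out inconsistent grammars
        posterior = posterior / posterior.sum()
        if posterior.max() > 0.99:
            return t + 1
    return None

flat = np.full(num_grammars, 1.0 / num_grammars)  # blank-slate prior
informed = flat.copy()
informed[:10] *= 1000.0  # innate bias toward ten candidates, true one included
informed = informed / informed.sum()

flat_n = sentences_to_settle(flat)
informed_n = sentences_to_settle(informed)
print(flat_n, informed_n)
```

The biased learner never converges slower than the flat one on the same data, because it only has to eliminate a handful of strongly weighted rivals rather than the entire space.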
We seem to be arguing over a minor point. All knowledge comes from a combination of evolution and learning. We disagree about how much comes from one or the other.
I’ll say one thing about the POTS argument, though. The basic idea is that people compare the amount of linguistic data absorbed by the child to his linguistic competence, find that the latter cannot be explained by the former, and conclude that there must be some sort of built-in language module. But they might be oversimplifying the data vs. competence comparison. What really happens is that the child absorbs a huge amount of visual and motor data, as well as a relatively smaller amount of linguistic data, and comes out with sophisticated competence in all three domains. So it may very well be that the linguistic competence is built on top of the visual and motor competences: the learning algorithm builds modules to understand visual reality, justified by the massive amount of visual data that is available, and then is able to reuse these modules to produce sophisticated linguistic competence in spite of the impoverished linguistic data source. Language, in this view, is a thin wrapper over the representations built by the learning algorithm for other purposes. This argument is supported by the existence of mirror neurons.
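There is also a simple statistical reason reuse would help: the error in class statistics estimated from a handful of labeled examples grows with the dimensionality of the representation, so classifying in a compact representation already learned from plentiful visual and motor data needs fewer examples. A back-of-the-envelope sketch, with all dimensions and noise levels purely illustrative:

```python
def centroid_error(d, n, sigma2=1.0):
    """Expected squared error, summed over coordinates, of a class-centroid
    estimate from n i.i.d. samples in d dimensions with per-coordinate
    noise variance sigma2: E||mu_hat - mu||^2 = d * sigma2 / n."""
    return d * sigma2 / n

# A few labeled "sentences", classified directly in a raw 20-D input space
raw = centroid_error(d=20, n=5)
# The same 5 examples, projected into a 2-D representation already learned
# from abundant visual and motor data
reused = centroid_error(d=2, n=5)

print(raw, reused)
```

Same labeled data, tenfold less estimation noise, simply because the hard representational work was paid for by a different, data-rich modality.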
We seem to be arguing over a minor point. All knowledge comes from a combination of evolution and learning. We disagree about how much comes from one or the other.
Well, earlier, the way you had stated your position, it looked like you were saying that all knowledge acquisition (or nearly all) comes from sense data, and children use some method, superior to scientific experimentation, to maximally exploit that data. If you grant a role for evolution to be “passing correct answers” to human minds, then yes, our positions are much closer than I had thought.
But still, it’s not enough to say “evolution did it”. You would have to say how the process of evolution—which works only via genes—gains that knowledge and converts it into a belief on the part of the organism. Your research program, as you’ve described it, mentions nothing about this.
I’ll say one thing about the POTS argument, though. … they might be oversimplifying the data vs. competence comparison. What really happens is that the child absorbs a huge amount of visual and motor data, as well as a relatively smaller amount of linguistic data, and comes out with sophisticated competence in all three domains. So it may very well be that the linguistic competence is built on top of the visual and motor competences: the learning algorithm builds modules to understand visual reality, justified by the massive amount of visual data that is available, and then is able to reuse these modules to produce sophisticated linguistic competence in spite of the impoverished linguistic data source. Language, in this view, is a thin wrapper over the representations built by the learning algorithm for other purposes.
The problem of vision (inference of a 3-D scene from a 2-D image) is ill-posed and has an even more intractable search space. It doesn’t seem like a child’s brain (given the problem of local optima) even has enough time to reach the hypothesis that a 3-D scene is generating the sense data. But I’d be happy to be proven wrong by seeing an algorithm that would identify the right hypothesis without “cheating” (i.e. being told where to look, which is what I claim evolution does).
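The scale ambiguity behind this ill-posedness is easy to exhibit with a pinhole-projection sketch (the model and the numbers are illustrative):

```python
import numpy as np

# Pinhole projection: (X, Y, Z) maps to (f*X/Z, f*Y/Z). Uniformly rescaling
# the scene by any k > 0 yields the identical image, so the 2-D data alone
# cannot determine the 3-D scene: the inverse problem is ill-posed.
f = 1.0

def project(points):
    pts = np.asarray(points, dtype=float)
    return f * pts[:, :2] / pts[:, 2:3]

scene = np.array([[0.5, 0.2, 2.0], [-0.3, 0.1, 2.5]])
for k in (1.0, 3.0, 100.0):
    assert np.allclose(project(k * scene), project(scene))

# A built-in assumption, e.g. "these two points are 1 unit apart",
# pins down the scale uniquely:
k_prior = 1.0 / np.linalg.norm(scene[0] - scene[1])
```

Every rescaled scene produces the identical image, so nothing in the data selects among them; an assumption about typical object sizes, the kind of thing evolution could plausibly supply, is what breaks the tie.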
This argument is supported by the existence of mirror neurons.
How so? Mirror neurons still have to know what salient aspect of the sense data they’re supposed to be mirroring. It’s not like there’s a one-to-one mapping between “monkey see” and “monkey do”.