I am writing a book about a new approach to AI. The book is a roadmap; after I’m finished, I will follow it. That will take many years.
I have near-zero belief that AI can succeed without a major scientific revolution.
I’m interested in what sort of scientific revolution you think is needed and why.
Well… you’ll have to read the book :-)
Here’s a hint. Define a scientific method as any process by which reliable predictions can be obtained. Now observe that human children can learn to make very reliable predictions. So they must be doing some sort of science. But they don’t make controlled experiments. So our current understanding of the scientific method must be incomplete: there is some way of obtaining reliable theories about the world other than the standard theorize/predict/test loop.
Now observe that human children can learn to make very reliable predictions. So they must be doing some sort of science.

Danger! You’re not looking at the whole system. Children’s knowledge doesn’t just come from their experience after birth (or even conception), but is implicitly encoded by the interplay between their DNA, the womb, and certain environmental invariants. That knowledge was accumulated through evolution.
So children are not, in their early development, using some really awesome learning algorithm (and certainly not a universally applicable one); rather, they are born with a huge “boost”, and their post-natal experiences need to fill in relatively little information, as that initial, implicit knowledge heavily constrains how sensory data should be interpreted and gives useful assumptions that help in modeling the world.
If you were to look only at what sensory data children get, you would find that it is woefully insufficient to “train” them to the level they eventually reach, no matter what epistemology they’re using. It’s not that there’s a problem with the scientific method (though there is), or that we have some powerful Bayesian algorithm to learn from childhood development, but rather, children are springboarding off of a larger body of knowledge.
You seem to be endorsing the discredited “blank slate” paradigm.
A better strategy would be to look at how evolution “learned” and “encoded” that data, and at how to represent such assumptions about the environment. That is what I’m attempting to do with a model I’m working on: it will incorporate the constraints imposed by thermodynamics, life, and information theory, so that I can see what “intelligence” means in such a model and how to get it.
(By the way, I made essentially this same point way back when. I think the same point holds here.)
Re: “If you were to look only at what sensory data children get, you would find that it is woefully insufficient to “train” them to the level they eventually reach, no matter what epistemology they’re using.”
That hasn’t been demonstrated—AFAIK.
Children are not blank slates—but if they were highly intelligent agents with negligible a priori knowledge, they might well wind up eventually being much smarter than adult humans. In fact, that would be strongly expected—for a sufficiently smart agent.
If large amounts of our knowledge base were encoded through evolution, we would see people with weird, specific cognitive deficits—say, the inability to use nouns—as a result of genetic mutations. Or, more obviously, we would observe people who have functioning eyes but nevertheless can’t see because they failed to learn how. Now, we do see people with strange specific deficits, but only as a result of stroke or other brain damage. The genetic deficits we do see are all much more general.
If you were to look only at what sensory data children get, you would find that it is woefully insufficient to “train” them to the level they eventually reach, no matter what epistemology they’re using.

How do you know? Are you making some sort of Chomskyan poverty-of-the-stimulus argument?
If large amounts of our knowledge base were encoded through evolution, we would see people with weird, specific cognitive deficits—say, the inability to use nouns—as a result of genetic mutations.

That’s not necessarily the case. You are assuming a much narrower encoding system than necessary. One doesn’t need specific genes mapping directly to nouns or the like. Remember, evolution is messy and doesn’t encode data in the direct fashion that a human would. Moreover, some problems we do see are in fact pretty close to this. For example, many autistic children have serious problems handling how pronouns work (such as using “you” to refer to themselves and “I” to refer to anyone else). Similarly, there’s a clear genetic distinction in language processing between humans and other primates: many of the “sentences” constructed by apes that have been taught sign language completely lack verbs, and those that don’t almost never contain any verb other than an imperative.
If large amounts of our knowledge base were encoded through evolution, we would see people with weird, specific cognitive deficits—say, the inability to use nouns—as a result of genetic mutations.

Well, people do have weird, specific cognitive deficits, but not of that kind. Remember, grammatical structures are themselves the result of human-specific tendencies to shape tools into forms our native hardware can already handle (pardon the metaphor overload). Grammatical structures are not a list of arbitrary rules imposed by aliens, but conventions that already make sense to human brains.
In any case, keep in mind that I said the information accumulated by evolution is stored in the interplay of the genes, the womb, and invariant features of the environment. The way that e.g. noun use comes about results from a combination of these; as JoshuaZ notes, there needn’t be a gene-to-noun mapping under this theory.
Or, more obviously, we would observe people who have functioning eyes but nevertheless can’t see because they failed to learn how.

We do see this! It’s possible to be blind despite having functioning eyes, simply because the brain didn’t receive sensory information from the eyes early on. It’s called amblyopia:

The problem is caused by either no transmission or poor transmission of the visual stimulation through the optic nerve to the brain for a sustained period of dysfunction or during early childhood thus resulting in poor or dim vision.
In other words, an expected environmental invariant—light being regularly fed through the eye—wasn’t present, preventing the manifestation of the accumulated knowledge of evolution.
How do you know? Are you making some sort of Chomskyan poverty-of-the-stimulus argument?

I’m making a POTS argument, but based more on reading Pinker. There are patterns common to all languages, and there are kinds of grammatical errors children never make. This, along with similar phenomena in other areas, shows that children aren’t blank slates that accept whatever they get; they are born with a kind of pre-formatting that builds in expectations about their stimuli, constraining the solution set to the point where they don’t need the massive data that would be necessary to train a blank slate.
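To make the “constrained solution set” point concrete, here is a minimal sketch in Python under heavy simplifying assumptions: the innate endowment is modeled as a small set of K candidate “grammars” (just random sentence-sets, nothing like real syntax), and learning is version-space elimination on positive examples only. All names and numbers (UTTERANCES, K, the set sizes) are invented for illustration.

```python
import random

# Toy poverty-of-the-stimulus demo: identify which "grammar" generated the
# input. The innate constraint is that only K candidate grammars are
# entertained at all; with an unconstrained hypothesis space, no realistic
# amount of data would single one out.
random.seed(0)

UTTERANCES = range(10_000)     # stand-in for the space of possible sentences
K = 1_000                      # size of the innate candidate set (assumption)
grammars = [frozenset(random.sample(UTTERANCES, 500)) for _ in range(K)]
true_grammar = grammars[0]
true_sentences = sorted(true_grammar)

# Version-space elimination on positive evidence only (children rarely get
# explicit negative evidence, which is part of the POTS puzzle).
consistent = list(grammars)
heard = 0
while len(consistent) > 1:
    s = random.choice(true_sentences)                 # hear one sentence
    heard += 1
    consistent = [g for g in consistent if s in g]    # drop grammars that forbid it

print(f"singled out 1 of {K} candidate grammars after {heard} sentences")
```

Note that positive-only evidence can never rule out a grammar that is a strict superset of the true one—the “subset problem” from the acquisition literature—which is one more reason the candidate set itself has to carry most of the weight.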
We seem to be arguing over a minor point. All knowledge comes from a combination of evolution and learning. We disagree about how much comes from one or the other.
I’ll say one thing about the POTS argument, though. The basic idea is that people compare the amount of linguistic data absorbed by the child to his linguistic competence, find that the latter cannot be explained by the former, and conclude that there must be some sort of built-in language module. But they might be oversimplifying the data vs. competence comparison. What really happens is that the child absorbs a huge amount of visual and motor data, as well as a relatively smaller amount of linguistic data, and comes out with sophisticated competence in all three domains. So it may very well be that the linguistic competence is built on top of the visual and motor competences: the learning algorithm builds modules to understand visual reality, justified by the massive amount of visual data that is available, and then is able to reuse these modules to produce sophisticated linguistic competence in spite of the impoverished linguistic data source. Language, in this view, is a thin wrapper over the representations built by the learning algorithm for other purposes. This argument is supported by the existence of mirror neurons.
We seem to be arguing over a minor point. All knowledge comes from a combination of evolution and learning. We disagree about how much comes from one or the other.

Well, earlier, the way you had stated your position, it looked like you were saying that all (or nearly all) knowledge acquisition comes from sense data, and that children use some method, superior to scientific experimentation, to maximally exploit that data. If you grant a role for evolution in “passing correct answers” to human minds, then yes, our positions are much closer than I had thought.
But still, it’s not enough to say “evolution did it”. You would have to say how the process of evolution—which works only via genes—gains that knowledge and converts it into a belief on the part of the organism. Your research program, as you’ve described it, mentions nothing about this.
So it may very well be that the linguistic competence is built on top of the visual and motor competences: the learning algorithm builds modules to understand visual reality, justified by the massive amount of visual data that is available, and then is able to reuse these modules to produce sophisticated linguistic competence in spite of the impoverished linguistic data source.

The problem of vision (inferring a 3-D scene from a 2-D image) is ill-posed and has an even more intractable search space. It doesn’t seem like a child’s brain (given the problem of local optima) even has enough time to reach the hypothesis that a 3-D scene is generating the sense data. But I’d be happy to be proven wrong by seeing an algorithm that identifies the right hypothesis without “cheating” (i.e., being told where to look, which is what I claim evolution does).
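A minimal sketch of that ill-posedness, and of how a built-in prior resolves it: a single angular size on the retina is compatible with every physical-size/distance pair along the line of sight, so the data alone cannot pick a scene. Everything below (the observed angle, the “this is roughly person-sized” prior, the noise level) is invented for illustration.

```python
import numpy as np

# Angular size alone underdetermines the scene: theta ~ S / d holds for every
# (physical size S, distance d) pair on the ray, so the 2-D image is
# consistent with infinitely many 3-D scenes.
theta = 0.02  # observed angular size in radians (hypothetical reading)

# A built-in prior over object sizes ("this is a person, about 1.7 m tall")
# collapses the ambiguity into a sharp posterior over distance.
S_grid = np.linspace(0.1, 10.0, 2000)                  # candidate sizes (m)
prior_S = np.exp(-0.5 * ((S_grid - 1.7) / 0.1) ** 2)   # innate/learned size prior
prior_S /= prior_S.sum()

def posterior_distance(theta, d_grid):
    """Posterior over distance, marginalizing over physical size."""
    post = []
    for d in d_grid:
        pred = S_grid / d                              # predicted angle for each S
        like = np.exp(-0.5 * ((pred - theta) / 0.001) ** 2)  # noisy measurement
        post.append((like * prior_S).sum())            # integrate out S
    post = np.array(post)
    return post / post.sum()

d_grid = np.linspace(1.0, 200.0, 400)
post_d = posterior_distance(theta, d_grid)
print("MAP distance:", d_grid[np.argmax(post_d)])      # ~ 1.7 / 0.02 = 85 m
```

Swap in a flat prior over sizes and the posterior over distance goes nearly flat too—the data by themselves never favored 85 m.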
This argument is supported by the existence of mirror neurons.

How so? Mirror neurons still have to know what salient aspect of the sense data they’re supposed to be mirroring. It’s not like there’s a one-to-one mapping between “monkey see” and “monkey do”.
Okay, I think I should take a minute to clarify where exactly we disagree. Starting from your conclusion:
So our current understanding of the scientific method must be incomplete: there is some way of obtaining reliable theories about the world other than the standard theorize/predict/test loop.

This by itself isn’t objectionable: of course you can move your probability distribution over your future observations closer to reality’s true distribution without controlled experiments. And Bayesian inference is how you do it (see the sketch after this comment).
But you also say:
Now observe that human children can learn to make very reliable predictions. So they must be doing some sort of science. But they don’t make controlled experiments.

I agree that children learn how to solve AI-complete problems, including reliable prediction in this environment (and also face recognition, character recognition, bipedal traversal of a path around obstacles, etc.). But you seem to have already concluded (too hastily, in my opinion) that the answer lies in a really good epistemology that children have, one that allows them to extract near-maximal knowledge from the data in their experiences.
I claim that this ignores other significant sources of the knowledge children have, which can explain how they gain (accurate) knowledge even when it’s not entailed by their sense data. For example, if some other process feeds them knowledge—itself gained through a reliable epistemology—then they can have beliefs that reflect reality, even though they didn’t personally perform the (Bayes-approximating) inference on the original data.
So that answers the question of how the person got the accurate belief without performing lots of controlled experiments, and the problem regresses to that of how the other process gained that knowledge and transmitted it to the person. And I say (based on my reading of Pinker’s How the Mind Works) that the most likely possibility for the “other process” is that of evolution.
As for the transmission mechanism, it’s most likely the interplay between the genes, the womb, and reliably present features of the environment. All of these can be exploited by evolution, in very roundabout ways, to increase fitness. For example, the DNA/womb system can interact in just the right way to give the brain a certain structure, favorable to some “rituals of cognition” but not others.
This is why I don’t expect you to find a superior epistemology by looking at how children handle their experiences—you’ll be stuck wondering why they make one inference from the data rather than another that’s just as well-grounded but wrong. Though I’m still interested in hearing why you think you’ve made progress and what insights your method has given you.
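The comment above suggests a two-timescale picture that is easy to caricature in code: an outer “evolution” loop does the expensive statistical work and hands the result to an inner learner as a prior; the inner “child” then needs only a few passive observations—no controlled experiments—to predict well. A toy Beta–Bernoulli sketch; the environment statistic (0.8), the five observations, and the crude hill-climb are all invented for illustration.

```python
import random

random.seed(1)

def child_error(prior_a, prior_b, p_env, n_obs):
    """Error of the posterior mean after n_obs passive Bernoulli observations."""
    heads = sum(random.random() < p_env for _ in range(n_obs))
    post_mean = (prior_a + heads) / (prior_a + prior_b + n_obs)  # conjugate update
    return abs(post_mean - p_env)

def fitness(prior_a, prior_b, trials=2000):
    """Average accuracy across the environments 'evolution' has encountered,
    which cluster around p = 0.8 (a stand-in for any stable world regularity)."""
    envs = (min(max(random.gauss(0.8, 0.05), 0.0), 1.0) for _ in range(trials))
    return -sum(child_error(prior_a, prior_b, p, n_obs=5) for p in envs) / trials

a, b = 1.0, 1.0                          # start ignorant: uniform prior
for _ in range(200):                     # "evolution": crude hill-climb on the prior
    a2 = max(0.1, a + random.gauss(0, 0.5))
    b2 = max(0.1, b + random.gauss(0, 0.5))
    if fitness(a2, b2) > fitness(a, b):
        a, b = a2, b2

print(f"evolved prior: Beta({a:.1f}, {b:.1f})")
print(f"5-observation error, uniform prior: {-fitness(1.0, 1.0):.3f}")
print(f"5-observation error, evolved prior: {-fitness(a, b):.3f}")
```

The point is the shape of the regress, not the numbers: the child’s update is cheap precisely because a different process, on a different timescale, already paid for the prior.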
I am reminded of a phrase from Yudkowsky’s An Intuitive Explanation of Bayes’ Theorem, which I was rereading today for no particularly good reason:

What is the so-called Bayesian Revolution now sweeping through the sciences, which claims to subsume even the experimental method itself as a special case?
On the off-chance you haven’t heard about this: Unconscious statistical processing in learning languages.
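That link is presumably about statistical-learning results in the style of Saffran, Aslin, and Newport’s infant experiments, where 8-month-olds segmented a continuous nonsense-syllable stream using nothing but transitional probabilities. The computation involved is tiny—a sketch, with made-up trisyllabic “words” of the kind used in such stimuli:

```python
import random
from collections import Counter

# Transitional-probability segmentation: TP(B|A) = count(AB) / count(A).
# Within-word transitions are (near-)deterministic; across-word transitions
# are ~1/3, so word boundaries show up as dips in TP.
random.seed(2)
words = ["tupiro", "golabu", "bidaku", "padoti"]       # nonsense trisyllables

stream = []                                   # continuous stream: no pauses, no stress
for _ in range(300):
    w = random.choice(words)
    stream += [w[i:i + 2] for i in (0, 2, 4)] # syllables: "tu", "pi", "ro", ...

pair_counts = Counter(zip(stream, stream[1:]))
first_counts = Counter(stream[:-1])
tp = {(a, b): n / first_counts[a] for (a, b), n in pair_counts.items()}

# Cut the stream wherever the transitional probability dips below a threshold.
segmented, word = [], stream[0]
for a, b in zip(stream, stream[1:]):
    if tp[(a, b)] < 0.5:
        segmented.append(word)
        word = b
    else:
        word += b
segmented.append(word)
print(sorted(set(segmented)))                 # recovers the four "words"
```

Note the learner needs a prior here too: the assumption that low-TP junctions are boundaries is exactly the kind of built-in expectation under discussion.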
Define a scientific method as any process by which reliable predictions can be obtained.

At the risk of being blunt, that sounds like a Humpty Dumpty move. There are many processes which yield reliable predictions that we don’t call science, and many processes we identify as part of the scientific method which don’t yield reliable predictions.
What you’ve said above can be re-expressed as “if we think theorize/predict/test is the only way to make reliable predictions about the world, then our current understanding of how to make reliable predictions is incomplete”. Well, I agree. :)
It’s been 30+ years since Paul Feyerabend wrote Against Method, and the idea that there is no such thing as “the scientific method” is no longer even the heresy it once was. He wrote that science is founded on methodological diversity, the only necessary prerequisite for a method’s inclusion being that it works. It sounds a bit like what you’re getting at, and I’d recommend looking into it if you haven’t already.
You apparently think that it isn’t necessary. I am quite sure it isn’t, too. We need some technical innovations, yes, but from the scientific point of view, it’s done.

On the contrary! Any major scientific revolution could use some AI power. I am not sure that so-called Quantum Gravity (or some kind of String Theory) can be constructed in a reasonable time without heavy AI involvement. It could be too hard for the naked human mind.

So yes, we probably need AI for a big scientific revolution, but no more scientific revolutions to build AI.
You apparently think that it isn’t necessary. I am quite sure it isn’t, too. We need some technical innovations, yes, but from the scientific point of view, it’s done.

Are you familiar with the current state of the art in AI? Can you point to a body of work that you think will scale up to AI with a few more “technical innovations”?
What art? What are you talking about? Every random action can pass as art.

Art is definitely not an AI problem.

Are you a native English speaker? “State of the art” refers to the most developed techniques and knowledge in a field.
I am not a native English speaker. But I do know what “state of the art” means. However, instead of debating much about that, I would first like to see an older question answered: NancyLebovitz’s, above—the same one I emphasized a little in a reply.

What scientific breakthroughs do we need before we can build a decent AI?
Sprichst du lieber Deutsch? Das ist eine Sprache, die ich auch kann. Willst du, dass ich manchmal für dich übersetze? [Do you prefer to speak German? That’s a language I know too. Do you want me to translate for you sometimes?]
ETA: Wow, I knew humans were extremely bigoted toward those not like them, but I would never have guessed that they’d show such bigotry toward someone merely for helping a possibly-German-speaking poster to communicate. Bad apes! No sex for you!
Unfortunately, my German is even worse than my English. A Google Translate chip in my head would be quite handy already.

OTOH … the spell checker is already installed in my browser, so I can spell much less wrong. ;-)
It might have helped if you’d explained yourself to onlookers in English, or simply asked in English (given Thomas’s apparently reasonable fluency).
I disagree with the downvotes, though.
All I said was, “Do you prefer to speak German? That’s a language that I can also do [sic]. Do you want me to sometimes translate for you?”
The decision to start speaking in German, where it was unnecessary for communicating what you said, was stupid and should be punished accordingly.
Sex is unnecessary and stupid too, ape. How about some tolerance for other’s differences? Oh, right, I forgot.
I think it was answered—a better understanding of how informal (non-scientific) learning works.
That might not be all that’s needed for AI, but I’m sure it’s a crucial piece.
And that would be a scientific revolution—a better understanding of how informal (non-scientific) learning works?
Let’s use such potent labels carefully!