Question: How many of you, readers and contributors here on this site, actually do work on some (nontrivial) AI project?
Or have an intention to do that in the future?
Yes, I have an intention to do so, because I’m convinced that it is very important to the future of humanity. I don’t quite know how I’ll be able to contribute yet, but I think I’m smart and creative enough that I’ll be able to acquire the necessary knowledge and thinking habits (that’s the part I’m working on these days) and eventually contribute something novel, if I can do all that soon enough for it to matter.
I’m working on one as part of a game, where I’m knocking off just about every concept I’ve run into—goal systems, Eurisko-type self-modifying code, AIXI, etc. I’ll claim it’s nontrivial because the game is, and I very much intend to make it unusually smart by game standards.
But that’s not really true AI. It’s for fun, as much as anything else. I’m not going to claim it works very well, if at all; it’s just interesting to see what kind of code is involved.
(I have, nevertheless, considered FAI. There’s no room to implement it, which was an interesting thing to discover in itself. Clearly my design is insufficiently advanced.)
I happened to see this today, which you might find interesting. He’s using genetic algorithms to evolve the creatures that the player has to kill. At one point they evolved to exploit bugs in the game.
As a programmer, I’m curious exactly how there is no room to implement it. (I understand the “no room” concept, but want details.)
The most obvious problem, which I suspect most games would have in common, is that it has no notion that it’s a game. As far as the AI is concerned, the game world is all there is.
It wants to win a war, and it has no idea that there’s a player on the other side. Building up its understanding to the point where that is not the case would be, well, both way too much work and probably beyond my abilities.
May I ask what game?
You can ask, but at the moment it’s more of a design document plus some proof-of-concept algorithms. 99% incomplete, in other words, and I don’t really want to get people excited over something that might never come to fruition.
I can’t really describe the game, because that’d be mostly wishful thinking, but perhaps some design criteria will satisfy your curiosity. So, some highlights I guess:
4X space-based RTS. Realism is important: I want this to look somewhat like reality, with the rule of fun applied only where it has to be, not as an attempt to justify bad physics.
Therefore, using non-iterative equations where possible (and in some places where they really shouldn’t be used) to allow for drastic changes in simulation speed—smoothly going from slower than realtime during battles to a million times faster for slower-than-light interstellar travel. That is to say, using equations that do a constant amount of work to return the state at time T, instead of doing work proportional to the amount of in-game time that has passed (see the sketch after this list).
Therefore, having a lot of systems able to work at (and translate between) multiple levels of abstraction. Things that require an iterative simulation to look good when inspected in real time may be unnoticeably off as a cheaper abstraction if time moves a thousand times faster.
To support this, I’m using an explicit cause-effect dependency graph, which led me to...
Full support for general relativity. Obviously that makes it a single-player game, but the time compression pretty much requires that already.
Causality, FTL, Relativity—pick any two. I’m dropping causality. The cause-effect graph makes it relatively (ha ha, it is to laugh—theoretically it’s just looking for cycles, but the details are many) simple to detect paradoxes. What happens if there are paradoxes, though... that, I don’t know yet. Anything from gribbly Lovecraftian horrors to wormholes spontaneously collapsing will do.
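To make two of the points above concrete (the non-iterative state equations and the cycle-based paradox check), here is a minimal hypothetical sketch in Python. The names and numbers are invented for illustration and this is not the game’s actual code, just the general shape of the idea: an object’s state is a closed-form function of time, so querying it at any T costs the same no matter how much in-game time has elapsed, and a paradox is simply a cycle in the cause-effect dependency graph, found by an ordinary depth-first search.

```python
# Hypothetical sketch only: structure and names are invented for illustration,
# not taken from the game's actual design.
from dataclasses import dataclass, field

@dataclass(eq=False)  # identity-based equality/hashing so Events can be dict keys
class Event:
    name: str
    causes: list = field(default_factory=list)  # events this one depends on

def closed_form_position(x0, v, t):
    """Non-iterative state query: constant work whether t is a second or a century."""
    return x0 + v * t

def find_paradox(events):
    """Return a list of Events forming a causal cycle, or None if the graph is acyclic."""
    WHITE, GREY, BLACK = 0, 1, 2  # unvisited / on the current path / finished
    color = {e: WHITE for e in events}

    def visit(event, path):
        color[event] = GREY
        path.append(event)
        for cause in event.causes:
            if color[cause] == GREY:  # back-edge: an effect sits in its own causal past
                return path[path.index(cause):]
            if color[cause] == WHITE:
                cycle = visit(cause, path)
                if cycle:
                    return cycle
        path.pop()
        color[event] = BLACK
        return None

    for event in events:
        if color[event] == WHITE:
            cycle = visit(event, [])
            if cycle:
                return cycle
    return None

# One cheap evaluation covers a million seconds of coasting.
print(closed_form_position(0.0, 7.5, 1_000_000.0))  # 7500000.0

# A ship launches, FTL-jumps through a wormhole, and the jump's exit ends up in
# the causal past of the launch, so the graph now contains a cycle.
launch = Event("launch")
jump = Event("ftl_jump", causes=[launch])
launch.causes.append(jump)
print([e.name for e in find_paradox([launch, jump])])  # ['launch', 'ftl_jump']
```

In a real engine the edges would come from light cones and wormhole connections rather than being wired by hand, but the paradox check itself stays this simple in principle: no cycle, no paradox.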
Hopefully, I’ll find the time to work on it, because it sounds like an interesting game to play. :P
Ambitious time travel (or anomalous causality) game mechanics are fun.
There’s the Achron RTS game which involves some kind of multiplayer time travel. As far as I can tell, they deal with paradoxes by cheating with a “real” time point that progresses normally as the players do stuff. There is only a window of a few minutes around the real time point to do time-travel stuff in, and things past the window get frozen into history. Changes in the past also propagate slowly into the rest of the timeline as the real time point advances. So paradoxes end up as oscillations of timewaves until some essential part moves out of the time travel window and gets frozen in an arbitrary state.
I’m not sure how well something similar could work with a relativistic space game where you end up displaced in time just by moving around, instead of using gamedev-controllable magic time-travel juice.
Your concept also kinda reminds me of a very obscure Amiga game called Gravity. Based on the manual, it had relativistic space-time, programmable drones, terraforming planets from gas giants, and all sorts of hard-SF space-game craziness not really seen in games nowadays.
I’ve been playing Achron, but it’s not really an inspiration. How should I put it..
My understanding of physics is weak enough without trying to alter it. If I stick as closely as possible to real-life physics, I know I won’t run into any inconsistencies.
Therefore, there will be no time-travel. I might do something cute with paradoxes later, but the immediate solution for those is to blow the offending ship or wormhole up, as real-life wormholes have been theorized to do via virtual particles.
Blow up the paradox-causing FTL? Sounds like that could be weaponized.
I was about to go into detail about the implications of FTL and relativity but realized that my understanding is way too vague for that. Instead, I googled up a “Relativity and FTL travel” FAQ.
I love the idea of strategically manipulating the FTL simultaneity landscape for offensive/defensive purposes. How are you planning to decide what breaks, and how severely, if a paradox is detected?
I think the only possible answer to that is “through play-testing”.
As I understand it, real-life wormhole physics gives enormous advantages to a defender. However, this is a wargame, so I will have to limit that somewhat. Exactly how, and to what degree—well, that’s something I will be confronting in a year or two.
(And yes, it could be weaponized. Doing so might not be a good idea, depending on the Lovecraft parameter, but you can certainly try.)
Did this ever get made? I have had (what feel like separate) intentions to make a game with most of these bullet points (minus relativity).
I have an (at least implicit, skill-level) understanding of how one would account for causality (essentially meta-time).
I am writing a book about a new approach to AI. The book is a roadmap; after I’m finished, I will follow the roadmap. That will take many years.
I have near-zero belief that AI can succeed without a major scientific revolution.
I’m interested in what sort of scientific revolution you think is needed and why.
Well… you’ll have to read the book :-)
Here’s a hint. Define a scientific method as any process by which reliable predictions can be obtained. Now observe that human children can learn to make very reliable predictions. So they must be doing some sort of science. But they don’t make controlled experiments. So our current understanding of the scientific method must be incomplete: there is some way of obtaining reliable theories about the world other than the standard theorize/predict/test loop.
Danger! You’re not looking at the whole system. Children’s knowledge doesn’t just come from their experience after birth (or even conception), but is implicitly encoded by the interplay between their DNA, the womb, and certain environmental invariants. That knowledge was accumulated through evolution.
So children are not, in their early development, using some really awesome learning algorithm (and certainly not a universally applicable one); rather, they are born with a huge “boost”, and their post-natal experiences need to fill in relatively little information, as that initial, implicit knowledge heavily constrains how sensory data should be interpreted and gives useful assumptions that help in modeling the world.
If you were to look only at what sensory data children get, you would find that it is woefully insufficient to “train” them to the level they eventually reach, no matter what epistemology they’re using. It’s not that there’s a problem with the scientific method (though there is), or that we have some powerful Bayesian algorithm to learn from childhood development, but rather, children are springboarding off of a larger body of knowledge.
You seem to be endorsing the discredited “blank slate” paradigm.
A better strategy would be to look at how evolution “learned” and “encoded” that data, and how to represent such assumptions about this environment. That’s what I’m attempting to do with a model I’m working on: it will incorporate the constraints imposed by thermodynamics, life, and information theory, so I can see what “intelligence” means in such a model, and how to get it.
(By the way, I made essentially this same point way back when. I think the same point holds here.)
Re: “If you were to look only at what sensory data children get, you would find that it is woefully insufficient to “train” them to the level they eventually reach, no matter what epistemology they’re using.”
That hasn’t been demonstrated—AFAIK.
Children are not blank slates—but if they were highly intelligent agents with negligible a priori knowledge, they might well wind up eventually being much smarter than adult humans. In fact that would be strongly expected—for a sufficiently smart agent.
If large amounts of our knowledge base were encoded through evolution, we would see people with weird, specific cognitive deficits—say, the inability to use nouns—as a result of genetic mutations. Or, more obviously, we would observe people who have functioning eyes but nevertheless can’t see because they failed to learn how. Now, we do see people with strange specific deficits, but only as a result of stroke or other brain damage. The genetic deficits we do see are all much more general.
If you were to look only at what sensory data children get, you would find that it is woefully insufficient to “train” them to the level they eventually reach, no matter what epistemology they’re using.
How do you know? Are you making some sort of Chomskian poverty of the stimulus argument?
That’s not necessarily the case. You are assuming a much more narrow encoding system than necessary. One doesn’t need a direct encoding of specific genes going to nouns or the like. Remember, evolution is messy and doesn’t encode data in the direct fashion that a human would. Moreover, some problems we see are in fact pretty close to this. For example, many autistic children have serious problems handling how pronouns work (such as some using “you” to refer to themselves and “I” to refer to anyone else). Similarly, there’s a clear genetic distinction in language processing between humans and other primates in that many of the “sentences” constructed by apes which have been taught sign language completely lack verbs and almost never have any verb other than an imperative.
Well, people do have weird, specific cognitive deficits, but not of that kind. Remember, grammatical structures themselves are the result of human-specific tendencies to form tools into a shape our native hardware is already capable of handling (pardon the metaphor overload). Grammatical structures are not a list of arbitrary rules imposed by aliens, but conventions that already make sense to human brains.
In any case, keep in mind that I said the information accumulated by evolution is stored in the interplay of the genes and the womb, and invariant features of the environment. The way that e.g. noun use comes about is a result of a combination of these; like JoshuaZ notes, there needn’t be a gene-to-noun mapping under this theory.
Or, more obviously, we would observe people who have functioning eyes but nevertheless can’t see because they failed to learn how.
We do see this! It’s possible to be blind despite having functioning eyes simply because the brain didn’t receive sensory information from the eyes early on. It’s called Amblyopia:
The problem is caused by either no transmission or poor transmission of the visual stimulation through the optic nerve to the brain for a sustained period of dysfunction or during early childhood, thus resulting in poor or dim vision.
In other words, an expected environmental invariant—light being regularly fed through the eye—wasn’t present, preventing the manifestation of the accumulated knowledge of evolution.
I’m making a POTS argument, but more based on reading Pinker. There are patterns common to all languages, and there are kinds of grammatical errors children never make. This, along with similar phenomena in other areas, shows that children aren’t blank slates that accept whatever they get, but are born with a kind of pre-formatting that has certain expectations of their stimuli that causes them to constrain the solution set to the point where they don’t need the massive data that would be necessary to train a blank slate.
We seem to be arguing over a minor point. All knowledge comes from a combination of evolution and learning. We disagree about how much comes from one or the other.
I’ll say one thing about the POTS argument, though. The basic idea is that people compare the amount of linguistic data absorbed by the child to his linguistic competence, find that the latter cannot be explained by the former, and conclude that there must be some sort of built-in language module. But they might be oversimplifying the data vs. competence comparison. What really happens is that the child absorbs a huge amount of visual and motor data, as well as a relatively smaller amount of linguistic data, and comes out with sophisticated competence in all three domains. So it may very well be that the linguistic competence is built on top of the visual and motor competences: the learning algorithm builds modules to understand visual reality, justified by the massive amount of visual data that is available, and then is able to reuse these modules to produce sophisticated linguistic competence in spite of the impoverished linguistic data source. Language, in this view, is a thin wrapper over the representations built by the learning algorithm for other purposes. This argument is supported by the existence of mirror neurons.
Well, earlier, the way you had stated your position, it looked like you were saying that all knowledge acquisition (or nearly all) comes from sense data, and children use some method, superior to scientific experimentation, to maximally exploit that data. If you grant a role for evolution to be “passing correct answers” to human minds, then yes, our positions are much closer than I had thought.
But still, it’s not enough to say “evolution did it”. You would have to say how the process of evolution—which works only via genes—gains that knowledge and converts it into a belief on the part of the organism. Your research program, as you’ve described it, mentions nothing about this.
The problem of vision (inference of a 3-D scene from a 2-D image) is ill-posed and has an even more intractable search space. It doesn’t seem like a child’s brain (given the problem of local optima) even has enough time to reach the hypothesis that a 3-D scene is generating the sense data. But I’d be happy to be proven wrong by seeing an algorithm that would identify the right hypothesis without “cheating” (i.e. being told where to look, which is what I claim evolution does).
How so? Mirror neurons still have to know what salient aspect of the sense data they’re supposed to be mirroring. It’s not like there’s a one-to-one mapping between “monkey see” and “monkey do”.
Okay, I think I should take a minute to clarify where exactly we disagree. Starting from your conclusion:
So our current understanding of the scientific method must be incomplete: there is some way of obtaining reliable theories about the world other than the standard theorize/predict/test loop.
This by itself isn’t objectionable: of course you can move your probability distribution on your future observations closer to reality’s true probability distribution without controlled experiments. And Bayesian inference is how you do it.
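As a tiny worked illustration of that claim (the scenario and numbers are invented here, not taken from the discussion): an observer who never intervenes and only watches outcomes still moves probability toward the better hypothesis by plain Bayesian updating.

```python
# Minimal sketch: Bayesian updating from passive observation, no controlled experiment.
def posterior(prior, likelihoods, observations):
    """Update a discrete prior over hypotheses given a sequence of observations.

    prior:        dict hypothesis -> P(h)
    likelihoods:  dict hypothesis -> {observation: P(obs | h)}
    observations: iterable of observed outcomes
    """
    post = dict(prior)
    for obs in observations:
        post = {h: p * likelihoods[h][obs] for h, p in post.items()}
        total = sum(post.values())
        post = {h: p / total for h, p in post.items()}  # renormalize after each update
    return post

# Two hypotheses about a process we merely watch: roughly fair vs. heavily skewed.
prior = {"fair": 0.5, "biased": 0.5}
likelihoods = {
    "fair":   {"heads": 0.5, "tails": 0.5},
    "biased": {"heads": 0.9, "tails": 0.1},
}

observed = ["heads"] * 8 + ["tails"] * 2
print(posterior(prior, likelihoods, observed))
# -> roughly {'fair': 0.18, 'biased': 0.82}: most of the mass shifts to "biased",
#    and with it the predictive probability of the next observation.
```

A controlled experiment only changes which observations you get to condition on; the update rule that moves your predictive distribution toward reality’s is the same either way.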
But you also say:
Now observe that human children can learn to make very reliable predictions. So they must be doing some sort of science. But they don’t make controlled experiments.
I agree that children learn how to solve AI-complete problems, including reliable prediction in this environment (and also face-recognition, character-recognition, bipedal traversal of a path barring obstacles, etc.). But you seem to have already concluded (too hastily, in my opinion) that the answer lies in a really good epistemology that children have that allows them to extract near-maximal knowledge from the data in their experiences.
I claim that this ignores other significant sources of the knowledge children have, which can explain how they gain (accurate) knowledge even when it’s not entailed by their sense data. For example, if some other process feeds them knowledge—itself gained through a reliable epistemology—then they can have beliefs that reflect reality, even though they didn’t personally perform the (Bayes-approximating) inference on the original data.
So that answers the question of how the person got the accurate belief without performing lots of controlled experiments, and the problem regresses to that of how the other process gained that knowledge and transmitted it to the person. And I say (based on my reading of Pinker’s How the Mind Works) that the most likely possibility for the “other process” is that of evolution.
As for the transmission mechanism, it’s most likely the interplay between the genes, the womb, and reliably present features of the environment. All of these can be exploited by evolution, in very roundabout ways, to increase fitness. For example, the DNA/womb system can interact in just the right way to give the brain a certain structure, favorable to some “rituals of cognition” but not others.
This is why I don’t expect you to find a superior epistemology by looking at how children handle their experiences—you’ll be stuck wondering why they make one inference from the data rather than another that’s just as well-grounded but wrong. Though I’m still interested in hearing why you think you’ve made progress and what insights your method has given you.
I am reminded of a phrase from Yudkowsky’s An Intuitive Explanation of Bayes’ Theorem, which I was rereading today for no particularly good reason:
What is the so-called Bayesian Revolution now sweeping through the sciences, which claims to subsume even the experimental method itself as a special case?
On the off-chance you haven’t heard about this: Unconscious statistical processing in learning languages.
Define a scientific method as any process by which reliable predictions can be obtained.
At the risk of being blunt, that sounds like a Humpty Dumpty move. There are many processes which yield reliable predictions that we don’t call science, and many processes we identify as part of the scientific method which don’t yield reliable predictions.
What you’ve said above can be re-expressed as “if we think theorize/predict/test is the only way to make reliable predictions about the world, then our current understanding of how to make reliable predictions is incomplete”. Well, I agree. :)
It’s been 30+ years since Paul Feyerabend wrote Against Method, and the idea that the “scientific method” is nonexistent is no longer even the heresy it once was. He wrote that science is founded on methodological diversity, the only necessary prerequisite for any method’s inclusion being that it works. It sounds a bit like what you’re getting at, and I’d recommend looking into it if you haven’t already.
You apparently think that it isn’t necessary. I am quite sure it isn’t, either. We need some technical innovations, yes, but from the scientific point of view, it’s done.
On the contrary! Any major scientific revolution could use some AI power. I am not sure that so-called Quantum Gravity (or a String Theory of some kind) can be constructed in a reasonable time without big AI involvement. It could be too hard for the naked human mind.
So yes, we probably need AI for a big scientific revolution, but no more scientific revolutions to build AI.
Are you familiar with the current state of the art in AI? Can you point to a body of work that you think will scale up to AI with a few more “technical innovations”?
What art? What are you talking about? Every random action can go as art.
Art is definitively not an AI problem.
Are you a native English speaker? State of the art refers to the best developed techniques and knowledge in a field.
I am not a native English speaker. But I do know what “state of the art” means. However, instead of debating much about that, I would first like to see an older question answered: the one from NancyLebovitz. It is above, the same one I have emphasized a little in a reply.
What scientific breakthroughs do we need before we can build a decent AI?
Sprichst du lieber Deutsch? Das ist eine Sprache, die ich auch kann. Willst du, dass ich manchmal für dich übersetze? [Do you prefer to speak German? That is a language I also know. Do you want me to translate for you sometimes?]
ETA: Wow, I knew humans were extremely bigoted toward those not like them, but I would never have guessed that they’d show such bigotry toward someone merely for helping a possibly-German-speaking poster to communicate. Bad apes! No sex for you!
Unfortunately, my German is even worse than my English. A Google Translate chip in my head would be quite handy already.
OTOH … the spell-checker is already installed in my browser, so I can spell much less wrong. ;-)
It may have helped if you’d explained yourself to onlookers in English, or simply asked in English (given Thomas’s apparent reasonable fluency).
I disagree with the downvotes, though.
All I said was, “Do you prefer to speak German? That’s a language that I can also do [sic]. Do you want me to sometimes translate for you?”
The decision to start speaking in German, where it’s unnecessary for communicating what you’ve said, was stupid and should be punished accordingly.
Sex is unnecessary and stupid too, ape. How about some tolerance for other’s differences? Oh, right, I forgot.
I think it was answered—a better understanding of how informal (non-scientific) learning works.
That might not be all that’s needed for AI, but I’m sure it’s a crucial piece.
And that would be a scientific revolution—a better understanding of how informal (non-scientific) learning works?
Let’s use such potent labels carefully!
I used to be interested in working on AI, but my current general understanding of the field indicates that for me to do anything worthwhile in the field would require acquiring a lot of additional knowledge and skills — or possibly having a different sort of mind. I am spending my time on other projects where I more readily see how I can do something worthwhile.
Count me as “having an intention to do that in the future”. Although I’m currently just an undergraduate studying math and computer science, I hope to (within 5-10 years) start doing everything I can to help with the task of FAI design.