I’m not disagreeing with the general thrust of your comment, which I think makes a lot of sense.
But it is not at all required that an AGI start out with the ability to parse human languages effectively. An AGI is an alien. It might grow up with a completely different sort of intelligence, and only in the late stages of its growth gain the ability to interpret and model human thoughts and languages.
We consider “write fizzbuzz from a description” to be a basic task of intelligence because it is for humans. But humans are the most complicated machines in the solar system, and we are naturally good at dealing with other humans because we instinctively understand them to some extent. An AGI may be able to accomplish quite a lot before it can comprehend human-style intelligence through raw general intelligence and massive amounts of data and study.
I agree that natural language understanding is not a necessary requirement for an early AGI, but I would say that by definition an AGI would have to be good at the sort of cognitive tasks humans are good at, even if communication with humans were somehow difficult. Think of making first contact with an undiscovered human civilization, or better, with a civilization of space-faring aliens.
… raw general intelligence …
Note that it is unclear whether there is any way to achieve “general intelligence” other than by combining lots of modules specialized for the various cognitive tasks we consider to be necessary for intelligence. I mean, Solomonoff induction, AIXI and the like do certainly look interesting on paper, but the extent to which they can be applied to real problems (if that is even possible) without any specialization is not known.
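To make that caveat concrete, here is a toy, computable stand-in for Solomonoff induction. The hypothesis space is drastically specialized (every “program” just repeats a fixed bit pattern forever, weighted by 2^−length); real Solomonoff induction ranges over all programs and is incomputable, which is exactly the gap between the paper version and anything practical:

```python
from itertools import product

def predictions(observed, max_len=8):
    """Toy Solomonoff-style induction: hypotheses are 'repeat this
    pattern forever' programs, with prior mass 2^-length (shorter
    patterns count for more). Returns the posterior over the next bit."""
    weights = {0: 0.0, 1: 0.0}
    for n in range(1, max_len + 1):
        for pat in product([0, 1], repeat=n):
            # stream the pattern and keep only hypotheses consistent
            # with everything observed so far
            stream = [pat[i % n] for i in range(len(observed) + 1)]
            if stream[:len(observed)] == list(observed):
                weights[stream[len(observed)]] += 2.0 ** -n
    total = weights[0] + weights[1]
    return {b: w / total for b, w in weights.items()}

# After 0,1,0,1,0 the shortest consistent pattern is "01",
# so the posterior should favour 1 as the next bit.
print(predictions([0, 1, 0, 1, 0]))
```

Even this toy only works because the hypothesis class was hand-specialized to repeating patterns; that hand-specialization is the open question.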
The human brain is based on a fairly general architecture (biological neural networks), instantiated into thousands of specialized modules. You could argue that biological evolution should be included in human intelligence at a meta level, but biological evolution is not a goal-directed process, and it is unclear whether humans (or human-like intelligence) were a likely outcome or a fortunate occurrence.
Anyway, even if it turns out that “universal induction” techniques are actually applicable to a practical human-made AGI, given the economic interests of humans I think that before seeing a full AGI we should see lots of improvements in narrow AI applications.
I agree that natural language understanding is not a necessary requirement for an early AGI, but I would say that by definition an AGI would have to be good at the sort of cognitive tasks humans are good at, even if communication with humans was somehow difficult.
I think we’re now saying the same thing, but to be clear: I don’t think it follows at all that an AGI needs to be good at X, for any interesting X, in order to be considered an AGI. Instead it need only satisfy the meta-level condition: it must be able to become good at X, if doing so accomplishes its goals and it is given suitable inputs and processing power for that learning task.
Indeed, my blitz AGI design involves no natural language processing components at all. The initial goal loading and debug interfaces would be via a custom language best described as a cross between vocabulary-limited Lojban and a strongly typed functional programming language. Having looked at the best approaches to NLP so far (Watson et al.), and expert opinions on what would be required to go beyond that and build a truly human-level understanding of language, I found nothing that could not be rediscovered and developed by a less capable seed AI, if given sufficient resources and time.
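Purely as an illustration of what such a goal-loading interface might look like (every name and signature below is invented for the example, not taken from any actual design), the point of the strong typing is that ill-formed goals get rejected before they are ever loaded:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Lit:           # typed literal, e.g. Lit(10, "Int")
    value: object
    type: str

@dataclass(frozen=True)
class App:           # application of a typed primitive to arguments
    op: str
    args: tuple

# hypothetical primitive signatures: op -> (argument types, result type)
SIGNATURES = {
    "GreaterEq":   (("Int", "Int"), "Bool"),
    "And":         (("Bool", "Bool"), "Bool"),
    "SensorCount": (("Symbol",), "Int"),    # invented sensor query
}

def typecheck(term):
    """Return the term's type, raising TypeError on any mismatch --
    a goal is only loadable if it checks as a closed Bool term."""
    if isinstance(term, Lit):
        return term.type
    arg_types, result = SIGNATURES[term.op]
    actual = tuple(typecheck(a) for a in term.args)
    if actual != arg_types:
        raise TypeError(f"{term.op} expects {arg_types}, got {actual}")
    return result

goal = App("GreaterEq",
           (App("SensorCount", (Lit("stockpile", "Symbol"),)),
            Lit(10, "Int")))
print(typecheck(goal))   # Bool
```

A Lojban-like controlled vocabulary would sit on top of terms like these; the unambiguous typed core is what makes the debug interface tractable.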
Note that it is unclear whether there is any way to achieve “general intelligence” other than by combining lots of modules specialized for the various cognitive tasks we consider to be necessary for intelligence.
Ok, try this experiment: start with a high-level diagram of what you would consider to be a complete human-level AGI design, i.e. able to do everything a human can do, as well or better. I think we’re on the same page in assuming that at least on one level it would consist of a ton of little specialized programs handling the various specialized aspects of human intelligence. Enumerate all of these, and take a guess at how they are interconnected. I doubt you’ll be able to fit it all on one sheet of paper, or even ten. Here’s a start based on OpenCog, but there are lots more details you will need to fill in:
http://goertzel.org/MonsterDiagram.jpg
Now consider each component in turn. If you cut that component out of the diagram (perhaps rearranging some of the connections as necessary), could you reliably recreate it with the remaining pieces, if tasked with doing so and given the necessary inputs and processing power? If so, get rid of it. If not, ask: what are the minimum (less than human-level) capabilities required, which let you recreate the rest? Replace with that. Continue until the design can’t be simplified further.
This experiment is a form of local search, and you may have to repeat from different starting points, or employ other global search methods to be sure that you are arriving at something close to the global minimum seed AGI design, but as an exercise I hope it gets the point across.
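The pruning procedure above can be sketched as greedy local search over a toy component graph. The components and the “recreatable-from” relations below are invented stand-ins for the sake of the example, not claims about any real architecture:

```python
def prune(components, recreatable_from):
    """Repeatedly drop any component that the remaining set could, in
    principle, recreate; stop at a local minimum. recreatable_from maps
    a component to a set of components sufficient to rebuild it."""
    kept = set(components)
    changed = True
    while changed:
        changed = False
        for c in sorted(kept):
            needed = recreatable_from.get(c)
            if needed is not None and needed <= (kept - {c}):
                kept.discard(c)      # the rest can rebuild c; cut it
                changed = True
    return kept

# toy example: NLP and planning are rebuildable from induction plus the
# internal language; the seed components have no recreation recipe.
deps = {
    "nlp":      {"induction", "internal_language"},
    "planning": {"induction", "internal_language", "memory"},
}
seed = prune({"induction", "internal_language", "memory",
              "nlp", "planning"}, deps)
print(sorted(seed))   # ['induction', 'internal_language', 'memory']
```

As with any local search, the fixed point depends on which components you consider cutting first, hence the need for restarts from different starting designs.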
The basic AGI design I arrived at involved a dozen different “universal induction” techniques with different strengths, a meta-architecture for linking them together, a generic and powerful internal language for representing really anything, and basic scaffolding to stand in for the rest. It’s damn slow and inefficient at first, but like a human infant a good portion of its time would be spent “dreaming”, analyzing its acquired memories and seeking improvements to its own processes… and gains there have multiplying effects. Don’t discount the importance of power-law mechanisms.
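One minimal way to read “a meta-architecture linking induction techniques together” is prediction with expert advice: each inductor votes, and multiplicative weights shift mass toward whichever one is actually working. The three toy predictors below are stand-ins, not real induction algorithms, and this is a sketch of the linking idea only:

```python
def always(bit):
    return lambda history: bit

def copy_last(history):
    return history[-1] if history else 0

def alternate(history):
    return 1 - history[-1] if history else 0

def run_mixture(sequence, experts, eta=0.5):
    """Weighted-majority meta-learner: predict by weighted vote, then
    multiplicatively punish every expert that guessed wrong."""
    weights = [1.0] * len(experts)
    mistakes = 0
    history = []
    for bit in sequence:
        votes = [e(history) for e in experts]
        mass1 = sum(w for w, v in zip(weights, votes) if v == 1)
        guess = 1 if mass1 >= sum(weights) / 2 else 0
        mistakes += (guess != bit)
        weights = [w * (eta if v != bit else 1.0)
                   for w, v in zip(weights, votes)]
        history.append(bit)
    return mistakes, weights

seq = [0, 1, 0, 1, 0, 1, 0, 1]
mistakes, final_w = run_mixture(seq, [always(0), always(1),
                                      copy_last, alternate])
print(mistakes, final_w)   # the 'alternate' expert ends up dominant
```

The mistake bound of such schemes degrades only logarithmically in the number of experts, which is one reason bolting together a dozen inductors behind one meta-level is not as wasteful as it sounds.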