Common sense and natural language understanding are suspected to be ‘AI complete’. (p14) (Recall that ‘AI complete’ means ‘basically equivalent to solving the whole problem of making a human-level AI’)
Do you think they are? Why?
I think AI-completeness is quite a seductive notion. Borrowing the concept of reduction from complexity/computability theory makes it sound technical, but unlike in those fields I haven’t seen anyone actually describe, e.g., how to use an AI with perfect language understanding to produce another one that proves theorems or philosophizes.
Spontaneously it feels like everyone here should in principle be able to sketch the outlines of such a program (at least when the base AI we are reducing to has perfect language comprehension), probably by some version of teaching the AI as we teach a child, in natural language. I suspect the details of some of these reductions might still be useful, especially the parts that don’t quite seem to work. For while I don’t think we’ll see perfect machine translation before AGI, I’m much less convinced that there is a reduction from AGI to a perfect translation AI. This illustrates what I suspect is an interesting difference between two problem classes that we might both want to call AI-complete: the problems human programmers will likely not be able to solve before we create superintelligence, and the problems whose solutions we could (somewhat) easily re-purpose to solve the general problem of human-level AI. These classes look the same in that we shouldn’t expect to see problems from either of them solved without an imminent singularity, but differ in that problems in the latter class could serve as motivating examples and test cases for AI work aimed at producing superintelligence.
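To make the kind of reduction I have in mind concrete, here is a minimal, purely hypothetical sketch. It assumes a black-box `language_oracle` with perfect natural-language comprehension and a separate mechanical `proof_checker`; both interfaces are invented for illustration, and nothing here argues that comprehension alone would let the oracle produce good proof steps.

```python
# Purely hypothetical sketch of a reduction from "perfect language understanding"
# to theorem proving, in the spirit of teaching the AI as one teaches a child.
# `language_oracle` and `proof_checker` are invented stand-ins, not real systems.

def language_oracle(prompt):
    """Assumed black box with perfect natural-language comprehension."""
    raise NotImplementedError

def proof_checker(candidate_proof):
    """Assumed mechanical verifier (think proof assistant); returns (ok, error_msg)."""
    raise NotImplementedError

def prove(theorem, max_rounds=100):
    """Try to coax a verified proof out of the oracle by iterated correction."""
    lesson = (
        "I will teach you to prove theorems. A proof is a list of steps that a "
        "mechanical checker accepts. Please prove, step by step, in checkable "
        "form: " + theorem
    )
    for _ in range(max_rounds):
        attempt = language_oracle(lesson)
        ok, error_msg = proof_checker(attempt)
        if ok:
            return attempt          # the reduction worked for this input
        # Feed the checker's complaint back, as a teacher corrects a student.
        lesson += "\nYour last attempt was rejected because: " + error_msg + ". Please revise."
    return None                     # gave up; no verified proof found
```

The interesting part is precisely where this breaks down: the loop only extracts theorem-proving ability if understanding the lesson somehow confers the ability to act on it.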
I guess the core of what I’m trying to say is that arguments about AI-completeness have so far sounded like: “This problem is very, very hard; we don’t really know how to solve it. AI in general is also very, very hard, and we don’t know how to solve it. So they should be the same.” Heuristically there’s nothing wrong with this, except we should keep in mind that we could be very mistaken about what is actually hard. I’m just missing the part that goes: “This is very, very hard. But if we knew how to do it, this other thing would be really easy.”
A different (non-technical) way to argue for their reducibility is through analysis of the role of language in human thought. The logic is that language by its very nature extends into all aspects of cognition (little human thought of interest takes place outside its reach), and so one cannot do one without the other. I believe that’s the rationale behind the Turing test.
It’s interesting that you mention machine translation, though. I wouldn’t equate that with language understanding. Modern translation programs are getting very good, and may in time be “perfect” (indistinguishable from competent native speakers), but they do this through pattern recognition and leveraging a massive corpus of translation data, not through understanding what they translate.
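As a caricature of that point, a corpus-driven translator can be sketched as phrase lookup plus frequency counts. The phrase table below is invented for illustration, and real systems (statistical or neural) are vastly more sophisticated, but nothing in the pipeline represents meaning.

```python
from collections import Counter

# Toy phrase table: source phrase -> translations observed in a parallel corpus.
# Entries and counts are invented for illustration.
PHRASE_TABLE = {
    "guten morgen": Counter({"good morning": 42, "good day": 3}),
    "wie geht es dir": Counter({"how are you": 57, "how is it going": 11}),
}

def translate(source):
    """Return the most frequently observed translation for a known phrase.

    Pure pattern matching over corpus statistics; no representation of meaning.
    """
    phrase = source.lower().strip()
    if phrase in PHRASE_TABLE:
        return PHRASE_TABLE[phrase].most_common(1)[0][0]
    return "<unknown>"

print(translate("Guten Morgen"))   # -> "good morning"
```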
I think that “the role of language in human thought” is one of the ways that AI could be very different from us. There is research into the way that different languages affect cognitive abilities (e.g. https://psych.stanford.edu/~lera/papers/sci-am-2011.pdf). One of the examples given is that, as a native English speaker, I may have more difficulty learning the base-10 structure in numbers than a Mandarin speaker because of the difference in the number words used in these languages. Language can also affect memory, emotion, etc.
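To illustrate the number-word point: Mandarin builds numbers past ten compositionally (“ten-one” for 11, “two-ten-three” for 23), while English requires memorising opaque forms like “eleven” and “twelve” before the pattern emerges. A toy sketch, using rough transliterations:

```python
# Toy illustration of compositional vs. irregular number words (11-99).
DIGITS_ZH = ["", "yi", "er", "san", "si", "wu", "liu", "qi", "ba", "jiu"]

def mandarin_style(n):
    """Mandarin-style compositional naming, e.g. 23 -> 'er shi san' ('two-ten-three')."""
    tens, ones = divmod(n, 10)
    parts = []
    if tens > 1:
        parts.append(DIGITS_ZH[tens])
    parts.append("shi")                 # 'ten'
    if ones:
        parts.append(DIGITS_ZH[ones])
    return " ".join(parts)

# English, by contrast, needs memorised irregular forms before the pattern starts:
ENGLISH_TEENS = ["eleven", "twelve", "thirteen", "fourteen", "fifteen",
                 "sixteen", "seventeen", "eighteen", "nineteen"]

for n in (11, 12, 23):
    print(n, "->", mandarin_style(n))
# 11 -> 'shi yi', 12 -> 'shi er', 23 -> 'er shi san': the base-10 structure is explicit.
```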
I’m guessing that an AI’s cognitive ability wouldn’t change no matter what human language it’s using, but I’d be interested to know what people doing AI research think about this.
This is a really cool link and topic area. I was getting ready to post a note on intelligence amplification (IA) based on language, and was going to post it up top on the outer layer of LW.
I recall that many years ago there was some brief talk of replacing the QWERTY keyboard with a design that was statistically more efficient for the human hand, placing the most frequently typed letter combinations under the easiest movements (the analysis was probably limited to English, given the American parochialism of those days, but still, some language has to be chosen).
Because of the entrenched base of QWERTY typists, the idea didn’t get off the ground. (Thus, we are penalizing countless billions of new and future keyboard users because of the legacy habits of a comparatively small percentage of total [current and future] keyboard users.)
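For what it’s worth, the statistical case made for alternative layouts such as Dvorak can be sketched in a few lines: compare what fraction of keystrokes in a text stay on the home row under each layout. This is only a crude proxy (serious analyses also weight finger travel, hand alternation, and common digraphs), and the sample sentence here is arbitrary:

```python
# Crude layout comparison: fraction of letter keystrokes that stay on the home row.
QWERTY_HOME = set("asdfghjkl")
DVORAK_HOME = set("aoeuidhtns")

def home_row_fraction(text, home_keys):
    letters = [c for c in text.lower() if c.isalpha()]
    if not letters:
        return 0.0
    return sum(c in home_keys for c in letters) / len(letters)

sample = "the quick brown fox jumps over the lazy dog while typing english text"
print("QWERTY:", round(home_row_fraction(sample, QWERTY_HOME), 2))
print("Dvorak:", round(home_row_fraction(sample, DVORAK_HOME), 2))
# Dvorak places the vowels and several common consonants on the home row, so its
# fraction is typically much higher for English text.
```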
It got me thinking at the time, though, about whether a suitably designed human language would “open up” more of the brain’s inherent capacity for communication. Maybe a larger alphabet, a different set of noun primitives, even a modified grammar.
With respect to IA, might we get a freebie just out of redesigning (designing from scratch) a language that was more powerful: one that communicated on average what, say, English or French communicates, yet with fewer phonemes per concept?
Might we get an average 5 or 10 point equivalent IQ boost by designing a language that is both physically faster (fewer “wait states” while we are listening to a speaker) and has larger conceptual bandwidth?
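One back-of-the-envelope way to frame “conceptual bandwidth”: treat speech as a channel whose raw rate is syllables per second times bits per distinguishable syllable. The figures below are illustrative placeholders rather than measurements of any real language, but they show the two levers a designed language could pull: a larger syllable inventory and faster delivery.

```python
import math

def raw_speech_rate(syllables_per_sec, distinct_syllables):
    """Upper bound on information rate in bits/s, assuming syllables were
    equiprobable and independent (real speech is far more redundant)."""
    return syllables_per_sec * math.log2(distinct_syllables)

# Illustrative placeholder figures only, not measured values for any real language.
print("Language A:", round(raw_speech_rate(5.0, 400), 1), "bits/s upper bound")
print("Language B:", round(raw_speech_rate(7.0, 1500), 1), "bits/s upper bound")
# A designed language could push on either factor, though articulation speed and
# perceptual confusability put hard limits on both.
```

Whether a real gain is available depends on how close natural languages already sit to those articulatory and perceptual limits.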
We could also consider augmenting spoken speech with signing of some sort, to multiply the alphabet. A problem occurs for speech the listener cannot see, where we would have to revert to the new spoken language on its own (still gaining the postulated dividend from that).
However, we all know that for certain kinds of communication nonverbal cues already account for a large share of the total meaning and information conveyed. We already have to “drop back” in bandwidth every time we communicate as we are doing here (in print exclusively). In scientific and philosophical writing it doesn’t make much difference, fortunately, but still, a new language might be helpful.
Natural language, like many things that evolve on their own, is a bunch of add-ons (like the biological evolution of organisms), and the result is not necessarily the best that could be done.
Lera Boroditsky is one of the premier researchers on this topic. She has also done some excellent work comparing spatial and temporal metaphors in English and Mandarin, showing that the dominant idioms in each language affect how speakers cognitively process time.
But the question is broader: is some form of natural language required? (“Natural”, roughly meaning used by a group in day-to-day life, is the key word here.) Differences between major natural languages are for the most part relatively superficial and translatable, because their speakers are generally dealing with a similar reality.
I think that is one of my questions; i.e., is some form of natural language required? Or maybe what I’m wondering is what intelligence would look like if it weren’t constrained by language—if that’s even possible. I need to read/learn more on this topic. I find it really interesting.
A somewhat limited effort to reduce tasks to one another in this vein: http://www.academia.edu/1419272/AI-Complete_AI-Hard_or_AI-Easy_Classification_of_Problems_in_Artificial
Human-level natural language facility was, after all, the core competency by which Turing’s 1950 Test proposed to determine whether—across the board—a machine could think.
It depends on the criteria we place on “understanding.” Certainly an AI may act in a way that invites us to attribute “common sense” to it in some situations, without solving the “whole problem.” Watson would seem to be a case in point, apparently demonstrating true language understanding within a broad but still strongly circumscribed domain.
Even if we take “language understanding” in the strong sense (i.e., native fluency, including the capacity for semantic innovation, irony, etc.), there is still the question of phenomenal experience: does having such an understanding entail the experience of such understanding (self-consciousness), and are we concerned with that?
I think that “true” language understanding is indeed “AI-complete”, but in the rather trivial sense that to match a competent human speaker one needs to have most of the ancillary cognitive capacities of a competent human.
Whether we are concerned about the internal experiences of machines seems to depend largely on whether we are trying to judge the intrinsic value of the machines, or judge their consequences for human society. Both seem important.