So in order for an AGI to be recognized as intelligent, it would have to share with us a familiarity with the world. It is impossible to program this in, or to assemble such familiarity in any other way; it can be achieved only through experience.
NO! How do you know what’s impossible? How can you be sure that it’s impossible for a program, given a million books and a million hours of home videos, to have “familiarity with the world”, despite never having interacted with it? Remember that your argument about intelligence doesn’t require exactly the type of “familiarity with the world” that humans have.
You’re just giving us a flat assertion about a complex technical problem, without any argument at all!
How can you be sure that it’s impossible for a program, given a million books and a million hours of home videos, to have “familiarity with the world”, despite never having interacted with it?
This would be a familiarity with the world by my standards, though in order for an AI to understand the language in the books, it would have to be familiar with a world it shares with the authors. The trouble, in short, would be teaching it to read.
That’s not hard at all. Give it a big corpus of stuff to read, make it look for patterns and meaning. It would figure it out very quickly.
Have you seen that alien message?
Well, I can see how an AI could come up with patterns and quickly compose hypotheses about the meaning of those books... but that (and the example the article discussed) is a case of translation. Learning a first language and translating a new language into one you already know are very different problems. One way to put my point is that it can look for patterns and meaning, but only because it is capable of meaning things of its own. And it is not possible to program this into something; it has to be gained through experience. So it would be very easy to teach the AI to read English (I assume this could be taught in less time than I have words for), but I’m talking about teaching it to read, full stop.
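To make “look for patterns” a little more concrete, here is a minimal sketch, in Python, of one purely distributional kind of pattern-finding: counting which words occur near which others and comparing words by the company they keep. The toy corpus, window size, and similarity measure are all arbitrary choices made up for illustration, not a claim about how any real system works.

# A toy sketch of what "look for patterns" could mean mechanically: count which
# words occur near which other words, then compare words by the contexts they share.
# The corpus, window size, and similarity measure are arbitrary illustrative choices.
from collections import Counter, defaultdict
from math import sqrt

corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "the cat chased the dog",
]

window = 2  # how many words to either side count as "context"
cooccur = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for i, w in enumerate(words):
        for j in range(max(0, i - window), min(len(words), i + window + 1)):
            if i != j:
                cooccur[w][words[j]] += 1

def similarity(a, b):
    # Cosine similarity of co-occurrence vectors: high when two words keep similar company.
    dot = sum(cooccur[a][w] * cooccur[b][w] for w in cooccur[a])
    norm_a = sqrt(sum(v * v for v in cooccur[a].values()))
    norm_b = sqrt(sum(v * v for v in cooccur[b].values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

print(similarity("cat", "dog"))  # higher: "cat" and "dog" appear in similar contexts
print(similarity("cat", "mat"))  # lower: their distributional roles differ

Whether statistics of this sort could ever add up to meaning something is, of course, exactly what is in dispute here.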
How do you suppose children learn language, if recognizing patterns and meanings does not qualify? What non-pattern-recognizing, non-meaning-assigning experiences in particular are indispensable?
I suppose I should say first that I don’t think we have any good idea of how someone first learns a language. Suppose a baby’s first word is ‘mama’. Is this yet a word? Can the baby mean anything by it? Probably not. When do we reach the point where a baby becomes capable of meaning? I expect there is no sharp line. At some point, we simply recognize someone as a language user.
That said, I think ‘recognizing patterns and meanings’ may well be a fine description of language-learning. I’m not saying that it’s incorrect, just that it’s not programmable. I’m saying that this kind of recognition requires a familiarity with a shared world.
One way to put my point is that it can look for patterns and meaning, but only because it is capable of meaning things of its own. And it is not possible to program this into something; it has to be gained through experience.
Again, why? We had a program capable of understanding simple sentences forty years ago, SHRDLU:
Person: Pick up a big red block.
Computer: OK.
Person: Grasp the pyramid.
Computer: I don’t understand which pyramid you mean.
Person (changing their mind): Find a block which is taller than the one you are holding and put it into the box.
Computer: By “it”, I assume you mean the block which is taller than the one I am holding.
Computer: OK.
Person: What does the box contain?
Computer: The blue pyramid and the blue block.
Person: What is the pyramid supported by?
Computer: The box.
Person: How many blocks are not in the box?
Computer: Four of them.
Person: Is at least one of them narrower than the one which I told you to pick up?
Computer: Yes, the red cube.
etc.
I don’t see why it would be impossible to make something much better by “just programming it in”. Is there some kind of reading level that no algorithm can surpass if it doesn’t learn by experience?
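For a sense of what “just programming it in” looks like in miniature, here is a hedged sketch of a hand-built blocks world with a few pattern-matched commands. It is not Winograd’s SHRDLU, which was far richer; the objects, properties, and replies below are made up for illustration only.

# A minimal sketch in the spirit of "just programming it in": a hand-built symbolic
# blocks world plus a few pattern-matched commands. This is not Winograd's SHRDLU
# (which was far richer); the objects and properties are invented for illustration.

world = [
    {"name": "b1", "type": "block",   "color": "red",  "size": "big",   "in_box": False},
    {"name": "b2", "type": "block",   "color": "blue", "size": "small", "in_box": True},
    {"name": "p1", "type": "pyramid", "color": "blue", "size": "small", "in_box": True},
]
holding = None  # what the simulated hand is currently grasping

def find(words):
    # Return the objects whose colour, size, or type matches every descriptive word given.
    properties = {"red", "blue", "big", "small", "block", "pyramid"}
    wanted = [w for w in words if w in properties]
    return [o for o in world
            if all(w in (o["color"], o["size"], o["type"]) for w in wanted)]

def respond(sentence):
    global holding
    words = sentence.lower().rstrip("?.!").split()
    if words[:2] == ["pick", "up"] or words[0] == "grasp":
        matches = find(words)
        if len(matches) == 1:
            holding = matches[0]
            return "OK."
        return "I don't understand which object you mean."
    if words == ["what", "does", "the", "box", "contain"]:
        inside = [o["color"] + " " + o["type"] for o in world if o["in_box"]]
        return "The " + " and the ".join(inside) + "."
    return "I don't understand."

print(respond("Pick up a big red block"))     # OK.
print(respond("Grasp the pyramid"))           # only one pyramid in this toy world, so: OK.
print(respond("What does the box contain?"))  # The blue block and the blue pyramid.

Whether anything of this hand-coded kind, however much more elaborate, could ever amount to understanding is what the rest of the exchange turns on.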
I guess this comes down to the very complicated question of what ‘understanding a language’ amounts to. I take it we can agree that SHRDLU wasn’t thinking in any sense comparable to a human being or an AGI (since we agree that SHRDLU wasn’t an AGI). But also notice that if your example is one of language-learning, you’ve picked a case where the thing doing the learning already knows (some substantial part of) a language.
And lastly, I wouldn’t consider this a counterexample to the claim that learning a language requires a familiarity with a shared world. The machine you describe is obviously making reference to a shared world in its conversation.