I’ve come to the conclusion that Stevan Harnad is absolutely correct, and that machine language understanding will require an AI ROBOT, not a disembodied algorithmic system.
I am not familiar with Stevan Harnad, but this sounds counterintuitive to me (though it’s very likely that I’m misunderstanding your point). I am currently reading your words on the screen. I can’t hear you or see your body language. And yet, I can still understand what you wrote (not fully, perhaps, but enough to ask you questions about it). In our current situation, I’m not too different from a software program that is receiving the text via some input stream, so I don’t see an a priori reason why such a program could not understand the text as well as I do.
I assume telms is referring to embodied cognition, the idea that your ability to communicate with her, and achieve mutual understanding of any sort, is made possible by shared concepts and mental structures which can only arise in an “embodied” mind.
I am rather skeptical about this thesis as far as artificial minds go, and somewhat less skeptical about it if applied only to “natural” (i.e., evolved) minds, although in that case it’s almost trivial; but in any case, I don’t know enough about it to have a fully informed opinion.
Oh, ok, that makes more sense. As far as I understand, the idea behind embodied cognition is that intelligent minds must have a physical body with a rich set of sensors and effectors in order to develop; but once they’re done with their development, they can read text off of the screen instead of talking.
That definitely makes sense in the case of us biological humans, but just like you, I’m skeptical that the thesis applies to all possible minds at all times.
Some representative papers of Stevan Harnad are:
The symbol grounding problem
Other bodies, other minds: A machine incarnation of an old philosophical problem
I skimmed both papers, and found them unconvincing. Granted, I am not a philosopher, so it’s likely that I’m missing something, but still:
In the first paper, Harnad argues that rule-based expert systems cannot be used to build a Strong AI; I completely agree. He further argues that merely building a system out of neural networks does not guarantee that it will grow to be a Strong AI either; again, we’re on the same page so far. He further points out that, currently, nothing even resembling Strong AI exists anywhere. No argument there.
Harnad totally loses me, however, when he begins talking about “meaning” as though that were some separate entity to which “symbols” are attached. He keeps contrasting mere “symbol manipulation” with true understanding of “meaning”, but he never explains how we could tell one from the other.
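To make concrete what I mean by “mere symbol manipulation”, here is a toy sketch of my own (in Python; the rulebook entries are made up for illustration): it shuffles strings according to a lookup table and refers to nothing outside itself. Harnad would presumably say this has no “meaning” no matter how large the rulebook grows; my question is what observable test would tell us when a sufficiently elaborate version of it crosses over into genuine understanding.

```python
# A toy "Chinese Room": pure symbol manipulation with no grounding.
# My own illustration, not Harnad's; the rulebook entries are made up.

# The rulebook maps input symbol strings to output symbol strings.
# The program never attaches any of these symbols to anything in the world.
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",          # "How are you?" -> "I'm fine, thanks."
    "今天天气怎么样？": "今天天气很好。",   # "How's the weather today?" -> "It's nice today."
}

def chinese_room(input_symbols: str) -> str:
    """Return whatever output the rulebook dictates, or a stock reply.

    Nothing here refers to anything; it is lookup all the way down.
    """
    return RULEBOOK.get(input_symbols, "对不起，我不明白。")  # "Sorry, I don't understand."

if __name__ == "__main__":
    print(chinese_room("你好吗？"))  # prints 我很好，谢谢。
```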
In the second paper, Harnad basically falls into the same trap as Searle. He lampoons the “System Reply” by calling it things like “a predictable piece of hand-waving”, but that’s just name-calling, not an argument. Why precisely is Harnad (or Searle) so convinced that the Chinese Room as a whole does not understand Chinese? Sure, the man inside doesn’t understand Chinese, but that’s like saying that a car cannot drive uphill at 70 mph because no human driver can run uphill that fast.
The rest of his paper amounts to moving the goalposts. Harnad is basically saying, “Ok, let’s say we have an AI that can pass the TT via teletype. But that’s not enough! It also needs to pass the TTT! And if it passes that, then the TTTT! And then maybe the TTTTT!” Meanwhile, Harnad himself is reading articles off his screen which were published by other philosophers, and somehow he never requires them to pass the TTTT before he takes their writings seriously.
Don’t get me wrong, it is entirely possible that the only way to develop a Strong AI is to embody it in the physical world, and that no simulation, no matter how realistic, will suffice. I am open to being convinced, but the papers you linked are not convincing. I’m not interested in figuring out whether any given person who appears to speak English really, truly understands English; or whether this person is merely mimicking a perfect understanding of English. I’d rather listen to what such a person has to say.
Why precisely is Harnad (or Searle) so convinced that the Chinese Room as a whole does not understand Chinese?
Haven’t read the Harnad paper yet, but the reason Searle’s convinced seems obvious to me: he just doesn’t take his own scenario seriously — seriously enough to really imagine it, rather than just treating it as a piece of absurd fantasy. In other words, he does what Dennett calls “mistaking a failure of imagination for an insight into necessity”.
In The Mind’s I, Dennett and Hofstadter give the Chinese Room scenario a much more serious fictional treatment, and show in great detail what elements of it trigger Searle’s intuitions on the matter, as well as how to tweak those intuitions in various ways. Sadly but predictably, Searle has never (to my knowledge) responded to their dissection of his views.

I like the expression and can think of times when I have looked for something that expresses this all-too-common practice simply.
Having now read the second linked Harnad paper, my evaluation is similar to yours. Some more specific comments follow.
Harnad talks a lot about whether a body “has a mind”: whether a Turing Test could show if a body “has a mind”, how we know a body “has a mind”, etc.
What on earth does he mean by “mind”? Not… the same thing that most of us here at LessWrong mean by it, I should think.
He also refers to artificial intelligence as “computer models”. Either he is using “model” quite strangely as well… or he has some… very confused ideas about AI. (Actually, very confused ideas about computers in general are, in my experience, endemic among the philosopher population. It’s really rather distressing.)
Searle has shown that a mindless symbol-manipulator could pass the [Turing Test] undetected.
This has surely got to be one of the most ludicrous pronouncements I’ve ever seen a philosopher make.
people can do a lot more than just communicating verbally by teletype. They can recognize and identify and manipulate and describe real objects, events and states of affairs in the world. [italics added]
One of these things is not like the others...
Similar arguments can be made against behavioral “modularity”: It is unlikely that our chess-playing capacity constitutes an autonomous functional module, independent of our capacity to see, move, manipulate, reason, and perhaps even to speak.
Well, maybe our chess-playing module is not autonomous, but as we have seen, we can certainly build a chess-playing module that has absolutely no capacity to see, move, manipulate, or speak.
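For concreteness, here is a minimal sketch of such a module, my own illustration rather than anything from the paper, using the third-party python-chess library and an arbitrarily shallow material-counting search. It takes in a board state as symbols and puts out a move as symbols; it has no sensors or effectors of any kind.

```python
# A disembodied "chess module": no camera, no arm, no voice.
# A sketch assuming the python-chess library (pip install chess);
# the piece values and the two-ply search depth are arbitrary choices.
import chess

PIECE_VALUES = {chess.PAWN: 1, chess.KNIGHT: 3, chess.BISHOP: 3,
                chess.ROOK: 5, chess.QUEEN: 9, chess.KING: 0}

def material(board: chess.Board) -> int:
    """Material balance from the point of view of the side to move."""
    score = 0
    for piece in board.piece_map().values():
        value = PIECE_VALUES[piece.piece_type]
        score += value if piece.color == board.turn else -value
    return score

def negamax(board: chess.Board, depth: int) -> int:
    """Plain negamax search over the game tree, no pruning."""
    if depth == 0 or board.is_game_over():
        return material(board)
    best = -10**6
    for move in board.legal_moves:
        board.push(move)
        best = max(best, -negamax(board, depth - 1))
        board.pop()
    return best

def choose_move(board: chess.Board, depth: int = 2) -> chess.Move:
    """Symbols in (a board state), symbols out (a move)."""
    best_move, best_score = None, -10**6
    for move in board.legal_moves:
        board.push(move)
        score = -negamax(board, depth - 1)
        board.pop()
        if score > best_score:
            best_move, best_score = move, score
    return best_move

if __name__ == "__main__":
    print(choose_move(chess.Board()))  # e.g. a move that keeps material even
```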
Most of the rest of the paper is nonsensical, groundless handwaving, in the vein of Searle but worse. I am unimpressed.
Yeah, I think that’s the main problem with pretty much the entire Searle camp. As far as I can tell, if they do mean anything by the word “mind”, then it’s “you know, that thing that makes us different from machines”. So, we are different from AIs because we are different from AIs. It’s obvious when you put it that way!