In his paper, Searle puts forward a number of arguments.
Early in his argumentation, referring to the Chinese room, Searle makes this argument (which I ask you not to conflate carelessly with his later arguments):
it seems to me quite obvious in the example that I do not understand a word of the Chinese stories. I have inputs and outputs that are indistinguishable from those of the native Chinese speaker, and I can have any formal program you like, but I still understand nothing. For the same reasons, Schank’s computer understands nothing of any stories, whether in Chinese, English, or whatever, since in the Chinese case the computer is me, and in cases where the computer is not me, the computer has nothing more than I have in the case where I understand nothing
Later, he writes:
the whole point of the original example was to argue that such symbol manipulation by itself couldn’t be sufficient for understanding Chinese.
I will frame this argument in a form that can be analyzed:
1) P (the Chinese room) is an X (a program capable of passing the Turing test in Chinese);
2) Searle can be any X without understanding Chinese (as exemplified by Searle being the Chinese room while not understanding Chinese, which can be demonstrated for certain programs);
thus 3) no X understands Chinese.
Searle is arguing that “no program understands Chinese” (I stress this in order to reply to Said). The argument “P is an X, P is not B, therefore no X is B” is an invalid syllogism. Nevertheless, Searle believes that in this case “P is not B” implies (or at least strongly suggests) “no X is B”.
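To make the invalidity explicit, the schema can be written in first-order notation (my formalization, not Searle’s):

$$X(P) \land \lnot B(P) \;\nvdash\; \forall x \, \big( X(x) \to \lnot B(x) \big)$$

A single counterexample shows why: nothing in the premises rules out some other program $Q$ with $X(Q) \land B(Q)$, so both premises can hold while the universal conclusion fails.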
Yes, Searle’s intuition is known to be problematic and can be argued against accordingly.
My point, however, is that somewhere in the space of X there is a program P that is quite unintuitive. I am suggesting a positive example of “P possibly understanding Chinese”, which could cut the debate short. Don’t you see that a positive answer to the question “can a program understand?” may also bring some insight into Searle’s argument (such as developing it into a “Chinese room test” to assess whether a given program can indeed understand)? Don’t you want to look into my suggested program P (semiotic AI)?
At the beginning of my post I made this very clear:
Humans learn Chinese all the time; yet it is uncommon for them to learn Chinese by running a program
Searle can be any X?? WTF? That’s a bit confusingly written.
The intuition Searle is pumping is that, since he, as a component of the total system, doesn’t understand Chinese, it seems counterintuitive to conclude that the whole system understands Chinese. When Searle says he is the system, he is pointing to the fact that he is doing all the actual interpretation of instructions, and it seems weird to think that the whole system has some extra experiences that let it understand Chinese even though he does not. When Searle uses the word “understand” he does not mean “demonstrates the appropriate input-output behavior”; he is presuming the system has that behavior and asking about the system’s experiences.
Searle’s view, from his philosophy of language, is that our understanding and meaning are grounded in our experiences, and what makes a person count as understanding Chinese (as opposed to merely dumbly parroting it) is that they have certain kinds of experiences while manipulating the words. When Searle asserts that the room doesn’t understand Chinese, he is asserting that it doesn’t have the requisite experiences (because it isn’t having any experiences at all) that someone would need to have in order to count as understanding Chinese.
Look, I’ve listened to Searle explain this himself multiple times during the two years of graduate seminars on philosophy of mind I took with him, and I have discussed this very argument with him at some length. I’m sorry, but you are interpreting him incorrectly.
I know I’m not making the confusion you suggest because I’ve personally talked with him at some length about his argument.